Digital Apartheid in Gaza: Unjust Content Moderation at the Request of Israel’s Cyber Unit

14 hours 31 minutes ago

This is part one of an ongoing series. 

Government involvement in content moderation raises serious human rights concerns in every context. Since October 7, social media platforms have faced criticism for unjustified takedowns of pro-Palestinian content—sometimes at the request of the Israeli government—and for a simultaneous failure to remove hate speech directed at Palestinians. More specifically, social media platforms have worked with the Israeli Cyber Unit—a government office set up to issue takedown requests to platforms—to remove content considered incitement to violence and terrorism, as well as any promotion of groups widely designated as terrorists. 

Many of these relationships predate the current conflict, but they have proliferated since it began. Between October 7 and November 14, Israeli authorities sent a total of 9,500 takedown requests to social media platforms; roughly 60 percent went to Meta, which reportedly complied with 94 percent of them. 

This is not new. The Cyber Unit has long boasted that its takedown requests result in high compliance rates of up to 90 percent across all social media platforms. Its requests have unfairly targeted Palestinian rights activists, news organizations, and civil society; one such incident prompted Meta’s Oversight Board to recommend that the company “Formalize a transparent process on how it receives and responds to all government requests for content removal, and ensure that they are included in transparency reporting.”

When a platform moderates content at the behest of government agencies, it can become inherently biased in favor of that government’s favored positions. That cooperation gives government agencies outsized influence over content moderation systems, which they can use to advance their own political goals: to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such channels are established, it is easy for a government to use them to coerce and pressure platforms into moderating speech they would not otherwise have chosen to moderate.

Alongside government takedown requests, free expression in Gaza has been further restricted by platforms unjustly removing pro-Palestinian content and accounts—interfering with the dissemination of news and silencing voices expressing concern for Palestinians. At the same time, X has been criticized for failing to remove hate speech and has disabled features that allow users to report certain types of misinformation. TikTok has implemented lackluster strategies to monitor the nature of content on its services. Meta has admitted to suppressing comments containing the Palestinian flag when they appear in certain “offensive contexts” that violate its rules.

To combat these consequential harms to free expression in Gaza, EFF urges platforms to follow the Santa Clara Principles on Transparency and Accountability in Content Moderation and undertake the following actions:

  1. Bring local and regional stakeholders into the policymaking process to provide greater cultural competence—knowledge and understanding of local language, culture and contexts—throughout the content moderation system.
  2. Urgently recognize the particular risks to users’ rights that result from state involvement in content moderation processes.
  3. Ensure that state actors do not exploit or manipulate companies’ content moderation systems to censor dissenters, political opponents, social movements, or any person.
  4. Notify users when, how, and why their content has been actioned, and give them the opportunity to appeal.
Everyone Must Have a Seat at the Table

Given the significant evidence of ongoing human rights violations against Palestinians, both before and since October 7, U.S. tech companies have a weighty ethical obligation to verify, to themselves, their employees, the American public, and to Palestinians, that they are not directly contributing to these abuses. Palestinians must have a seat at the table, just as Israelis do, when it comes to moderating speech in the region, most importantly their own. Anything less risks contributing to a form of digital apartheid.

An Ongoing Issue

This isn’t the first time EFF has raised concerns about censorship in Palestine, including in multiple international forums. Most recently, we wrote to the UN Special Rapporteur on Freedom of Expression expressing concern about the disproportionate impact of restrictions on expression imposed by governments and companies. In May, we submitted comments to the Oversight Board urging that moderation decisions about the rallying cry “From the river to the sea” be made on an individualized basis rather than through a blanket ban. Along with international and regional allies, EFF also asked Meta to overhaul the content moderation practices and policies that restrict content about Palestine, and issued a set of recommendations for the company to implement. 

And back in April 2023, EFF and ECNL submitted comments to the Oversight Board addressing Meta’s over-moderation of the word ‘shaheed’ and other Arabic-language content, particularly through the use of automated content moderation tools. In its response, the Oversight Board found that Meta’s approach disproportionately restricts free expression and is unnecessary, and recommended that the company end its blanket ban on content using the word ‘shaheed’.

Paige Collings

Electronic Frontier Foundation to Present Annual EFF Awards to Carolina Botero, Connecting Humanity, and 404 Media

1 day 15 hours ago
2024 Awards Will Be Presented in a Live Ceremony Thursday, Sept. 12 in San Francisco

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Carolina Botero, Connecting Humanity, and 404 Media will receive the 2024 EFF Awards for their vital work in ensuring that technology supports freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law. 

The EFF Awards ceremony will start at 6:30 pm PT on Thursday, Sept. 12, 2024 at the Golden Gate Club, 135 Fisher Loop in San Francisco’s Presidio. Guests can register at https://www.eff.org/event/eff-awards-2024. The ceremony will be livestreamed and recorded. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“Maintaining internet access in a conflict zone, conducting fearless investigative reporting on how tech impacts our lives, and bringing the fight for digital rights and social justice to significant portions of Latin America are all ways of ensuring technology advances us all,” EFF Executive Director Cindy Cohn said. “This year’s EFF Award winners embody the internet’s highest ideals, building a better-connected and better-informed world that brings freedom, justice, and innovation for everyone. We hope that by recognizing them in this small way, we can shine a spotlight that helps them continue and even expand their important work.” 

Carolina Botero: Fostering Digital Human Rights in Latin America 

Carolina Botero is a researcher, lecturer, writer, and consultant who is among the foremost leaders in the fight for digital rights in Latin America. In more than a decade as executive director of the Colombia-based Karisma Foundation — founded in 2003 to ensure that digital technologies protect and advance fundamental human rights and promote social justice — she transformed the organization into an outspoken voice fostering freedom of expression, privacy, access to knowledge, justice, and self-determination in our digital world, with regional and international impact. She left that position this year, opening the door for a new generation while leaving a strong and inspiring legacy for those in Latin America and beyond who advocate for a digital world that enhances rights and empowers the powerless. Botero holds a master’s degree in international law and cooperation from Belgium’s Vrije Universiteit Brussel and a master’s degree in commercial and contracting law from Spain’s Universitat Autònoma de Barcelona. She frequently authors op-eds for Colombia’s El Espectador and La Silla Vacía, and serves on the advisory board of The Regional Center for Studies for the Development of the Information Society (Cetic.br), monitoring the adoption of information and communication technologies in Brazil. She previously served on the board of Creative Commons and as a member of the UNESCO Advisory Committee on Open Science.  

Connecting Humanity: Championing Internet Access in Gaza 

Connecting Humanity is a Cairo-based nonprofit organization that helps Palestinians in Gaza regain access to the internet – a crucial avenue for free speech and the free press. Founded in late 2023 by Egyptian journalist, writer, podcaster, and activist Mirna El Helbawi, Connecting Humanity collects and distributes embedded SIMs (eSIMs), a software version of the physical chip used to connect a phone to cellular networks and the internet. Connecting Humanity has collected hundreds of thousands of eSIMs from around the world and distributed them to people in Gaza, providing a lifeline for many caught up in Israel’s war on Hamas. People in crisis zones rely upon the free flow of information to survive, and restoring internet access in places where other communications infrastructure has been destroyed helps with dissemination of life-saving information and distribution of humanitarian aid, ensures that everyone’s stories can be heard, and enables continued educational and cultural contact. El Helbawi previously worked as an editor at 7 Ayam Magazine and as a radio host at Egypt’s NRJ Group; she was shortlisted for the Arab Journalism Award in 2016, and she created the podcast Helbing.

404 Media: Fearless Journalism 

As the media landscape in general and tech media in particular keep shrinking, 404 Media — launched in August 2023 — has tirelessly forged ahead with incisive investigative reports, deep-dive features, blogs, and scoops about topics such as hacking, cybersecurity, cybercrime, sex, artificial intelligence, consumer rights, government and law enforcement surveillance, privacy, and the democratization of the internet. Co-founders Jason Koebler, Sam Cole, Joseph Cox, and Emanuel Maiberg all worked together at Vice Media’s Motherboard, but after that site’s parent company filed for bankruptcy in May 2023, the four journalists resolved to go out on their own and build what Maiberg has called “very much a website by humans, for humans about technology. It’s not about the business of technology — it’s about how it impacts real people in the real world.” Among many examples, 404 Media has uncovered a privacy issue in the New York subway system that let stalkers track people’s movements, causing the MTA to shut down the feature; investigated a platform being used to generate non-consensual pornography with AI, causing the platform to make changes limiting abuse; and reported on dangerously inaccurate AI-generated books that Amazon then removed from sale.

 To register for this event: https://www.eff.org/event/eff-awards-2024 

For past honorees: https://www.eff.org/awards/past-winners 

 

Josh Richman

Briefing: Negotiating States Must Address Human Rights Risks in the Proposed UN Surveillance Treaty

2 days 3 hours ago

At a virtual briefing today, experts from the Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media outlined the human rights risks posed by the proposed UN Cybercrime Treaty. They explained that the draft convention, instead of addressing core cybercrimes, is an extensive surveillance treaty that imposes intrusive domestic spying measures with little to no safeguards protecting basic rights. UN Member States are scheduled to hold a final round of negotiations about the treaty's text starting July 29.

If left as is, the treaty risks becoming a powerful tool for countries with poor human rights records, one that can be used against journalists, dissenters, and everyday people. Watch the briefing here:

 

[Video: https://www.youtube.com/embed/SBkjj2tkcAY]

Karen Gullo

Journalists Sue Massachusetts TV Corporation Over Bogus YouTube Takedown Demands

2 days 5 hours ago
Posting Video Clips of Government Meetings Is Fair Use That Doesn’t Violate the DMCA, EFF’s Clients Argue

BOSTON—A citizen journalists’ group represented by the Electronic Frontier Foundation (EFF) filed a federal lawsuit today against a Massachusetts community-access television company for falsely convincing YouTube to take down video clips of city government meetings.

The lawsuit was filed in the U.S. District Court for Massachusetts by Channel 781, an association of citizen journalists founded in 2021 to report on Waltham, MA, municipal affairs via its YouTube channel. The Waltham Community Access Corp.’s misrepresentation of copyright claims under the Digital Millennium Copyright Act (DMCA) led YouTube to temporarily deactivate Channel 781, making its work disappear from the internet last September just five days before an important municipal election, the suit says. 

“WCAC knew it had no right to stop people from using video recordings of public meetings, but asked YouTube to shut us down anyway,” Channel 781 cofounder Josh Kastorf said. “Democracy relies on an informed public, and there must be consequences for anyone who abuses the DMCA to silence journalists and cut off people’s access to government.” 

Channel 781 is a nonprofit, volunteer-run effort, and all of its content is available for free. Its posts include videos of its members reporting on news affecting the city, editorial statements, discussions in a talk-show format, and interviews. It also posts short video excerpts of meetings of the Waltham city council and other local government bodies. 

Waltham Community Access Corp. (WCAC) operates two cable television channels:  WCAC-TV is a Community Access station that provides programming geared towards the interests of local residents, businesses, and organizations, and MAC-TV is a Government Access station that provides coverage of municipal meetings, events, and special government-related programming. 

Some city meeting video clips that Channel 781 posted to YouTube were short excerpts from videos recorded by WCAC and first posted to WCAC’s website. Channel 781 posted them on YouTube to highlight newsworthy statements by city officials, to provoke discussion and debate, and to make the information more accessible to the public, including to people with disabilities. 

The DMCA notice and takedown process lets copyright holders ask websites to take down user-uploaded material that infringes their copyrights. Although Kastorf had explained to WCAC’s executive director that Channel 781’s use of the government meeting clips was a fair use under copyright law, WCAC sent three copyright infringement notices to YouTube referencing 15 specific Channel 781 videos, leading YouTube to deactivate the account and render all of its content inaccessible. YouTube didn’t restore access to the videos until two months later, after a lengthy intervention by EFF. 

The lawsuit—which seeks damages and injunctive relief—says WCAC knew, should have known, or failed to consider that the government meeting clips were a fair use of copyrighted material, and so it acted in bad faith when it sent the infringement notices to YouTube. 

“Nobody can use copyright to limit access to videos of public meetings, and those who make bogus claims in order to stifle critical reporting must be held accountable,” said EFF Intellectual Property Litigation Director Mitch Stoltz. “Phony copyright claims must never subvert the public’s right to know, and to report on, what government is doing.” 

For the complaint: https://www.eff.org/document/07-24-2024-channel-781-news-v-waltham-community-access-corporation-complaint

For more on the DMCA: https://www.eff.org/issues/dmca  

For EFF’s Takedown Hall of Shame: https://www.eff.org/takedowns

Contact: Mitch Stoltz, IP Litigation Director, mitch@eff.org
Josh Richman

Supreme Court Dodges Key Question in Murthy v. Missouri and Dismisses Case for Failing to Connect The Government’s Communication to Specific Platform Moderation

4 days 9 hours ago

We don’t know a lot more about when government jawboning of social media companies—that is, attempting to pressure them to censor users’ speech—violates the First Amendment; but we do know that lawsuits based on such actions will be hard to win. In Murthy v. Missouri, the U.S. Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases because none of the plaintiffs could show that any of the government statements they complained of were likely the cause of any specific actions taken by the social media platforms against them, or that such actions would happen again.

As we have written before, the First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication to an intermediary about users’ speech is unconstitutional; indeed, some are beneficial—for example, platforms often reach out to government actors they perceive as authoritative sources of information. And the distinction between proper and improper speech is often obscure. 

While the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

So, when do the government’s efforts to persuade one to censor another become coercion? This was a hard question prior to Murthy. And unfortunately, it remains so, though a different jawboning case also recently decided provides some clarity. 

Rather than provide guidance to courts about the line between permissible and impermissible government communications with platforms about publishing users’ speech, the Supreme Court dismissed Murthy, holding that every plaintiff lacked “standing” to bring the lawsuit. That is, none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. So, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

The through line between this case and Moody v. Netchoice, decided by the Supreme Court a few weeks later, is that social media platforms have a First Amendment right to moderate the speech any user sees, and, because they exercise that right routinely, a plaintiff who believes they have been jawboned must prove that it was because of the government’s dictate, not the platform’s own decision. 

Plaintiffs Lack Standing to Bring Jawboning Claims 

Article III of the U.S. Constitution limits federal courts to only considering “cases and controversies.” This limitation requires that any plaintiff have suffered an injury that was traceable to the defendants and which the court has the power to fix. The standing doctrine can be a significant barrier to litigants without full knowledge of the facts and circumstances surrounding their injuries, and EFF has often complained that courts require plaintiffs to prove their cases on the merits at very early stages of litigation, before the discovery process. Indeed, EFF’s landmark mass surveillance litigation, Jewel v. NSA, was ultimately dismissed because the plaintiffs lacked standing to sue.

The main fault in the Murthy plaintiffs’ case was weak evidence

The standing question here differs from cases such as Jewel where courts have denied plaintiffs discovery because they couldn’t demonstrate their standing without an opportunity to gather evidence of the suspected wrongdoing. The Murthy plaintiffs had an opportunity to gather extensive evidence of suspected wrongdoing—indeed, the Supreme Court noted that the case’s factual record exceeds 26,000 pages. And the Supreme Court considered this record in its standing analysis.   

While the Supreme Court did not provide guidance on what constitutes impermissible government coercion of social media platforms in Murthy, its ruling does tell us what type of cause-and-effect a plaintiff must prove to win a jawboning case. 

A plaintiff will have to prove that the negative treatment of their speech was attributable to the government, not the independent action of the platform. This accounts for basic truths of content moderation, which we emphasized in our amicus brief: that platforms moderate all the time, often based on their community guidelines, but also often ad hoc, and informed by input from users and a variety of outside experts. 

When, as in this case, plaintiffs ask a court to stop the government from ongoing or future coercion of a platform to remove, deamplify, or otherwise obscure the plaintiffs’ speech—rather than, for example, compensate for harm caused by past coercion—those plaintiffs must show a real and immediate threat that they will be harmed again. Past incidents of government jawboning are relevant only to predict a repeat of that behavior. Further, plaintiffs seeking to stop ongoing or future government coercion must show that the platform will change its policies and practices back to their pre-coerced state should the government be ordered to stop. 

Fortunately, plaintiffs will only have to prove that a particular government actor “pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.” Plaintiffs do not need to show that the government targeted their posts specifically, just the general topic of their posts, and that their posts were negatively moderated as a result.  

The main fault in the Murthy plaintiffs’ case was weak evidence that the government actually caused a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ social media posts or any particular social media post at all. Indeed, the evidence that the content moderation decisions were the platforms’ independent decisions was stronger: the platforms had all moderated similar content for years and strengthened their content moderation standards before the government got involved; they spoke not just with the government but with other outside experts; and they had independent, non-governmental incentives to moderate user speech as they did. 

The Murthy plaintiffs also failed to show that the government jawboning they complained of, much of it focusing on COVID and vaccine posts, was continuing. As the Court noted, the government appears to have ceased those efforts. It was not enough that the plaintiffs continue to suffer ill effects from that past behavior. 

And lastly, the plaintiffs could not show that the order they sought from the courts preventing the government from further jawboning would actually cure their injuries, since the platforms may still exercise independent judgment to negatively moderate the plaintiffs’ posts even without governmental involvement. 

 The Court Narrows the Right to Listen 

The right to listen and receive information is an important First Amendment right that has typically allowed those who are denied access to censored speech to sue to regain access. EFF has fervently supported this right. 

But the Supreme Court’s opinion in Murthy v. Missouri narrows this right. The Court explains that only those with a “concrete, specific connection to the speaker” have standing to sue to challenge such censorship. At a minimum, it appears, one who wants to sue must point to specific instances of censorship that have caused them harm; it is not enough to claim an interest in a person’s speech generally or claim harm from being denied “unfettered access to social media.” While this holding rightfully applies to the States who had sought to vindicate the audience interests of their entire populaces, it is more problematic when applied to individual plaintiffs. Going forward EFF will advocate for a narrow reading of this holding. 

 As we pointed out in our amicus briefs and blog posts, this case was always a difficult one for litigating the important question of defining illegal jawboning because it was based more on a sprawling, multi-agency conspiracy theory than on specific takedown demands resulting in actual takedowns. The Supreme Court seems to have seen it the same way. 

But the Supreme Court’s Other Jawboning Case Does Help Clarify Coercion  

Fortunately, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo.  

In NRA v. Vullo, the Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan 

NRA v. Vullo is a lawsuit by the National Rifle Association alleging that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. Unlike Murthy, the case came to the Supreme Court on a motion to dismiss before any discovery had been conducted and when courts are required to accept all of the plaintiffs’ factual allegations as true. 

The Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan, a 1963 case in which the Supreme Court established that governments violate the First Amendment by coercing one person to censor another person’s speech over which they exercise control, what the Supreme Court called “indirect censorship.”   

In Vullo, the Supreme Court endorsed a multi-factored test that many of the lower courts had adopted, as a “useful, though nonexhaustive, guide” to answering the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: (1) word choice and tone, (2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), (3) whether the speech was perceived as a threat, and (4) whether the speech refers to adverse consequences. The Supreme Court explained that the second and third factors are related—the more authority an official wields over someone the more likely they are to perceive their speech as a threat, and the less likely they are to disregard a directive from that official. And the Supreme Court made clear that coercion may arise from either threats or inducements.  

In our amicus brief in Murthy, we had urged the Court to make clear that an official’s intent to coerce was also highly relevant. The Supreme Court did not directly state this, unfortunately. But it did several times refer to the NRA as having properly alleged that the “coercive threats were aimed at punishing or suppressing disfavored speech.”  

At EFF, we will continue to look for cases that present good opportunities to bring jawboning claims before the courts and to bring additional clarity to this important doctrine. 

 

David Greene

Why Privacy Badger Opts You Out of Google’s “Privacy Sandbox”

4 days 13 hours ago

Update July 22, 2024: Shortly after we published this post, Google announced it's no longer deprecating third-party cookies in Chrome. We've updated this blog to note the news.

The latest update of Privacy Badger opts users out of ad tracking through Google’s “Privacy Sandbox.” 

Privacy Sandbox is Google’s way of letting advertisers keep targeting ads based on your online behavior without using third-party cookies. Third-party cookies were once the most common form of online tracking technology, but major browsers, like Safari and Firefox, started blocking them several years ago. After pledging to eventually do the same for Chrome in 2020, and after several delays, today Google backtracked on its privacy promise, announcing that third-party cookies are here to stay. Notably, Google Chrome continues to lag behind other browsers in terms of default protections against online tracking.

Privacy Sandbox might be less invasive than third-party cookies, but that doesn’t mean it’s good for your privacy. Instead of eliminating online tracking, Privacy Sandbox simply shifts control of online tracking from third-party trackers to Google. With Privacy Sandbox, tracking will be done by your Chrome browser itself, which shares insights gleaned from your browsing habits with different websites and advertisers. Despite sounding like a feature that protects your privacy, Privacy Sandbox ultimately protects Google's advertising business.
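To make the mechanism concrete, here is a minimal sketch of roughly how a website, not the user, can ask Chrome for a visitor’s inferred interests through the Topics API when the “Ad topics” setting is on. It is an illustration rather than production advertising code; the feature-detection pattern and logging are our own, and the exact shape of the returned topic objects is simplified.

```typescript
// Illustrative sketch: how a website queries Chrome's Topics API.
// document.browsingTopics() is only available when "Ad topics" is enabled;
// the returned array describes interest categories Chrome inferred from the
// user's recent browsing. Response fields are simplified/omitted here.
async function fetchAdTopics(): Promise<void> {
  if (!("browsingTopics" in document)) {
    console.log("Topics API unavailable (e.g. disabled by the user or an extension).");
    return;
  }
  // Cast needed because standard DOM typings do not describe this Chrome-only API.
  const topics = await (document as any).browsingTopics();
  console.log("Interest topics this site can see:", topics);
}

fetchAdTopics();
```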

How did Google get users to go along with this? In 2023, Chrome users received a pop-up about “Enhanced ad privacy in Chrome.” In the U.S., if you clicked the “Got it” button to make the pop-up go away, Privacy Sandbox remained enabled for you by default. Users could opt out by changing three settings in Chrome. But first, they had to realize that "Enhanced ad privacy" actually enabled a new form of ad tracking.

You shouldn't have to read between the lines of Google’s privacy-washing language to protect your privacy. Privacy Badger will do this for you!

Three Privacy Sandbox Features That Privacy Badger Disables For You

If you use Google Chrome, Privacy Badger will update three different settings that constitute Privacy Sandbox:

  • Ad topics: This setting allows Google to generate a list of topics you’re interested in based on the websites you visit. Any site you visit can ask Chrome what topics you’re supposedly into, then display an ad accordingly. Some of the potential topics–like “Student Loans & College Financing”, “Credit Reporting & Monitoring”, and “Unwanted Body & Facial Hair Removal”–could serve as proxies for sensitive financial or health information, potentially enabling predatory ad targeting. In an attempt to prevent advertisers from identifying you, your topics roll over each week and Chrome includes a random topic 5% of the time. However, researchers found that Privacy Sandbox topics could be used to re-identify users across websites. Using 1,207 people’s real browsing histories, researchers showed that as few as three observations of a person’s “ad topics” were enough to identify 60% of users across different websites.

  • Site-suggested ads: This setting enables "remarketing" or "retargeting," which is the reason you’re constantly seeing ads for things you just shopped for online. It works by allowing any site you visit to give information (like “this person loves sofas”) to your Chrome browser. Then when you visit a site that runs ads, Chrome uses that information to help the site display a sofa ad without the site learning that you love sofas. However, researchers demonstrated this feature of Privacy Sandbox could be exploited to re-identify and track users across websites, partially infer a user’s browsing history, and manipulate the ads that other sites show a user.

  • Ad measurement: This setting allows advertisers to track ad performance by storing data in your browser that's then shared with the advertised sites. For example, after you see an ad for shoes, whenever you visit that shoe site it’ll get information about the time of day the ad was shown and where the ad was displayed. Unfortunately, Google allows advertisers to include a unique ID with this data. So if you interact with multiple ads from the same advertiser around the web, this ID can help an advertiser build a profile of your browsing habits.
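Chromium exposes each of these three toggles to extensions that hold the “privacy” permission, which is how an extension like Privacy Badger can switch them off for you. The sketch below is a minimal illustration of that opt-out, not Privacy Badger’s actual source code; it assumes the chrome.privacy.websites settings (topicsEnabled, fledgeEnabled, adMeasurementEnabled) that Chrome documents for extensions, though names and availability can vary by Chrome version.

```typescript
// Minimal sketch of a Privacy Sandbox opt-out from a browser extension.
// Requires "privacy" in the extension's manifest permissions; this is an
// illustration, not Privacy Badger's actual implementation.
async function disablePrivacySandbox(): Promise<void> {
  const sandboxSettings = [
    chrome.privacy.websites.topicsEnabled,        // "Ad topics"
    chrome.privacy.websites.fledgeEnabled,        // "Site-suggested ads"
    chrome.privacy.websites.adMeasurementEnabled, // "Ad measurement"
  ];

  for (const setting of sandboxSettings) {
    if (!setting) {
      continue; // older Chrome versions don't expose this setting
    }
    const { levelOfControl } = await setting.get({});
    // Only write the setting if this extension is allowed to control it.
    if (levelOfControl === "controllable_by_this_extension" ||
        levelOfControl === "controlled_by_this_extension") {
      await setting.set({ value: false });
    }
  }
}

disablePrivacySandbox();
```

Users who prefer to do this by hand can flip the same three switches in Chrome’s ad privacy settings, as described above.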

Why Privacy Badger Opts Users Out of Privacy Sandbox

Privacy Badger is committed to protecting you from online tracking. Despite being billed as a privacy feature, Privacy Sandbox protects Google’s bottom line at the expense of your privacy. Nearly 80% of Google’s revenue comes from online advertising. By building ad tracking into your Chrome browser, Privacy Sandbox gives Google even more control of the advertising ecosystem than it already has. Yet again, Google is rewriting the rules for the internet in a way that benefits itself first.

Researchers and regulators have already found that Privacy Sandbox “fails to meet its own privacy goals.” In a draft report leaked to the Wall Street Journal, the UK’s privacy regulator noted that Privacy Sandbox could be exploited to identify anonymous users and that companies will likely use it to continue tracking users across sites. Likewise, after researchers told Google about 12 attacks they conducted on a key feature of Privacy Sandbox prior to its public release, Google forged ahead and released the feature after mitigating only one of those attacks.

Privacy Sandbox offers some privacy improvements over third-party cookies. But it reinforces Google’s commitment to behavioral advertising, something we’ve been advocating against for years. Behavioral advertising incentivizes online actors to collect as much of our information as possible. This can lead to a range of harms, like bad actors buying your sensitive information and predatory ads targeting vulnerable populations.

Your browser shouldn’t put advertisers' interests above yours. As Google turns your browser into an advertising agent, Privacy Badger will put your privacy first.

What You Can Do Now

If you don’t already have Privacy Badger, install it now to automatically opt out of Privacy Sandbox and the broader ecosystem of online tracking. Already have Privacy Badger? You’re all set! And of course, don’t hesitate to spread the word to friends and family you want to protect from invasive online tracking. With your help, Privacy Badger will keep fighting to end online tracking and build a safer internet for all.

Lena Cohen

Media Briefing: EFF, Partners Warn UN Member States Are Poised to Approve Dangerous International Surveillance Treaty

5 days ago
Countries That Believe in Rule of Law Must Push Back on Draft That Expands Spying Powers, Benefiting Authoritarian Regimes

SAN FRANCISCO—On Wednesday, July 24, at 11:00 am Eastern Time (8:00 am Pacific Time, 5:00 pm CET), experts from Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media will brief reporters about the imminent adoption of a global surveillance treaty that threatens human rights around the world, potentially paving the way for a new era of transnational repression.

The virtual briefing will update members of the media ahead of the United Nations’ concluding session of treaty negotiations, scheduled for July 29-August 9 in New York, to possibly finalize and adopt what started out as a treaty to combat cybercrime.

Despite repeated warnings and recommendations by human rights organizations, journalism and industry groups, cybersecurity experts, and digital rights defenders to add human rights safeguards and rein in the treaty’s broad scope and expansive surveillance powers, UN Member States are expected to adopt the Russian-backed, deeply flawed draft.

The experts will discuss the draft treaty in terms of shifts in geopolitical power, abuse of cybercrime laws, and challenges posed by the rising influence of Russia and China. A question-and-answer session will follow speaker presentations.  

WHAT:
Virtual media briefing on UN surveillance treaty

HOW:
To join the news conference remotely, please register from the following link to receive the webinar ID and password:
https://eff.zoom.us/meeting/register/tZwkd-GsrzoiH9Jt3gsl2CJ55Xv0hBDguxW5

SPEAKERS:
Tirana Hassan, Executive Director, Human Rights Watch
Paloma Lara-Castro, Public Policy Coordinator, Derechos Digitales
Khadija Patel, Journalist in Residence, International Fund for Public Interest Media
Katitza Rodriguez, Policy Director for Global Policy, EFF
Moderator: Raman Jit Singh Chima, Global Cybersecurity Lead and Senior International Counsel, Access Now

WHEN:
Wednesday, July 24, at 11:00 am Eastern Time, 8:00 am Pacific Time, 5:00 pm CET

For EFF’s submissions and Coalition Letters to UN Ad Hoc Committee overseeing treaty negotiations:
https://www.eff.org/pages/submissions#main-content

Contact: Karen Gullo, Senior Writer for Free Speech and Privacy, karen@eff.org; Deborah Brown, Senior Researcher and Advocate on Technology and Rights, Human Rights Watch, brownd@hrw.org; Catalina Balla, catalina.balla@derechosdigitales.org
Karen Gullo

EFF Tells Minnesota Supreme Court to Strike Down Geofence Warrant As Fourth Circuit Court of Appeals Takes the Wrong Turn

1 week ago

We haven’t seen the end of invasive geofence warrants just yet, despite Google’s big announcement late last year that it was fundamentally changing how it collects location data. Today, EFF is filing an amicus brief in the Minnesota Supreme Court in State v. Contreras-Sanchez, a case involving a geofence warrant that directed Google to turn over an entire month of location data. Our brief argues that the warrant violates the Fourth Amendment and Minnesota’s state constitution.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. This creates a high risk of turning suspicion on innocent people for crimes they didn’t commit and can reveal sensitive and private information about where individuals have traveled in the past. We’ve seen a recent flurry of court cases involving geofence warrants, and these courts’ rulings will set important Fourth Amendment precedent not just in geofence cases, but other investigations involving similar “reverse warrants” such as users’ keyword searches on search engines.
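As a purely hypothetical illustration of why these warrants sweep so broadly, the sketch below mimics the kind of filter a geofence demand describes: every stored location point is checked against the warrant’s bounding box and time window, so any device that merely passed nearby is returned. The record layout and function names are invented for this example and do not describe Google’s actual systems.

```typescript
// Hypothetical illustration only: the data layout and names are invented.
// The point is that a geofence demand filters *every* user's stored location
// history by place and time, so anyone who happened to be nearby is swept in.

interface LocationPoint {
  deviceId: string;
  lat: number;
  lon: number;
  timestamp: number; // Unix epoch, seconds
}

interface Geofence {
  minLat: number; maxLat: number;
  minLon: number; maxLon: number;
  start: number; end: number; // the warrant's time window
}

function devicesInGeofence(allUsersHistory: LocationPoint[], fence: Geofence): Set<string> {
  const hits = new Set<string>();
  for (const p of allUsersHistory) {
    const inArea = p.lat >= fence.minLat && p.lat <= fence.maxLat &&
                   p.lon >= fence.minLon && p.lon <= fence.maxLon;
    const inWindow = p.timestamp >= fence.start && p.timestamp <= fence.end;
    if (inArea && inWindow) {
      hits.add(p.deviceId); // no individualized suspicion: proximity alone suffices
    }
  }
  return hits;
}
```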

In Contreras-Sanchez, police discovered a dead body on the side of a rural roadway. They did not know when the body was disposed of and had few leads, so they sought a warrant directing Google to turn over location data for the area around the site for the previous month. Notably, Google responded that turning over the entire month-long dataset would be too “cumbersome,” even though it covered only a relatively sparsely populated area. Instead, following the now-familiar “three-step” process for geofence warrants, Google provided police with location data corresponding to twelve devices that had entered the area during a single one-week period. Police focused on one device, then sought identifying information on that device, leading them to the defendant.

EFF’s brief, filed along with the National Association of Criminal Defense Lawyers and the Minnesota Association of Criminal Defense Lawyers, argues that the geofence warrant acted as a “general warrant” akin to the practices of the British agents in Colonial America who were authorized to go house by house, searching for smuggled goods and evidence of seditious publications. As we write in the brief:

This general warrant allowed law enforcement to go Google account by Google account, searching each user’s private location data for evidence of an alleged crime. The same concerns that animated staunch objection to general warrants in the past are equally relevant to geofence warrants today; these warrants lack individualized suspicion, allow for unbridled officer discretion, and impact the privacy rights of countless innocent individuals. And, like the eighteenth-century writs of assistance that inspired the Fourth Amendment’s drafters, geofence warrants are especially pernicious because they also have the potential to affect fundamental rights including freedom of speech, association, and bodily autonomy. Neither the Fourth Amendment, nor Article 1, Section 10 of the Minnesota Constitution tolerate a warrant of this breadth.

Federal appeals court makes a serious misstep on geofence warrants

Meanwhile, in the leading federal geofence case, United States v. Chatrie, the federal Court of Appeals for the Fourth Circuit issued a seriously misguided opinion earlier this month, holding that a geofence warrant covering a busy area around a bank robbery for two hours wasn’t even a Fourth Amendment search at all—meaning that the police wouldn’t necessarily need a warrant to get access to all of this sensitive location data. The two-judge majority opinion effectively ignores the impact of the U.S. Supreme Court’s landmark Fourth Amendment location data case, Carpenter v. United States, and similarly tries to distinguish the Fourth Circuit’s own important precedent in Leaders of a Beautiful Struggle v. Baltimore Police Department. In the majority’s view, in order to be a search protected by the Fourth Amendment, the government must collect a significant amount of location data over a long period of time, and the two-hour period at issue in Chatrie simply wasn’t long enough to interfere with individuals’ reasonable expectation of privacy in the “whole of their physical movements” the way longer surveillance was in Carpenter and Leaders.

But in a scathing, 70-plus page dissenting opinion, Judge Wynn dismantled these arguments, showing that Carpenter requires courts to look beyond formulaic applications of precedent and examine the actual character of the surveillance at issue. On nearly every metric, geofence warrants have the capacity to reveal just as, if not more, private and intimate associations than the tracking at issue in Carpenter. What’s more, Judge Wynn’s dissent demonstrated what we’ve argued in geofence cases across the country: These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital communications. The only “evidence” supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation—and could turn each of those people into a suspect.

Chatrie’s lawyers are petitioning the entire Fourth Circuit to review the case, and we’re hopeful that the Chatrie panel opinion will be overturned by the full court en banc. We’ll be filing another amicus brief supporting Chatrie’s petition. Stay tuned for that, and for the ruling from the Minnesota Supreme Court in Contreras-Sanchez.

Related Cases: Carpenter v. United States
Andrew Crocker

EFF, International Partners Appeal to EU Delegates to Help Fix Flaws in Draft UN Cybercrime Treaty That Can Undermine EU's Data Protection Framework

1 week 2 days ago

With the final negotiating session to approve the UN Cybercrime Treaty just days away, EFF and 21 international civil society organizations today urgently called on delegates from EU states and the European Commission to push back on the draft convention's many flaws, which include an excessively broad scope that will grant intrusive surveillance powers without robust human rights and data protection safeguards.

The time is now to demand changes in the text to narrow the treaty's scope, limit surveillance powers, and spell out data protection principles. Without these fixes, the draft treaty stands to give governments' abusive practices the veneer of international legitimacy and should be rejected.

Letter below:

Urgent Appeal to Address Critical Flaws in the Latest Draft of the UN Cybercrime Convention


Ahead of the reconvened concluding session of the United Nations (UN) Ad Hoc Committee on Cybercrime (AHC) in New York later this month, we, the undersigned organizations, wish to urgently draw your attention to the persistent critical flaws in the latest draft of the UN cybercrime convention (hereinafter Cybercrime Convention or the Convention).

Despite the recent modifications, we continue to share profound concerns regarding the persistent shortcomings of the present draft and we urge member states to not sign the Convention in its current form.

Key concerns and proposals for remedy:
  1. Overly Broad Scope and Legal Uncertainty:
  • The draft Convention’s scope remains excessively broad, including cyber-enabled offenses and other content-related crimes. The proposed title of the Convention and the introduction of the new Article 4 – with its open-ended reference to “offenses established in accordance with other United Nations conventions and protocols” – create significant legal uncertainty and expand the scope to an indefinite list of possible crimes to be determined only in the future. This ambiguity risks criminalizing legitimate online expression, having a chilling effect detrimental to the rule of law. We continue to recommend narrowing the Convention’s scope to clearly defined, already existing cyber-dependent crimes only, to facilitate its coherent application, ensure legal certainty and foreseeability, and minimize potential abuse.
  • The draft Convention in Article 18 lacks clarity concerning the liability of online platforms for offenses committed by their users. The current draft of the Article lacks the requirement of intentional participation in offenses established in accordance with the Convention, thereby also contradicting Article 19 which does require intent. This poses the risk that online intermediaries could be held liable for information disseminated by their users, even without actual knowledge or awareness of the illegal nature of the content (as set out in the EU Digital Services Act), which will incentivise overly broad content moderation efforts by platforms to the detriment of freedom of expression. Furthermore, the wording is much broader (“for participation”) than the Budapest Convention (“committed for the corporation’s benefit”) and would merit clarification along the lines of paragraph 125 of the Council of Europe Explanatory Report to the Budapest Convention.
  • The proposal in the revised draft resolution to elaborate a draft protocol supplementary to the Convention represents a further push to expand the scope of offenses, risking the creation of a limitlessly expanding, increasingly punitive framework.
  2. Insufficient Protection for Good-Faith Actors:
  • The draft Convention fails to incorporate language sufficient to protect good-faith actors, such as security researchers (irrespective of whether it concerns the authorized testing or protection of an information and communications technology system), whistleblowers, activists, and journalists, from excessive criminalization. It is crucial that the mens rea element in the provisions relating to cyber-dependent crimes includes references to criminal intent and harm caused.
  3. Lack of Specific Human Rights Safeguards:
  • Article 6 fails to include specific human rights safeguards – as proposed by civil society organizations and the UN High Commissioner for Human Rights – to ensure a common understanding among Member States and to facilitate the application of the treaty without unlawful limitation of human rights or fundamental freedoms. These safeguards should: 
    • be applicable to the entire treaty to ensure that cybercrime efforts provide adequate protection for human rights;
    • be in accordance with the principles of legality, necessity, and proportionality, non-discrimination, and legitimate purpose;
    • incorporate the right to privacy among the human rights specified;
    • address the lack of effective gender mainstreaming to ensure the Convention does not undermine human rights on the basis of gender.
  4. Procedural Measures and Law Enforcement:
  • The Convention should limit the scope of procedural measures to the investigation of the criminal offenses set out in the Convention, in line with point 1 above.
  • In order to facilitate their application and – in light of their intrusiveness – to minimize the potential for abuse, this chapter of the Convention should incorporate the following minimal conditions and safeguards as established under international human rights law. Specifically, the following should be included in Article 24:
    • the principles of legality, necessity, proportionality, non-discrimination and legitimate purpose;
    • prior independent (judicial) authorization of surveillance measures and monitoring throughout their application;
    • adequate notification of the individuals concerned once it no longer jeopardizes investigations;
    • and regular reports, including statistical data on the use of such measures.
  • Articles 28/4, 29, and 30 should be deleted, as they include excessive surveillance measures that open the door for interference with privacy without sufficient safeguards as well as potentially undermining cybersecurity and encryption.
  5. International Cooperation:
  • The Convention should limit the scope of international cooperation solely to the crimes set out in the Convention itself to avoid misuse (as per point 1 above.) Information sharing for law enforcement cooperation should be limited to specific criminal investigations with explicit data protection and human rights safeguards.
  • Article 40 requires “the widest measure of mutual legal assistance” for offenses established in accordance with the Convention as well as any serious offense under the domestic law of the requesting State. Specifically, where no treaty on mutual legal assistance applies between State Parties, paragraphs 8 to 31 establish extensive rules on obligations for mutual legal assistance with any State Party with generally insufficient human rights safeguards and grounds for refusal. For example, paragraph 22 sets a high bar of “substantial grounds for believing” for the requested State to refuse assistance.
  • When State Parties cannot transfer personal data in compliance with their applicable laws, such as the EU data protection framework, the conflicting obligation in Article 40 to afford the requesting State “the widest measure of mutual legal assistance” may unduly incentivize the transfer of the personal data subject to appropriate conditions under Article 36(1)(b), e.g. through derogations for specific situations in Article 38 of the EU Law Enforcement Directive. Article 36(1)(c) of the Convention also encourages State Parties to establish bilateral and multilateral agreements to facilitate the transfer of personal data, which creates a further risk of undermining the level of data protection guaranteed by EU law.
  • When personal data is transferred in full compliance with the data protection framework of the requested State, Article 36(2) should be strengthened to include clear, precise, unambiguous and effective standards to protect personal data in the requesting State, and to avoid personal data being further processed and transferred to other States in ways that may violate the fundamental right to privacy and data protection.
Conclusion and Call to Action:

Throughout the negotiation process, we have repeatedly pointed out the risks the treaty in its current form poses to human rights and to global cybersecurity. Despite the latest modifications, the revised draft fails to address our concerns and continues to risk making individuals and institutions less safe and more vulnerable to cybercrime, thereby undermining its very purpose.

Failing to narrow the scope of the whole treaty to cyber-dependent crimes, to protect the work of security researchers, human rights defenders and other legitimate actors, to strengthen the human rights safeguards, to limit surveillance powers, and to spell out the data protection principles will give governments’ abusive practices a veneer of international legitimacy. It will also make digital communications more vulnerable to those cybercrimes that the Convention is meant to address. Ultimately, if the draft Convention cannot be fixed, it should be rejected. 

With the UN AHC’s concluding session about to resume, we call on the delegations of the Member States of the European Union and the European Commission’s delegation to redouble their efforts to address the highlighted gaps and ensure that the proposed Cybercrime Convention is narrowly focused in its material scope and not used to undermine human rights nor cybersecurity. Absent meaningful changes to address the existing shortcomings, we urge the delegations of EU Member States and the EU Commission to reject the draft Convention and not advance it to the UN General Assembly for adoption.

This statement is supported by the following organizations:

Access Now
Alternatif Bilisim
ARTICLE 19: Global Campaign for Free Expression
Centre for Democracy & Technology Europe
Committee to Protect Journalists
Digitalcourage
Digital Rights Ireland
Digitale Gesellschaft
Electronic Frontier Foundation (EFF)
epicenter.works
European Center for Not-for-Profit Law (ECNL) 
European Digital Rights (EDRi)
Global Partners Digital
International Freedom of Expression Exchange (IFEX)
International Press Institute 
IT-Pol Denmark
KICTANet
Media Policy Institute (Kyrgyzstan)
Privacy International
SHARE Foundation
Vrijschrift.org
World Association of News Publishers (WAN-IFRA)
Zavod Državljan D (Citizen D)





Katitza Rodriguez

Beyond Pride Month: Protecting Digital Identities For LGBTQ+ People

1 week 2 days ago

The internet provides people space to build communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And for LGBTQ+ individuals, digital spaces enable people who are not yet out to engage with their gender and sexual orientation.

In the age of so much passive surveillance, it can feel daunting, if not impossible, to maintain any kind of privacy online. We can’t blame you for feeling this way, but there’s plenty you can do to keep your information private and secure online. What’s most important is that you think through the specific risks you face and take the right steps to protect against them. 

The first step is to create a security plan. Following that, consider some of the recommended advice below and see which steps fit best for your specific needs:  

  • Use multiple browsers for different use cases. Compartmentalization of sensitive data is key. Since many websites are finicky about the type of browser you’re using, it’s normal to have multiple browsers installed on one device. Designate one for more sensitive activities and configure the settings to have higher privacy.
  • Use a VPN to bypass local censorship, defeat local surveillance, and connect your devices securely to the network of an organization on the other side of the internet. This is extra helpful for accessing pro-LGBTQ+ content from locations that ban access to this material.
  • If your cell phone allows it, hide sensitive apps away from the home screen. Although these apps will still be available on your phone, this moves them into a special folder so that prying eyes are less likely to find them.
  • Separate your digital identities to mitigate the risk of doxxing, as the personal information exposed about you is often found in public places like “people search” sites and social media.
  • Create a security plan for incidents of harassment and threats of violence. Especially if you are a community organizer, activist, or prominent online advocate, you face an increased risk of targeted harassment. Developing a plan of action in these cases is best done well before the threats become credible. It doesn’t have to be perfect; the point is to refer to something you were able to think up clear-headed when not facing a crisis. 
  • Create a plan for backing up images and videos to avoid losing this content in places where governments slow down, disrupt, or shut down the internet, especially during LGBTQ+ events when network disruptions inhibit quick information sharing.
  • Use two-factor authentication where available to make your online accounts more secure by adding a requirement for additional proof (“factors”) alongside a strong password (see the sketch after this list for how time-based codes work).
  • Obscure people’s faces when posting pictures of protests online (like using tools such as Signal’s in-app camera blur feature) to protect their right to privacy and anonymity, particularly during LGBTQ+ events where this might mean staying alive.
  • Harden security settings in Zoom for large video calls and events, and create a process for removing opportunistic or homophobic people who disrupt the call. 
  • Explore protections on your social media accounts, such as switching to private mode, limiting comments, or using tools like blocking users and reporting posts. 

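To make the two-factor authentication tip above more concrete, here is a minimal sketch of how time-based one-time codes (TOTP) work behind the scenes. It uses the open-source pyotp Python library purely as an illustration; the library choice, secret handling, and values shown are our assumptions, not part of any particular service’s implementation.

```python
# Minimal TOTP sketch (illustrative only): a service and an authenticator
# app share a secret; the app derives short-lived codes from that secret
# and the current time, and the service checks them at login.
import pyotp

# The service generates a shared secret when you enable 2FA, usually
# displayed once as a QR code for your authenticator app to scan.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Your authenticator app computes the current six-digit code...
code = totp.now()

# ...and the service verifies it (alongside your password) at login.
print("Code accepted:", totp.verify(code))
```

The point is that the code changes every 30 seconds and never travels over SMS, which is why authenticator apps or hardware keys are generally a stronger second factor than text messages.
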
For more information on these topics, visit the following:

Paige Collings

UN Cybercrime Draft Convention Dangerously Expands State Surveillance Powers Without Robust Privacy, Data Protection Safeguards

1 week 2 days ago

This is the third post in a series highlighting flaws in the proposed UN Cybercrime Convention. Check out Part I, our detailed analysis on the criminalization of security research activities, and Part II, an analysis of the human rights safeguards.

As we near the final negotiating session for the proposed UN Cybercrime Treaty, countries are running out of time to make much-needed improvements to the text. From July 29 to August 9, delegates in New York aim to finalize a convention that could drastically reshape global surveillance laws. The current draft favors extensive surveillance, establishes weak privacy safeguards, and defers most protections against surveillance to national laws—creating a dangerous avenue that could be exploited by countries with varying levels of human rights protections.

The risk is clear: without robust privacy and human rights safeguards in the actual treaty text, we will see increased government overreach, unchecked surveillance, and unauthorized access to sensitive data—leaving individuals vulnerable to violations, abuses, and transnational repression. And not just in one country.  Weaker safeguards in some nations can lead to widespread abuses and privacy erosion because countries are obligated to share the “fruits” of surveillance with each other. This will worsen disparities in human rights protections and create a race to the bottom, turning global cooperation into a tool for authoritarian regimes to investigate crimes that aren’t even crimes in the first place.

Countries that believe in the rule of law must stand up and either defeat the convention or dramatically limit its scope, adhering to non-negotiable red lines as outlined by over 100 NGOs. In an uncommon alliance, civil society and industry came together earlier this year in a joint letter urging governments to withhold support for the treaty in its current form due to its critical flaws.

Background and Current Status of the UN Cybercrime Convention Negotiations

The UN Ad Hoc Committee overseeing the talks and preparation of a final text is expected to consider a revised but still-flawed text in its entirety, along with the interpretative notes, during the first week of the session, with a focus on all provisions not yet agreed ad referendum.[1] However, in keeping with the principle in multilateral negotiations that “nothing is agreed until everything is agreed,” any provisions of the draft that have already been agreed could potentially be reopened. 

The current text reveals significant disagreements among countries on crucial issues like the convention's scope and human rights protection. Of course the text could also get worse. Just when we thought Member States had removed many concerning crimes, they could reappear. The Ad-Hoc Committee Chair’s General Assembly resolution includes two additional sessions to negotiate not more protections, but the inclusion of more crimes. The resolution calls for “a draft protocol supplementary to the Convention, addressing, inter alia, additional criminal offenses.” Nevertheless, some countries still expect the latest draft to be adopted.

In this third post, we highlight the dangers of the currently proposed UN Cybercrime Convention's broad definition of "electronic data" and its inadequate privacy and data protection safeguards. Together, these create the conditions for severe human rights abuses, transnational repression, and inconsistencies across countries in human rights protections.

A Closer Look at the Definition of Electronic Data

The proposed UN Cybercrime Convention significantly expands state surveillance powers under the guise of combating cybercrime. Chapter IV grants extensive government authority to monitor and access digital systems and data, categorizing communications data into subscriber data, traffic data, and content data. But it also makes use of a catch-all category called "electronic data." Article 2(b) defines electronic data as "any representation of facts, information, or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function."

"Electronic data" is eligible for three surveillance powers: preservation orders (Article 25), production orders (Article 27), and search and seizure (Article 28). Unlike the traditional categories of traffic data, subscriber data, and content data, "electronic data" refers to any data stored, processed, or transmitted electronically, regardless of whether it has been communicated to anyone. This includes documents saved on personal computers or notes stored on digital devices. In essence, this means that private, unshared thoughts and information are no longer safe. Authorities can compel the preservation, production, or seizure of any electronic data, potentially turning personal devices into spy vectors regardless of whether the information has been communicated.

This is delicate territory, and it deserves careful thought and real protection—many of us now use our devices to keep our most intimate thoughts and ideas, and many of us also use health and fitness tools in ways that we do not intend to share. This includes data stored on devices, such as face scans and smart home device data, if they remain within the device and are not transmitted. Another example could be photos that someone takes on a device but doesn't share with anyone. This category threatens to turn our most private thoughts and actions over to spying governments, both our own and others. 

And the problem is worse when we consider emerging technologies. The sensors in smart devices, AI, and augmented reality glasses can collect a wide array of highly sensitive data. These sensors can record involuntary physiological reactions to stimuli, including eye movements, facial expressions, and heart rate variations. For example, eye-tracking technology can reveal what captures a user's attention and for how long, which can be used to infer interests, intentions, and even emotional states. Similarly, voice analysis can provide insights into a person's mood based on tone and pitch, while body-worn sensors might detect subtle physical responses that users themselves are unaware of, such as changes in heart rate or perspiration levels.

These types of data are not typically communicated through traditional communication channels like emails or phone calls (which would be categorized as content or traffic data). Instead, they are collected, stored, and processed locally on the device or within the system, fitting the broad definition of "electronic data" as outlined in the draft convention.

Such data has likely been harder to obtain because it may not have been communicated to or possessed by any communications intermediary or system. So it’s an example of how the broad term "electronic data" increases the kinds (and sensitivity) of information about us that can be targeted by law enforcement through production orders or search-and-seizure powers. These emerging technology uses are their own category, but they are most like the "content" of communications, which usually receives strong protection. "Electronic data" must receive the same protection as the "content" of communications and be subject to ironclad data protection safeguards, which the proposed treaty fails to provide, as we explain below.

The Specific Safeguard Problems

Like other powers in the draft convention, the broad powers related to "electronic data" don't come with specific limits to protect fair trial rights. 

Missing Safeguards

For example, many countries have various kinds of information that are protected by a legal “privilege” against surveillance: attorney-client privilege, spousal privilege, priest-penitent privilege, doctor-patient privilege, and many kinds of protections for confidential business information and trade secrets. Many countries also give additional protections to journalists and their sources. These categories, and more, provide varying degrees of extra requirements before law enforcement may access them using production orders or search-and-seizure powers, as well as various protections after the fact, such as preventing their use in prosecutions or civil actions. 

Similarly, the convention lacks clear safeguards to prevent authorities from compelling individuals to provide evidence against themselves. These omissions raise significant red flags about the potential for abuse and the erosion of fundamental rights when a treaty text involves so many countries with a high disparity of human rights protections.

The lack of specific protections for criminal defense is especially troubling. In many legal systems, defense teams have certain protections to ensure they can effectively represent their clients, including access to exculpatory evidence and the protection of defense strategies from surveillance. However, the draft convention does not explicitly protect these rights. It thereby misses the chance to require all countries to provide these minimal protections, and it risks further undermining the fairness of criminal proceedings and the ability of suspects to mount an effective defense in countries where those protections are absent, weak, or unclear.

Even the State “Safeguards” in Article 24 are Grossly Insufficient

Even where the convention’s text discusses “safeguards,” the convention doesn’t actually protect people. The “safeguard” section, Article 24, fails in several obvious ways: 

Dependence on Domestic Law: Article 24(1) makes safeguards contingent on domestic law, which can vary significantly between countries. This can result in inadequate protections in states where domestic laws do not meet high human rights standards. By deferring safeguards to national law, Article 24 weakens these protections, as national laws may not always provide the necessary safeguards. It also means that the treaty doesn’t raise the bar against invasive surveillance, but rather confirms even the lowest protections.

A safeguard that bends to domestic law isn't a safeguard at all if it leaves the door open for abuses and inconsistencies, undermining the protection it's supposed to offer.

Discretionary Safeguards: Article 24(2) uses vague terms like “as appropriate,” allowing states to interpret and apply safeguards selectively. This means that while the surveillance powers in the convention are mandatory, the safeguards are left to each state’s discretion. Countries decide what is “appropriate” for each surveillance power, leading to inconsistent protections and potential weakening of overall safeguards.

Lack of Mandatory Requirements: Essential protections such as prior judicial authorization, transparency, user notification, and the principles of legality, necessity, and non-discrimination are not explicitly mandated. Without these mandatory requirements, there is a higher risk of misuse and abuse of surveillance powers.

No Specific Data Protection Principles: As we noted above, the proposed treaty does not include specific safeguards for highly sensitive data, such as biometric or privileged data. This oversight leaves such information vulnerable to misuse.

Inconsistent Application: The discretionary nature of the safeguards can lead to their inconsistent application, exposing vulnerable populations to potential rights violations. Countries might decide that certain safeguards are unnecessary for specific surveillance methods, which the treaty allows, increasing the risk of abuse.

Finally, Article 23(4) of Chapter IV authorizes the application of Article 24 safeguards to specific powers within the international cooperation chapter (Chapter V). However, significant powers in Chapter V, such as those related to law enforcement cooperation (Article 47) and the 24/7 network (Article 41), do not specifically cite the corresponding Chapter IV powers and so may not be covered by Article 24 safeguards.

Search and Seizure of Stored Electronic Data

The proposed UN Cybercrime Convention significantly expands government surveillance powers, particularly through Article 28, which deals with the search and seizure of electronic data. This provision grants authorities sweeping abilities to search and seize data stored on any computer system, including personal devices, without clear, mandatory privacy and data protection safeguards. This poses a serious threat to privacy and data protection.

Article 28(1) allows authorities to search and seize any “electronic data” in an information and communications technology (ICT) system or data storage medium. It lacks specific restrictions, leaving much to the discretion of national laws. This could lead to significant privacy violations as authorities might access all files and data on a suspect’s personal computer, mobile device, or cloud storage account—all without clear limits on what may be targeted or under what conditions.

Article 28(2) permits authorities to search additional systems if they believe the sought data is accessible from the initially searched system. While judicial authorization should be a requirement to assess the necessity and proportionality of such searches, Article 24 only mandates “appropriate conditions and safeguards” without explicit judicial authorization. In contrast, U.S. law under the Fourth Amendment requires search warrants to specify the place to be searched and the items to be seized—preventing unreasonable searches and seizures.

Article 28(3) empowers authorities to seize or secure electronic data, including making and retaining copies, maintaining its integrity, and rendering it inaccessible or removing it from the system. For publicly accessible data, this takedown process could infringe on free expression rights and should be explicitly subject to free expression standards to prevent abuse.

Article 28(4) requires countries to have laws that allow authorities to compel anyone who knows how a particular computer or device works to provide necessary information to access it. This could include asking a tech expert or an engineer to help unlock a device or explain its security features. This is concerning because it might force people to help law enforcement in ways that could compromise security or reveal confidential information. For example, an engineer could be required to disclose a security flaw that hasn't been fixed, or to provide encryption keys that protect data, which could then be misused. The way it is written, it could be interpreted to include disproportionate orders that can lead to forcing persons to disclose a vulnerability to the government that hasn’t been fixed. It could also imply forcing people to disclose encryption keys such as signing keys on the basis that these are “the necessary information to enable” some form of surveillance.

Privacy International and EFF strongly recommend that Article 28(4) be removed in its entirety; instead, it has been agreed ad referendum. At a minimum, the drafters must include material in the explanatory memorandum that accompanies the draft Convention to clarify limits that avoid forcing technologists to reveal confidential information or do work on behalf of law enforcement against their will. Once again, it would also be appropriate to have clear legal standards about how law enforcement can be authorized to seize and look through people’s private devices.

In general, production and search and seizure orders might be used to target tech companies' secrets, and require uncompensated labor by technologists and tech companies, not because they are evidence of crime but because they can be used to enhance law enforcement's technical capabilities.

Domestic Expedited Preservation Orders of Electronic Data

Article 25 on preservation orders, already agreed ad referendum, is especially problematic. It’s very broad, and will result in individuals’ data being preserved and available for use in prosecutions far more than needed. It also fails to include necessary safeguards to avoid abuse of power. By allowing law enforcement to demand preservation with no factual justification, it risks spreading familiar deficiencies in U.S. law worldwide.

Article 25 requires each country to create laws or other measures that let authorities quickly preserve specific electronic data, particularly when there are grounds to believe that such data is at risk of being lost or altered.

Article 25(2) ensures that when preservation orders are issued, the person or entity in possession of the data must keep it for up to 90 days, giving authorities enough time to obtain the data through legal channels, while allowing this period to be renewed. There is no specified limit on the number of times the order can be renewed, so it can potentially be reimposed indefinitely.

Preservation orders should be issued only when they’re absolutely necessary, but Article 24 does not mention the principle of necessity, and the text lacks individual notice requirements, explicit grounds requirements, and statistical transparency obligations.

The article must limit the number of times preservation orders may be renewed to prevent indefinite data preservation requirements. Each preservation order renewal must require a demonstration of continued necessity and factual grounds justifying continued preservation.

Article 25(3) also compels states to adopt laws that enable gag orders to accompany preservation orders, prohibiting service providers or individuals from informing users that their data was subject to such an order. The duration of such a gag order is left up to domestic legislation.

As with all other gag orders, the confidentiality obligation should be subject to time limits and only be available to the extent that disclosure would demonstrably threaten an investigation or other vital interest. Further, individuals whose data was preserved should be notified when it is safe to do so without jeopardizing an investigation. Independent oversight bodies must oversee the application of preservation orders.

Indeed, academics such as prominent law professor and former U.S. Department of Justice lawyer Orin S. Kerr have criticized similar U.S. data preservation practices under 18 U.S.C. § 2703(f) for allowing law enforcement agencies to compel internet service providers to retain all contents of an individual's online account without their knowledge, any preliminary suspicion, or judicial oversight. This approach, intended as a temporary measure to secure data until further legal authorization is obtained, lacks the foundational legal scrutiny typically required for searches and seizures under the Fourth Amendment, such as probable cause or reasonable suspicion.

The lack of explicit mandatory safeguards raises similar concerns about Article 25 of the proposed UN convention. Kerr argues that these U.S. practices constitute a "seizure" under the Fourth Amendment, indicating that such actions should be justified by probable cause or, at the very least, reasonable suspicion—criteria conspicuously absent in the current draft of the UN convention.

By drawing on Kerr's analysis, we see a clear warning: without robust safeguards—including an explicit grounds requirement, prior judicial authorization, explicit notification to users, and transparency—preservation orders of electronic data proposed under the draft UN Cybercrime Convention risk replicating the problematic practices of the U.S. on a global scale.

Production Orders of Electronic Data

Article 27(a)’s treatment of “electronic data” in production orders, in light of the draft convention’s broad definition of the term, is especially problematic.

This article, which has already been agreed ad referendum, allows production orders to be issued to custodians of electronic data, requiring them to turn over copies of that data. While demanding customer records from a company is a traditional governmental power, this power is dramatically increased in the draft convention.

As we explain above, the extremely broad definition of electronic data, which is often sensitive in nature, raises new and significant privacy and data protection concerns, as it permits authorities to access potentially sensitive information without immediate oversight and prior judicial authorization. The convention needs instead to require prior judicial authorization before such information can be demanded from the companies that hold it. 

This ensures that an impartial authority assesses the necessity and proportionality of the data request before it is executed. Without mandatory data protection safeguards for the processing of personal data, law enforcement agencies might collect and use personal data without adequate restrictions, thereby risking the exposure and misuse of personal information.

The text of the convention fails to include these essential data protection safeguards. To protect human rights, data should be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Data should be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. 

Data collected should be adequate, relevant, and limited to what is necessary for the purposes for which it is processed. Authorities should request only the data that is essential for the investigation. Production orders should clearly state the purpose for which the data is being requested. Data should be kept in a format that permits identification of data subjects for no longer than is necessary for the purposes for which the data is processed. None of these principles are present in Article 27(a), and they must be. 

International Cooperation and Electronic Data

The draft UN Cybercrime Convention includes significant provisions for international cooperation, extending the reach of domestic surveillance powers across borders, by one state on behalf of another state. Such powers, if not properly safeguarded, pose substantial risks to privacy and data protection. 

  • Article 42 (1) (“International cooperation for the purpose of expedited preservation of stored electronic data”) allows one state to ask another to obtain preservation of “electronic data” under the domestic power outlined in Article 25. 
  • Article 44 (1) (“Mutual legal assistance in accessing stored electronic data”) allows one state to ask another “to search or similarly access, seize or similarly secure, and disclose electronic data,” presumably using powers similar to those under Article 28, although that article is not referenced in Article 44. This specific provision, which has not yet been agreed ad referendum, enables comprehensive international cooperation in accessing stored electronic data. For instance, if Country A needs to access emails stored in Country B for an ongoing investigation, it can request Country B to search and provide the necessary data.

Countries Must Protect Human Rights or Reject the Draft Treaty

The current draft of the UN Cybercrime Convention is fundamentally flawed. It dangerously expands surveillance powers without robust checks and balances, undermines human rights, and poses significant risks to marginalized communities. The broad and vague definitions of "electronic data," coupled with weak privacy and data protection safeguards, exacerbate these concerns.

Traditional domestic surveillance powers are particularly concerning because they underpin international surveillance cooperation. This means that one country can easily comply with the requests of another, which, if not adequately safeguarded, can lead to widespread government overreach and human rights abuses. 

Without stringent data protection principles and robust privacy safeguards, these powers can be misused, threatening human rights defenders, immigrants, refugees, and journalists. We urgently call on all countries committed to the rule of law, social justice, and human rights to unite against this dangerous draft. Whether large or small, developed or developing, every nation has a stake in ensuring that privacy and data protection are not sacrificed. 

Significant amendments must be made to ensure these surveillance powers are exercised responsibly and protect privacy and data protection rights. If these essential changes are not made, countries must reject the proposed convention to prevent it from becoming a tool for human rights violations or transnational repression.

[1] In the context of treaty negotiations, "ad referendum" means that an agreement has been reached by the negotiators, but it is subject to the final approval or ratification by their respective authorities or governments. It signifies that the negotiators have agreed on the text, but the agreement is not yet legally binding until it has been formally accepted by all parties involved.

Katitza Rodriguez

Courts Should Have Jurisdiction over Foreign Companies Collecting Data on Local Residents, EFF Tells Appeals Court

1 week 3 days ago

This post was written by EFF legal intern Danya Hajjaji. 

Corporations should not be able to collect data from a state’s residents while evading the jurisdiction of that state’s courts, EFF and the UC Berkeley Center for Consumer Law and Economic Justice explained in a friend-of-the-court brief to the Ninth Circuit Court of Appeals. 

The case, Briskin v. Shopify, stems from a California resident’s privacy claims against Shopify, Inc. and its subsidiaries, out-of-state companies that process payments for third-party ecommerce companies (collectively “Shopify”). The plaintiff alleged that Shopify secretly collected data on the plaintiff and other California consumers as they purchased apparel from an online California-based retailer. Shopify also allegedly tracked the users’ browsing activities across all ecommerce sites that used Shopify’s services. Shopify allegedly compiled that information into comprehensive user profiles, complete with financial “risk scores” that companies could use to block users’ future purchases.  

The Ninth Circuit initially dismissed the lawsuit for lack of personal jurisdiction and ruled that Shopify, an out-of-state defendant, did not have enough contacts with California to be fairly sued in California. 

Personal jurisdiction is designed to protect defendants' due process rights by ensuring that they cannot be haled into court in jurisdictions they have little connection to. In the internet context, the Ninth Circuit has previously held that operating a website, plus evidence that the defendant did “something more” to target a jurisdiction, is sufficient for personal jurisdiction.  

The Ninth Circuit originally dismissed Briskin on the grounds that the plaintiff failed to show the defendant did “something more.” It held that violating all users’ privacy was not enough; Shopify would have needed to do something to target Californians in particular.  

The Ninth Circuit granted rehearing en banc, and requested additional briefing on the personal jurisdiction rule that should govern online conduct. 

EFF and the Center for Consumer Law and Economic Justice argued that courts in California can fairly hold out-of-state corporations accountable for privacy violations that involve collecting vast amounts of personal data directly from consumers inside California and using that data to build profiles based in part on their location. To obtain personal data from California consumers, corporations must usually form additional contacts with California as well—including signing contracts within the state and creating California-specific data policies. In our view, Shopify is subject to personal jurisdiction in California because Shopify’s allegedly extensive data collection operations targeted Californians. That it also allegedly collected information from users in other states should not prevent California plaintiffs from having their day in court in their home state.   

In helping the Ninth Circuit develop a sensible test for personal jurisdiction in data privacy cases, EFF hopes to empower plaintiffs to preserve their online privacy rights in their forum of choice without sacrificing existing jurisdictional protections for internet publishers.  

EFF has long worked to ensure that consumer data privacy laws balance rights to privacy and free expression. We hope the Ninth Circuit will adopt our guidelines in structuring a privacy-specific personal jurisdiction rule that is commonsense and constitutionally sound. 

Tori Noble

Victory! EFF Supporters Beat USPTO Proposal To Wreck Patent Reviews

1 week 3 days ago

The U.S. patent system is broken, particularly when it comes to software patents. At EFF, we’ve been fighting hard for changes that make the system more sensible. Last month, we got a big victory when we defeated a set of rules that would have mangled one of the U.S. Patent and Trademark Office (USPTO)’s most effective systems for kicking out bad patents. 

In 2012, recognizing the entrenched problem of a patent office that spewed out tens of thousands of ridiculous patents every year, Congress created a new system to review patents called “inter partes reviews,” or IPRs. While far from perfect, IPRs have resulted in cancellation of thousands of patent claims that never should have been issued in the first place. 

At EFF, we used the IPR process to crowd-fund a challenge to the Personal Audio “podcasting patent” that tried to extract patent royalty payments from U.S. podcasters. We won that proceeding and our victory was confirmed on appeal.

It’s no surprise that big patent owners and patent trolls have been trying to wreck the IPR system for years. They’ve tried, and failed, to get federal courts to dismantle IPRs. They’ve tried, and failed, to push legislation that would break the IPR system. And last year, they found a new way to attack IPRs—by convincing the USPTO to propose a set of rules that would have sharply limited the public’s right to challenge bad patents. 

That’s when EFF and our supporters knew we had to fight back. Nearly one thousand EFF supporters filed comments with the USPTO using our suggested language, and hundreds more of you wrote your own comments. 

Today, we say thank you to everyone who took the time to speak out. Your voice does matter. In fact, the USPTO withdrew all three of the terrible proposals that we focused on. 

Our Victory to Keep Public Access To Patent Challenges 

The original rules would have greatly expanded what are called “discretionary denials,” enabling judges at the USPTO to throw out an IPR petition without adequately considering the merits of the petition. While we would like to see even fewer discretionary denials, defeating the proposed limitations on patent challenges is a significant win.

First, the original rules would have stopped “certain for-profit entities” from using the IPR system altogether. While EFF is a non-profit, for-profit companies can and should be allowed to play a role in getting wrongly granted patents out of the system. Membership-based patent defense organizations like RPX or Unified Patents can allow small companies to band together and limit their costs while defending themselves against invalid patents. And non-profits like the Linux Foundation, who joined us in fighting against these wrongheaded proposed rules, can work together with professional patent defense groups to file more IPRs. 

EFF and our supporters wrote in opposition to this rule change—and it’s out. 

Second, the original rules would have exempted “micro and small entities” from patent reviews altogether. This exemption would have applied to many of the types of companies we call “patent trolls”—that is, companies whose business is simply demanding license fees for patents, rather than offering actual products or services. Those companies, specially designed to threaten litigation, would have easily qualified as “small entities” and avoided having their patents challenged. Patent trolls, which bully real small companies and software developers into paying unwarranted settlement fees, aren’t the kind of “small business” that should be getting special exemptions from patent review. 

EFF and our supporters opposed this exemption, and it’s out of the final rulemaking. 

Third, last year’s proposal would have allowed IPR petitions to be kicked out if they had a “parallel proceeding”—in other words, a similar patent dispute—in district court. This was a wholly improper reason not to consider IPRs, especially since district court evidence rules differ from those in place for an IPR. 

EFF and our supporters opposed these new limitations, and they’re out. 

While the new rules aren’t perfect, they’re greatly improved. We would still prefer more IPRs rather than fewer, and don’t want to see IPRs that otherwise meet the rules get kicked out of the review process. But even there, the new revised rules have big improvements. For instance, they allow for separate briefing of discretionary denials, so that people and companies seeking IPR review can keep their focus on the merits of their petition. 

Additional reading: 

Joe Mullin

Modern Cars Can Be Tracking Nightmares. Abuse Survivors Need Real Solutions.

1 week 3 days ago

The amount of data modern cars collect is a serious privacy concern for all of us. But in an abusive situation, tracking can be a nightmare.

As a New York Times article outlined, modern cars are often connected to apps that show a user a wide range of information about a vehicle, including real-time location data, footage from cameras showing the inside and outside of the car, and sometimes the ability to control the vehicle remotely from their mobile device. These features can be useful, but abusers often turn these conveniences into tools to harass and control their victims—or even to locate or spy on them once they've fled their abusers.

California is currently considering three bills intended to help domestic abuse survivors endangered by vehicle tracking. Unfortunately, despite the concerns of advocates who work directly on tech-enabled abuse, these proposals are moving in the wrong direction. These bills intended to protect survivors are instead being amended in ways that open them to additional risks. We call on the legislature to return to previous language that truly helps people disable location-tracking in their vehicles without giving abusers new tools.

Each of the bills seeks to address tech-enabled abuse in different ways. The first, S.B. 1394 by CA State Sen. David Min (Irvine), earned EFF's support when it was introduced. This bill was drafted with considerable input from experts in tech-enabled abuse at The University of California, Irvine. We feel its language best serves the needs of survivors in a wide range of scenarios without creating new avenues of stalking and harassment for the abuser to exploit. As introduced, it would require car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected services within two business days. To make a request, a survivor must prove the vehicle is theirs to use, even if their name is not necessarily on the loan or title. They could do this through documentation such as a court order, police report, or marriage separation agreement. S.B. 1000 by CA State Sen. Angelique Ashby (Sacramento) would have applied a similar framework to allow survivors to make requests to cut remote access to vehicles and other smart devices.

In contrast, A.B. 3139, introduced by Asm. Dr. Akilah Weber (La Mesa), takes a different approach. Rather than have people submit requests first and cut access later, this bill would require car manufacturers to terminate access immediately, requiring follow-up documentation only up to seven days after the request. Unfortunately, both S.B. 1394 and S.B. 1000 have now been amended to adopt this "act first, ask questions later" framework.

The changes to these bills are intended to make it easier for people in desperate situations to get away quickly. Yet, for most people, we believe the risks of A.B. 3139's approach outweigh the benefits. EFF's experience working with victims of tech-enabled abuse instead suggests that these changes are bad for survivors—something we've already said in official comments to the Federal Communications Commission.

Why This Doesn't Work for Survivors

EFF has two main concerns with the approach from A.B. 3139. First, the bill sets a low bar for verifying an abusive situation, including simply allowing a statement from the person filing the request. Second, the bill requires a way to turn tracking off immediately without any verification. Why are these problems?

Imagine you have recently left an abusive relationship. You own your car, but your former partner decides to seek revenge for your leaving and calls the car manufacturer to file a false report that removes your access to your car. In cases where both the survivor and abuser have access to the car's account—a common scenario—the abuser could even kick the survivor off a car app account, and then use the app to harass and stalk the survivor remotely. Under A.B. 3139's language, it would be easy for an abuser to make a false statement under penalty of perjury to "verify" that the survivor is the perpetrator of abuse. Depending on a car app’s capabilities, that false claim could mean that, for up to a week, a survivor may be unable to start or access their own vehicle. We know abusers are happy to lie and exploit whatever they can to further their abuse, including laws and services meant to help survivors. It will be trivial for an abuser—who is already committing a crime and unlikely to fear a perjury charge—to file a false request to cut someone off from their car.

It's true that other domestic abuse laws EFF has worked on allow for this kind of self-attestation. This includes the Safe Connections Act, which allows survivors to peel their phone more easily off of a family plan. However, this is the wrong approach for vehicles. Access to a phone plan is significantly different from access to a car, particularly when remote services allow you to control a vehicle. While inconvenient and expensive, it is much easier to replace a phone or a phone plan than a car if your abuser locks you out. The same solution doesn't fit both problems. You need proof to make the decision to cut access to something as crucial to someone's life as their vehicle.

Second, the language added to these bills requires it be possible for anyone in a car to immediately disconnect it from connected services. Specifically, A.B. 3139 says that the method to disable tracking must be "prominently located and easy to use and shall not require access to a remote, online application." That means it must essentially be at the push of a button. That raises serious potential for misuse. Any person in the car may intentionally or accidentally disable tracking, whether they're a kid pushing buttons for fun, a rideshare passenger, or a car thief. Even more troubling, an abuser could cut access to the app’s ability to track a car and kidnap a survivor or their children. If past is prologue, in many cases, abusers will twist this "protection" to their own ends.

The combination of immediate action and self-attestation is helpful for survivors in one particular scenario—a survivor who has no documentation of their abuse, who needs to get away immediately in a car owned by their abuser. But it opens up many new avenues of stalking, harassment, and other forms of abuse for survivors. EFF has loudly called for bills that empower abuse survivors to take control away from their abusers, particularly by being able to disable tracking—but this is not the right way to do it. We urge the legislature to pass bills with the processes originally outlined in S.B. 1394 and S.B. 1000 and provide survivors with real solutions to address unwanted tracking.

Hayley Tsukayama

Detroit Takes Important Step in Curbing the Harms of Face Recognition Technology

1 week 4 days ago

In a first-of-its-kind agreement, the Detroit Police Department recently agreed to adopt strict limits on its officers’ use of face recognition technology as part of a settlement in a lawsuit brought by a victim of this faulty technology.  

Robert Williams, a Black resident of a Detroit suburb, filed suit against the Detroit Police Department after officers arrested him at his home in front of his wife, daughters, and neighbors for a crime he did not commit. After a shoplifting incident at a watch store, police used a blurry still taken from surveillance footage and ran it through face recognition technology—which incorrectly identified Williams as the perpetrator. 

Under the terms of the agreement, the Detroit Police can no longer substitute face recognition technology (FRT) for reliable policework. Simply put: Face recognition matches can no longer be the only evidence police use to justify an arrest. 

FRT creates an “imprint” from an image of a face, then compares that imprint to other images—often a law enforcement database made up of mugshots, driver’s license images, or even images scraped from the internet. The technology itself is fraught with issues, including that it is highly inaccurate for certain demographics, particularly Black men and women. The Detroit Police Department makes face recognition queries, using DataWorks Plus software, to the Statewide Network of Agency Photos (SNAP), a database operated by the Michigan State Police. According to data obtained by EFF through a public records request, roughly 580 local, state, and federal agencies and their sub-divisions have desktop access to SNAP.  

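To illustrate that comparison step, here is a rough sketch of the general technique using the open-source face_recognition Python library. This is not the DataWorks Plus software Detroit uses; the filenames and the 0.6 distance threshold are placeholder assumptions, chosen only to show why a blurry probe image can “match” the wrong person.

```python
# Conceptual sketch of face recognition matching: each face is reduced to a
# numerical "imprint" (an embedding), and imprints are compared by distance.
# Illustrative only; not the system used by any police department.
import face_recognition

# A probe image (e.g., a still from surveillance footage) -- hypothetical file.
probe_image = face_recognition.load_image_file("surveillance_still.jpg")
probe_encodings = face_recognition.face_encodings(probe_image)

# One entry from a database of known photos -- hypothetical file.
known_image = face_recognition.load_image_file("database_photo.jpg")
known_encodings = face_recognition.face_encodings(known_image)

if probe_encodings and known_encodings:
    # Smaller distance means "more similar." A low-quality probe image can
    # produce a small distance to someone who merely resembles the suspect.
    distance = face_recognition.face_distance(known_encodings, probe_encodings[0])[0]
    print("possible match" if distance < 0.6 else "no match", round(float(distance), 3))
```

The takeaway from the sketch is that a “match” is just a similarity score crossing some threshold—which is exactly why such results should never, on their own, justify an arrest.
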
Among other achievements, the settlement agreement’s new rules bar arrests based solely on face recognition results, or on the results of the ensuing photo lineup—a common police procedure in which a witness is asked to identify the perpetrator from a “lineup” of images—conducted immediately after FRT identifies a suspect. This dangerous simplification has meant that, based on partial matches combined with other unreliable evidence, such as eyewitness identifications, police have ended up arresting people who clearly could not have committed the crime. Such was the case with Robert Williams, who had been out of the state on the day the crime occurred. Because face recognition finds people who look similar to the suspect, putting that person directly into a police lineup will likely result in the witness picking the person who looks most like the suspect they saw—all but ensuring that the person falsely accused by technology will receive the bulk of the suspicion.  

Under Detroit’s new rules, if police use face recognition technology at all during any investigation, they must record detailed information about their use of the technology, such as photo quality and the number of photos of the same suspect not identified by FRT. If charges are ever filed as a result of the investigation, prosecutors and defense attorneys will have access to the information about any uses of FRT in the case.  

The Detroit Police Department’s new face recognition rules are among the strictest restrictions adopted anywhere in the country—short of the full bans on the technology passed by San Francisco, Boston, and at least 15 other municipalities. Detroit’s new regulations are an important step in the right direction, but only a full ban on government use of face recognition can fully protect against this technology’s many dangers. FRT jeopardizes every person’s right to protest government misconduct free from retribution and reprisals for exercising their right to free speech. Giving police the ability to fly a drone over a protest and identify every protester undermines every person’s right to freely associate with dissenting groups or criticize government officials without fear of retaliation from those in power. 

Moreover, FRT undermines racial justice and threatens civil rights. Study after study after study has found that these tools cannot reliably identify people of color.  According to Detroit’s own data, roughly 97 percent of queries in 2023 involved Black suspects; when asked during a public meeting in 2020, then-police Chief James Craig estimated the technology would misidentify people 96 percent of the time. 

Williams was one of the first victims of this technology—but he was by no means the last. In Detroit alone, police wrongfully arrested at least two other people based on erroneous face recognition matches: Porcha Woodruff, a pregnant Black woman, and Michael Oliver, a Black man who lost his job due to his arrest.  

Many other innocent people have been arrested elsewhere, and in some cases, have served jail time as a result. The consequences can be life-altering; one man was sexually assaulted while incarcerated due to an FRT misidentification. Police and the government have proven time and time again they cannot be trusted to use this technology responsibly. Although many departments already acknowledge that FRT results alone cannot justify an arrest, that is cold comfort to people like Williams, who are still being harmed despite the reassurances police give the public.  

It is time to take FRT out of law enforcement’s hands altogether. 

Tori Noble

EFF to FCC: SS7 is Vulnerable, and Telecoms Must Acknowledge That

1 week 4 days ago

It’s unlikely you’ve heard of Signaling System 7 (SS7), but every phone network in the world is connected to it, and if you have ever roamed networks internationally or sent an SMS message overseas, you have used it. SS7 is a set of telecommunication protocols that cellular network operators use to exchange information and route phone calls, text messages, and other communications between each other on 2G and 3G networks (4G and 5G networks instead use the Diameter signaling system). When a person travels outside their home network's coverage area (roaming) and uses their phone on a 2G or 3G network, SS7 plays a crucial role in registering the phone to the network and routing their communications to the right destination. On May 28, 2024, EFF submitted comments to the Federal Communications Commission demanding an investigation of SS7 and Diameter security and transparency into how the telecoms handle the security of these networks.

What Is SS7, and Why Does It Matter?

When you roam onto different 2G or 3G networks, or send an SMS message internationally, the SS7 system works behind the scenes to seamlessly route your calls and SMS messages. SS7 identifies the country code, locates the specific cell tower that your phone is using, and facilitates the connection. This intricate process involves multiple networks and enables you to communicate across borders, making international roaming and text messages possible. But even if you don’t roam internationally, send SMS messages, or use legacy 2G/3G networks, you may still be vulnerable to SS7 attacks because most telecommunications providers are still connected to it to support international roaming, even if they have turned off their own 2G and 3G networks. SS7 was not built with any security protocols, such as authentication or encryption, and has been exploited by governments, cyber mercenaries, and criminals to intercept and read SMS messages. As a result, many network operators have placed firewalls in order to protect users. However, there are no mandates or security requirements placed on the operators, so there is no mechanism to ensure that the public is safe.

Many companies treat ownership of your phone number as a primary security authentication mechanism, or as a secondary one through SMS two-factor authentication. An attacker could use SS7 attacks to intercept text messages and then gain access to your bank account, medical records, and other important accounts. Nefarious actors can also use SS7 attacks to track a target’s precise location anywhere in the world.

These vulnerabilities make SS7 a public safety issue. EFF strongly believes that it is in the best interest of the public for telecommunications companies to secure their SS7 networks and publicly audit them, while also moving to more secure technologies as soon as possible.

Why SS7 Isn’t Secure

SS7 was standardized in the late 1970s and early 1980s, at a time when communication relied primarily on landline phones. During that era, the telecommunications industry was predominantly controlled by corporate monopolies. Because the large telecoms all trusted each other there was no incentive to focus on the security of the network. SS7 was developed when modern encryption and authentication methods were not in widespread use. 

In the 1990s and 2000s new protocols were introduced by the European Telecommunication Standards Institute (ETSI) and the telecom standards bodies to support mobile phones with services they need, such as roaming, SMS, and data. However, security was still not a concern at the time. As a result, SS7 presents significant cybersecurity vulnerabilities that demand our attention. 

SS7 can be accessed through telecommunications companies and roaming hubs. To access SS7, companies (or nefarious actors) must have a “Global Title,” which is a phone number that uniquely identifies a piece of equipment on the SS7 network. Each phone company that runs its own network has multiple global titles. Some telecommunications companies lease their global titles, which is how malicious actors gain access to the SS7 network. 

Concerns about potential SS7 exploits are primarily discussed within the mobile security industry and are not given much attention in broader discussions about communication security. Currently, there is no way for end users to detect SS7 exploitation. The best way to safeguard against SS7 exploitation is for telecoms to use firewalls and other security measures. 

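To give a sense of what that screening can look like, here is a highly simplified, hypothetical sketch of an SS7 firewall rule that drops sensitive location queries (like the PSI and SRI messages discussed later in this post) arriving from global titles the operator has no relationship with. Every name, data structure, and value here is an assumption for illustration; real deployments are far more elaborate.

```python
# Hypothetical sketch of SS7 firewall screening (illustrative only).
# Real SS7 stacks and firewalls are far more complex than this.
from dataclasses import dataclass

# MAP operations commonly abused for location tracking.
SENSITIVE_OPERATIONS = {
    "provideSubscriberInfo",   # PSI
    "sendRoutingInfo",         # SRI
    "anyTimeInterrogation",    # ATI
}

# Global titles of networks we actually have a roaming relationship with
# (hypothetical values).
TRUSTED_GLOBAL_TITLES = {"441234567890", "491234567890"}

@dataclass
class SS7Message:
    calling_global_title: str  # the global title that sent the request
    operation: str             # the MAP operation being requested

def allow(message: SS7Message) -> bool:
    """Return True if the message should be passed on to the core network."""
    if (message.operation in SENSITIVE_OPERATIONS
            and message.calling_global_title not in TRUSTED_GLOBAL_TITLES):
        return False  # drop location queries from untrusted senders
    return True

# A location query from an unknown global title is dropped:
print(allow(SS7Message("999000111222", "provideSubscriberInfo")))  # False
```

Whether, and how consistently, rules like these are applied is entirely up to each operator, since no security requirements are mandated.
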
With the rapid expansion of the mobile industry, there is no transparency around any efforts to secure our communications. The fact that any government can potentially access data through SS7 without encountering significant security obstacles poses a significant risk to dissenting voices, particularly under authoritarian regimes.

Some people in the telecommunications industry argue that SS7 exploits are mainly a concern for 2G and 3G networks. It’s true that 4G and 5G don’t use SS7—they use the Diameter protocol—but Diameter has many of the same security concerns as SS7, such as location tracking. What’s more, as soon as you roam onto a 3G or 2G network, or if you are communicating with someone on an older network, your communications once again go over SS7. 

FCC Requests Comments on SS7 Security 

Recently, the FCC issued a request for comments on the security of SS7 and Diameter networks within the U.S. The FCC asked whether the security efforts of telecoms were working, and whether auditing or intervention was needed. The three large US telecoms (Verizon, T-Mobile, and AT&T) and their industry lobbying group (CTIA) all responded with comments stating that their SS7 and Diameter firewalls were working perfectly, and that there was no need to audit the phone companies’ security measures or force them to report specific success rates to the government. However, one dissenting comment came from Cybersecurity and Infrastructure Security Agency (CISA) employee Kevin Briggs. 

We found the comments by Briggs, CISA’s top expert on telecom network vulnerabilities, to be concerning and compelling. Briggs believes that there have been successful, unauthorized attempts to access network user location data from U.S. providers using SS7 and Diameter exploits. He provides two examples of reports involving specific persons that he had seen: the tracking of a person in the United States using Provide Subscriber Information (PSI) exploitation (March 2022); and the tracking of three subscribers in the United States using Send Routing Information (SRI) packets (April 2022).  

This is consistent with reporting by Gary Miller and Citizen Lab in 2023, where they state: “we also observed numerous requests sent from networks in Saudi Arabia to geolocate the phones of Saudi users as they were traveling in the United States. Millions of these requests targeting the international mobile subscriber identity (IMSI), a number that identifies a unique user on a mobile network, were sent over several months, and several times per hour on a daily basis to each individual user.”

Briggs added that he had seen information describing how in May 2022, several thousand suspicious SS7 messages were detected, which could have masked a range of attacks—and that he had additional information on the above exploits as well as others that go beyond location tracking, such as the monitoring of message content, the delivery of spyware to targeted devices, and text-message-based election interference.

As a senior CISA official focused on telecom cybersecurity, Briggs has access to information that the general public is not aware of. Therefore his comments should be taken seriously, particularly in light of the concerns expressed by Senator Wyden, who, in his letter to the President, referenced a non-public, independent expert report commissioned by CISA and alleged that CISA was “actively hiding information about [SS7 threats] from the American people.” The FCC should investigate these claims, and keep Congress and the public informed about exploitable weaknesses in the telecommunication networks we all use.

These warnings should be taken seriously and their claims should be investigated. The telecoms should submit the results of their audits to the FCC and CISA so that the public can have some reassurance that their security measures are working as they say they are. If the telecoms’ security measures aren’t enough, as Briggs and Miller suggest, then the FCC must step in and secure our national telecommunications network. 

Cooper Quintin

Platforms Have First Amendment Right to Curate Speech, As We’ve Long Argued, Supreme Court Said, But Sends Laws Back to Lower Court To Decide If That Applies To Other Functions Like Messaging

1 week 6 days ago

Social media platforms, at least in their most common form, have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited, the U.S. Supreme Court stated in its landmark decision in Moody v. NetChoice and NetChoice v. Paxton, which were decided together. 

The cases dealt with Florida and Texas laws that each limited the ability of online services to block, deamplify, or otherwise negatively moderate certain user speech.  

Yet the Supreme Court did not strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. 

The Supreme Court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged.

This is an important ruling and one that EFF has been arguing for in courts since 2018. We’ve already published our high-level reaction to the decision and written about how it bears on pending social media regulations. This post is a more thorough, and much longer, analysis of the opinion and its implications for future lawsuits. 

A First Amendment Right to Moderate Social Media Content 

The most important question before the Supreme Court, and the one that will have the strongest ramifications beyond the specific laws being challenged here, is whether social media platforms have their own First Amendment rights, independent of their users’ rights, to decide what third-party content to present in their users’ feeds, recommend, amplify, deamplify, label, or block. The lower courts in the NetChoice cases reached opposite conclusions: the 11th Circuit, reviewing the Florida law, found a First Amendment right to curate, while the 5th Circuit, reviewing the Texas law, refused to do so.

The Supreme Court appropriately resolved that conflict between the two appellate courts and answered this question yes, treating social media platforms the same as other entities that compile, edit, and curate the speech of others, such as bookstores, newsstands, art galleries, parade organizers, and newspapers.  As Justice Kagan, writing for the court’s majority, wrote, “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”   

As the Supreme Court explained,  

Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product. When the government interferes with such editorial choices—say, by ordering the excluded to be included—it alters the content of the compilation. (It creates a different opinion page or parade, bearing a different message.) And in so doing—in overriding a private party’s expressive choices—the government confronts the First Amendment. 

The court thus chose to apply the line of precedent from Miami Herald Co. v. Tornillo—in which the Supreme Court in 1974 struck down a law that required newspapers that endorsed a candidate for office to provide space to that candidate’s opponents to reply—and rejected the line of precedent from PruneYard Shopping Center v. Robins—a 1980 case in which the Supreme Court ruled that a state court decision requiring a particular shopping center to let a group set up a table and collect signatures, when it allowed other groups to do so, did not violate the First Amendment.

In Moody, the Supreme Court explained that the latter rule applies only to situations in which the host itself is not engaged in an inherently expressive activity. That is, a social media platform deciding what user-generated content to select and recommend to its users is inherently expressive, but a shopping center deciding who gets to set up a table on its property is not.

So, the Supreme Court said, the 11th Circuit got it right and the 5th Circuit did not. Indeed, the 5th Circuit got it very wrong. In the Supreme Court’s words, the 5th Circuit’s opinion “rests on a serious misunderstanding of First Amendment precedent and principle.” 

This is also the position EFF has been making in courts since at least 2018. As we wrote then, “The law is clear that private entities that operate online platforms for speech and that open those platforms for others to speak enjoy a First Amendment right to edit and curate the content. The Supreme Court has long held that private publishers have a First Amendment right to control the content of their publications. Miami Herald Co. v. Tornillo, 418 U.S. 241, 254-44 (1974).” 

This is an important rule in several contexts in addition to the state must-carry laws at issue in these cases. The same rule will apply to laws that restrict the publication and recommendation of lawful speech by social media platforms, or otherwise interfere with content moderation. And it will apply to civil lawsuits brought by those whose content has been removed, demoted, or demonetized. 

Applying this rule, the Supreme Court concluded that Texas’s law could not be constitutionally applied against Facebook’s Newsfeed and YouTube’s homepage. (The Court did not specifically address Florida’s law since it was writing in the context of identifying the 5th Circuit’s errors.)

Which Services Have This First Amendment Right? 

But the Supreme Court’s ruling doesn’t make clear which other functions of which services enjoy this First Amendment right to curate. The Supreme Court specifically analyzed only Facebook’s Newsfeed and YouTube’s homepage. It did not analyze any services offered by other platforms or other functions offered through Facebook, like messaging or event management. 

The opinion does, however, identify some factors that will be helpful in assessing which online services have the right to curate. 

  • Targeting and customizing the publication of user-generated content is protected, whether by algorithm or otherwise, pursuant to the company’s own content rules, guidelines, or standards. The Supreme Court specified that it was not assessing whether the same right would apply to personalized curation decisions made algorithmically solely based on user behavior online without any reference to a site’s own standards or guidelines. 
  • Content moderation such as labeling user posts with warnings, disclaimers, or endorsements for all users, or deletion of posts, again pursuant to a site’s own rules, guidelines, or standards, is protected. 
  • The combination of multifarious voices “to create a distinctive expressive offering” or have a “particular expressive quality” based on a set of beliefs about which voices are appropriate or inappropriate, a process that is often “the product of a wealth of choices,” is protected. 
  • There is no threshold of selectivity a service must surpass to have curatorial freedom, a point we argued in our amicus brief. “That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference,” the Supreme Court said. Courts should not focus on the ratio of rejected to accepted posts in deciding whether the right to curate exists: “It is as much an editorial choice to convey all speech except in select categories as to convey only speech within them.” 
  • Curatorial freedom exists even when no one is likely to view a platform’s editorial decisions as their endorsement of the ideas in posts they choose to publish. As the Supreme Court said, “this Court has never hinged a compiler’s First Amendment protection on the risk of misattribution.” 

Considering these factors, the First Amendment right will apply to a wide range of social media services, what the Supreme Court called “Facebook Newsfeed and its ilk” or “its near equivalents.” But its application is less clear to messaging, e-commerce, event management, and infrastructure services.

The Court, Finally, Seems to Understand Content Moderation 

Also noteworthy is that in concluding that content moderation is protected First Amendment activity, the Supreme Court showed that it finally understands how content moderation works. It accurately described the process of how social media platforms decide what any user sees in their feed. For example, it wrote:

In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. 

and 

In the face of that deluge, the major platforms cull and organize uploaded posts in a variety of ways. A user does not see everything—even everything from the people she follows—in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels. Of particular relevance here, Facebook and YouTube make some of those decisions in conformity with content-moderation policies they call Community Standards and Community Guidelines. Those rules list the subjects or messages the platform prohibits or discourages—say, pornography, hate speech, or misinformation on select topics. The rules thus lead Facebook and YouTube to remove, disfavor, or label various posts based on their content. 

This comes only a year after Justice Kagan, who wrote this opinion, remarked of the Supreme Court during another oral argument that, “These are not, like, the nine greatest experts on the internet.” In hindsight, that statement seems more of a comment on her colleagues’ understanding than her own. 

Importantly, the Court has now moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Court used that language to describe the process in last term’s case, Twitter v. Taamneh. It is now clear that in the Taamneh case, the court was referring to Twitter’s passive relationship with ISIS, in that Twitter treated it like any other account holder, a relationship that did not support the terrorism aiding and abetting claims made in that case. 

Supreme Court Suggests Competition Law to Address Undue Market Influences 

Another important element of the Supreme Court’s analysis is its treatment of the posited rationale for both states’ speech restrictions: the need to improve or better balance the marketplace of ideas. Both laws were passed in response to perceived censorship of conservative voices, and the states sought to eliminate this perceived political bias from the platforms’ editorial practices. 

The Supreme Court found that this was not a sufficiently important reason to limit speech, as is required under First Amendment scrutiny: 

However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others. . . . The government may not, in supposed pursuit of better expressive balance, alter a private speaker’s own editorial choices about the mix of speech it wants to convey. 

But, as EFF has consistently urged in its amicus briefs, in these cases and others, that ruling does not leave states without any way of addressing harms caused by the market dominance of certain services.   

So, it is very heartening to see the Supreme Court point specifically to competition law as an alternative. In the Supreme Court’s words, “Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access." 

While not mentioned, we think this same reasoning supports many data privacy laws as well.  

Nevertheless, the Court Did Not Strike Down Either Law

Despite this analysis, the Supreme Court did not strike down either law. Rather, it sent the cases back to the lower courts to decide whether the lawsuits were proper facial challenges to the law.  

A facial challenge is a lawsuit that argues that a law is unconstitutional in every one of its applications. Outside of the First Amendment, facial challenges are permissible only if there is no possible constitutional application of the law or, as the courts say, the law “lacks a plainly legitimate sweep.” However, in First Amendment cases, a special rule applies: a law may be struck down as overbroad if there are a substantial number of unconstitutional applications relative to the law’s permissible scope. 

To assess whether a facial challenge is proper, a court is thus required to do a three-step analysis. First, a court must identify a law’s “sweep,” that is, to whom and what actions it applies. Second, the court must identify which of those possible applications are unconstitutional. Third, the court must compare the constitutional and unconstitutional applications both quantitatively and qualitatively; principal applications of the law, that is, the ones that seem to be the law’s primary targets, may be given greater weight in that balancing. The court will strike down the law only if the unconstitutional applications are substantially greater than the constitutional ones. 

The Supreme Court found that neither court conducted this analysis with respect to either the Florida or Texas law. So, it sent both cases back down so the lower courts could do so. Its First Amendment analysis set forth above was to guide the courts in determining which applications of the laws would be unconstitutional. The Supreme Court found that the Texas law cannot be constitutionally applied to Facebook’s Newsfeed or YouTube’s homepage—but the lower courts now need to complete the analysis. 

While these limitations on facial challenges have been well established for some time, the Supreme Court’s focus on them here was surprising because blatantly unconstitutional laws are challenged facially all the time.  

Here, however, the Supreme Court was reluctant to apply its First Amendment analysis beyond large social media platforms like Facebook’s Newsfeed and its close equivalents. The Court was also unsure whether and how either law would be applied to scores of other online services, such as email, direct messaging, e-commerce, payment apps, ride-hailing apps, and others. It wants the lower courts to look at those possible applications first. 

This decision thus creates a perverse incentive for states to pass laws that by their language broadly cover a wide range of activities, and in doing so make a facial challenge more difficult.

For example, the Florida law defines covered social media platforms as "any information service, system, Internet search engine, or access software provider that does business in this state and provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site” which has either gross annual revenues of at least $100 million or at least 100 million monthly individual platform participants globally.

Texas HB20, by contrast, defines “social media platform” as “an Internet website or application that is open to the public, allows a user to create an account, and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images,” and specifically excludes ISPs, email providers, and online services that are not primarily composed of user-generated content and for which the social aspects are incidental to the service’s primary purpose. 

Does this Make the First Amendment Analysis “Dicta”? 

Typically, language in a higher court’s opinion that is necessary to its ultimate ruling is binding on lower courts, while language that is not necessary is merely persuasive “dicta.” Here, the Supreme Court’s ruling was based on the uncertainty about the propriety of the facial challenge, and not the First Amendment issues directly. So, there is some argument that the First Amendment analysis is persuasive but not binding precedent. 

However, the Supreme Court could not responsibly remand the case back to the lower courts to consider the facial challenge question without resolving the split in the circuits, that is, the vastly different ways in which the 5th and 11th Circuits analyzed whether social media content curation is protected by the First Amendment. Without that guidance, neither court would know how to assess whether a particular potential application of the law was constitutional or not. The Supreme Court’s First Amendment analysis thus seems quite necessary and is arguably not dicta. 

And even if the analysis is merely persuasive, six of the justices found that the editorial and curatorial freedom cases like Miami Herald Co. v. Tornillo applied. At a minimum, this signals how they will rule on the issue when it reaches them again. It would be unwise for a lower court to rule otherwise, at least while those six justices remain on the Supreme Court. 

What About the Transparency Mandates? 

Each law also contains several requirements that the covered services publish information about their content moderation practices. Only one type of these provisions was at issue before the Supreme Court: a provision from each law requiring covered platforms to provide users with notice and an explanation of certain content moderation decisions.

Heading into the Supreme Court, it was unclear what legal standard applied to these speech mandates. Was it the undue burden standard, from a case called Zauderer v. Office of Disciplinary Counsel, that applies to mandated noncontroversial and factual disclosures in advertisements and other forms of commercial speech, or the strict scrutiny standard that applies to other mandated disclosures?

The Court remanded this question with the rest of the case. But it did imply, without elaboration, that the Zauderer “undue burden” standard each of the lower courts applied was the correct one.

Tidbits From the Concurring Opinions 

All nine justices on the Supreme Court questioned the propriety of the facial challenges to the laws and favored remanding the cases back to the lower courts. So, officially the case was a unanimous 9-0 decision. But there were four separate concurring opinions that revealed some differences in reasoning, with the most significant difference being that Justices Alito, Thomas, and Gorsuch disagreed with the majority’s First Amendment analysis.

Because a majority of the Supreme Court, five justices, fully supported the First Amendment analysis discussed above, the concurrences have no legal effect. There are, however, some interesting tidbits in them that give hints as to how the justices might rule in future cases.

  • Justice Barrett fully joined the majority opinion. She wrote a separate concurrence to emphasize that the First Amendment issues may play out much differently for services other than Facebook’s Newsfeed and YouTube’s homepage. She expressed a special concern for algorithmic decision-making that does not carry out the platform’s editorial policies. She also noted that a platform’s foreign ownership might affect whether the platform has First Amendment rights, a statement that pretty much everyone assumes is directed at TikTok. 
  • Justice Jackson agreed with the majority that the Miami Herald line of cases was the correct precedent and that the 11th Circuit’s interpretation of the law was correct, whereas the 5th Circuit’s was not. But she did not agree with the majority’s decision to apply that analysis to Facebook’s Newsfeed and YouTube’s homepage; rather, the lower courts should do that. She emphasized that the law might be applied differently to different functions of a single service.
  • Justice Alito, joined by Thomas and Gorsuch, emphasized his view that the majority’s First Amendment analysis is nonbinding dicta. He criticized the majority for undertaking the analysis on the record before it. But since the majority did so, he expressed his disagreement with it. He disputed that the Miami Herald line of cases was controlling and raised the possibility that the common carrier doctrine, whereby social media would be treated more like telephone companies, was the more appropriate path. He also questioned whether algorithmic moderation reflects any human’s decision-making and whether community moderation models reflect a platform’s editorial decisions or viewpoints, as opposed to the views of its users.
  • Justice Thomas fully agreed with Justice Alito but wrote separately to make two points. First, he repeated a long-standing belief that the Zauderer “undue burden” standard, and indeed the entire commercial speech doctrine, should be abandoned. Second, he endorsed the common carrier doctrine as the correct law. He also expounded on the dangers of facial challenges. Lastly, Justice Thomas seems to have moved off, at least a little, his previous position that social media platforms were largely neutral pipes that insubstantially engaged with user speech.

How the NetChoice opinion will be viewed by lower courts and what influence it will have on state legislatures and Congress, which continue to seek to interfere with content moderation processes, remains to be seen. 

But the Supreme Court has helpfully resolved a central question and provided a First Amendment framework for analyzing the legality of government efforts to dictate what content social media platforms should or should not publish. 

David Greene

Decoding the Courts’ Digital Decisions | EFFector 36.9

2 weeks 1 day ago

Instead of relaxing for the summer, EFF is in first gear defending your rights online! Catch up on what we're doing with the latest issue of our EFFector newsletter. This time we're sharing updates regarding California law enforcement illegally sharing drivers' location data out-of-state, the heavy burden Congress has to meet to justify a TikTok ban, and the latest Supreme Court ruling regarding platforms' First Amendment right to dictate what speech they host.

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.9 - Decoding The Courts' Digital Decisions

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

34 Years Supporting the Wild and Weird World Online

2 weeks 2 days ago

Oh the stories I could tell you about EFF's adventures anchoring the digital rights movement. Clandestine whistleblowers. Secret rooms. Encryption cracking. Airships over mass spying facilities. Even appearances from a badger, a purple dinosaur, and an adorable toddler dancing to Prince. EFF emerged as a proud friend to creators and users alike in this wild and weird world online—and we’re still at it.

Today the Electronic Frontier Foundation commemorates its 34th anniversary of battling for your digital freedom. It’s important to glean wisdom from where we have been, but at EFF we're also strong believers that this storied past helps us build a positive future. Central to our work is supporting the unbounded creativity on the internet and the people who are, even today, imagining what a better world looks like.

That’s why EFF’s lawyers, activists, policy analysts, and technologists have been on your side since 1990. I’ve seen magical things happen when you—not the companies or governments around you—can determine how you engage with technology. When those stars align, social movements can thrive, communities can flourish, and the internet’s creativity blossoms.

The web plays a crucial role in lifting up the causes you believe in, whatever they may be. These transformative moments are only possible when there is ample space for your privacy, your creativity, and your ability to express yourself freely. No matter where threats may arise, know that EFF is by your side armed with unparalleled expertise and the will to defend the public interest.

I am deeply thankful for people like you who support internet freedom and who value EFF’s role in the movement. It’s a team effort.

One More Day for Summer Treats

Leading up to EFF’s anniversary today, we’ve been having some fun with campfire tales from The Encryptids. We reimagined folktales about cryptids, like Bigfoot and the jackalope, from the perspective of creatures who just want what we all want: a privacy-protective, creative web that lifts users up with technology that respects critical rights and freedoms!

As EFF’s 34th birthday gift to you, I invite you to join EFF for just $20 today and you’ll get two limited-time gifts featuring The Encryptids. On top of that, Craig Newmark Philanthropies will match up to $30,000 for your first year as a monthly or annual Sustaining Donor! Many thanks to Craig—founder of Craigslist and a persistent supporter of digital freedom—for making this possible.

Join EFF

For the Future of Privacy, Security, & Free Expression

We at EFF take our anniversary as an opportunity to applaud our partners, celebrate supporters like you, and appreciate our many successes for privacy and free expression. But we never lose sight of the critical job ahead. Thank you for supporting EFF in our mission to ensure that technology supports freedom, justice, and innovation for all people of the world.

Cindy Cohn

To Sixth Circuit: Government Officials Should Not Have Free Rein to Block Critics on Their Social Media Accounts When Used For Governmental Purposes

2 weeks 3 days ago

Legal intern Danya Hajjaji was the lead author of this post.

The Sixth Circuit must carefully apply a new “state action” test from the U.S. Supreme Court to ensure that public officials who use social media to speak for the government do not have free rein to infringe critics’ First Amendment rights, EFF and the Knight First Amendment Institute at Columbia University said in an amicus brief.

The Sixth Circuit is set to re-decide Lindke v. Freed, a case that was recently remanded from the Supreme Court. The lawsuit arose after Port Huron, Michigan resident Kevin Lindke left critical comments on City Manager James Freed's Facebook page. Freed retaliated by blocking Lindke from being able to view, much less continue to leave critical comments on, Freed’s public profile. The dispute turned on the nature of Freed’s Facebook account, where updates on his government engagements were interwoven with personal posts.

Public officials who use social media as an extension of their office engage in “state action,” which refers to acting on the government’s behalf. They are bound by the First Amendment and generally cannot engage in censorship, especially viewpoint discrimination, by deleting comments or blocking citizens who criticize them. While social media platforms are private corporate entities, government officials who operate interactive online forums to engage in public discussions and share information are bound by the First Amendment.

The Sixth Circuit initially ruled in Freed’s favor, holding that no state action exists due to the prevalence of personal posts on his Facebook page and the lack of government resources, such as staff members or taxpayer dollars, used to operate it.  

The case then went to the U.S. Supreme Court, where EFF and the Knight Institute filed a brief urging the Court to establish a functional test that finds state action when a government official uses a social media account in furtherance of their public duties, even if the account is also sometimes used for personal purposes.

The U.S. Supreme Court crafted a new two-pronged state action test: a government official’s social media activity is state action if 1) the official “possessed actual authority to speak” on the government’s behalf and 2) “purported to exercise that authority” when speaking on social media. As we wrote when the decision came out, this state action test does not go far enough in protecting internet users who interact with public officials online. Nevertheless, the Court has now at least provided further guidance on this issue.

Now that the case is back in the Sixth Circuit, EFF and the Knight Institute filed a second brief endorsing a broad construction of the Supreme Court’s state action test.

The brief argues that the test’s “authority” prong requires no more than a showing, either through written law or unwritten custom, that the official had the authority to speak on behalf of the government generally, irrespective of the medium of communication—whether an in-person press conference or social media. It need not be the authority to post on social media in particular.

For high-ranking elected officials (such as presidents, governors, mayors, and legislators) courts should not have a problem finding that they have clear and broad authority to speak on government policies and activities. The same is true for heads of government agencies who are also generally empowered to speak on matters broadly relevant to those agencies. For lower-ranking officials, courts should consider the areas of their expertise and whether their social media posts in question were related to subjects within, as the Supreme Court said, their “bailiwick.”

The brief also argues that the test’s “exercise” prong requires courts to engage in, in the words of the Supreme Court, a “fact-specific undertaking” to determine whether the official was speaking on social media in furtherance of their government duties.

This element is easily met where the social media account is owned, created, or operated by the office or agency itself, rather than the official—for example, the Federal Trade Commission’s @FTC account on X (formerly Twitter).

But when an account is owned by the person and is sometimes used for non-governmental purposes, courts must look to the content of the posts. These include those posts from which the plaintiff’s comments were deleted, or any posts the plaintiff would have wished to see or comment on had the official not blocked them entirely. Former President Donald Trump is a salient example, having routinely used his legacy @realDonaldTrump X account, rather than the government-created and operated account @POTUS, to speak in furtherance of his official duties while president.

However, it is often not easy to differentiate between personal and official speech by looking solely at the posts themselves. For example, a social media post could be either private speech reflecting personal political passions, or it could be speech in furtherance of an official’s duties, or both. If this is the case, courts must consider additional factors when assessing posts made to a mixed-use account. These factors can be an account’s appearance, such as whether government logos were used; whether government resources such as staff or taxpayer funds were used to operate the social media account; and the presence of any clear disclaimers as to the purpose of the account.

EFF and the Knight Institute also encouraged the Sixth Circuit to consider the crucial role social media plays in facilitating public participation in the political process and accountability of government officials and institutions. If the Supreme Court’s test is construed too narrowly, public officials will further circumvent their constitutional obligations by blocking critics or removing any trace of disagreement from any social media accounts that are used to support and perform their official duties.

Social media has given rise to active democratic engagement, while government officials at every level have leveraged this to reach their communities, discuss policy issues, and make important government announcements. Excessively restricting any member of the public’s viewpoints threatens public discourse in spaces government officials have themselves opened as public political forums.

Sophia Cope