EFF to Federal Trial Court: Section 230’s Little-Known Third Immunity for User-Empowerment Tools Covers Unfollow Everything 2.0


EFF, along with the ACLU of Northern California and the Center for Democracy & Technology, filed an amicus brief in a federal trial court in California in support of a college professor who fears being sued by Meta for developing a tool that allows Facebook users to easily clear out their News Feed.

Ethan Zuckerman, a professor at the University of Massachusetts Amherst, is in the process of developing Unfollow Everything 2.0, a browser extension that would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.

This type of tool would greatly benefit Facebook users who want more control over their Facebook experience. The unfollowing process is tedious: you must go profile by profile, but automation makes it a breeze. Unfollowing all friends, groups, and pages leaves the News Feed blank, which then lets you curate the feed by refollowing only the people and organizations you want regular updates from. Importantly, unfollowing isn’t the same thing as unfriending: unfollowing takes your friends’ content out of your News Feed, but you’re still connected to them and can proactively navigate to their profiles.

As Louis Barclay, the developer of Unfollow Everything 1.0, explained:

I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable.

Prof. Zuckerman fears being sued by Meta, Facebook’s parent company, because the company previously sent Louis Barclay a cease-and-desist letter. Prof. Zuckerman, with the help of the Knight First Amendment Institute at Columbia University, preemptively sued Meta, asking the court to conclude that he has immunity under Section 230(c)(2)(B), Section 230’s little-known third immunity for developers of user-empowerment tools.

In our amicus brief, we explained to the court that Section 230(c)(2)(B) is unique among the immunities of Section 230, and that Section 230’s legislative history supports granting immunity in this case.

The other two immunities—Section 230(c)(1) and Section 230(c)(2)(A)—provide direct protection for internet intermediaries that host user-generated content, moderate that content, and incorporate blocking and filtering software into their systems. As we’ve argued many times before, these immunities give legal breathing room to the online platforms we use every day and ensure that those companies continue to operate, to the benefit of all internet users. 

But it’s Section 230(c)(2)(B) that empowers people to have control over their online experiences outside of corporate or government oversight, by providing immunity to the developers of blocking and filtering tools that users can deploy in conjunction with the online platforms they already use.

Our brief further explained that the legislative history of Section 230 shows that Congress clearly intended to provide immunity for user-empowerment tools like Unfollow Everything 2.0.

Section 230(b)(3) states, for example, that the statute was meant to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services,” while Section 230(b)(4) states that the statute was intended to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.” Rep. Chris Cox, a co-author of Section 230, noted prior to passage that new technology was “quickly becoming available” that would help enable people to “tailor what we see to our own tastes.”

Our brief also explained the more specific benefits of Section 230(c)(2)(B). The statute incentivizes the development of a wide variety of user-empowerment tools, from traditional content filtering to more modern social media tailoring. The law also helps people protect their privacy by incentivizing tools that block unwanted corporate tracking methods, such as advertising cookies, and that block stalkerware deployed by malicious actors.

We hope the district court will declare that Prof. Zuckerman has Section 230(c)(2)(B) immunity so that he can release Unfollow Everything 2.0 to the benefit of Facebook users who desire more control over how they experience the platform.

Sophia Cope

EFF to Supreme Court: Strike Down Texas’ Unconstitutional Age Verification Law

New Tech Doesn’t Solve Old Problems With Age-Gating the Internet

WASHINGTON, D.C.—The Electronic Frontier Foundation (EFF), the Woodhull Freedom Foundation, and TechFreedom urged the Supreme Court today to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age. 

Under HB 1181, signed into law last year, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors. When the Supreme Court reviews a case challenging the law in its next term, its ruling could have major consequences for the freedom of adults to safely and anonymously access protected speech online. 

"Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment,” said EFF Staff Attorney Lisa Femia. “Applying longstanding Supreme Court precedents, other courts have consistently held that similar age verification laws are unconstitutional. To protect freedom of speech online, the Supreme Court should clearly reaffirm those correct decisions here.”  

In a flawed ruling last year, the Fifth Circuit Court of Appeals upheld the Texas law, diverging from decades of legal precedent that correctly recognized online ID mandates as imposing greater burdens on our First Amendment rights than in-person age checks. As EFF explains in its friend-of-the-court brief, nothing about HB 1181 or advances in technology has lessened the harms the law’s age verification mandate imposes on adults wishing to exercise their constitutional rights.

First, the Texas law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. 

"Sexual freedom is a fundamental human right critical to human dignity and liberty," said Ricci Levy, CEO of the Woodhull Freedom Foundation. "By requiring invasive age verification, this law chills protected speech and violates the rights of consenting adults to access lawful sexual content online.” 

Today’s friend-of-the-court brief is only the latest entry in EFF’s long history of fighting for freedom of speech online. In 1997, EFF participated as both plaintiff and co-counsel in Reno v. ACLU, the landmark Supreme Court case that established speech on the internet as meriting the highest standard of constitutional protection. And in the last year alone, EFF has urged courts to reject state censorship, throw out a sweeping ban on free expression, and stop the government from making editorial decisions about content on social media.

For the brief: https://www.eff.org/document/fsc-v-paxton-eff-amicus-brief

For more on HB 1181: https://www.eff.org/deeplinks/2024/05/eff-urges-supreme-court-reject-texas-speech-chilling-age-verification-law

Contact: Lisa Femia, Staff Attorney, lfemia@eff.org
Hudson Hongo

Prison Banned Books Week: Being in Jail Shouldn’t Mean Having Nothing to Read


Across the United States, nearly every state prison system offers some form of tablet access to incarcerated people, and many of these programs boast sizable libraries of eBooks. Knowing this, one might assume that access to books is on the rise for incarcerated folks. Unfortunately, this is not the case. A combination of predatory pricing, woefully inadequate eBook catalogs, and bad policies restricting access to paper literature has exacerbated an already acute book censorship problem in U.S. prison systems.

New data collected by the Prison Banned Books Week campaign focuses on the widespread use of tablet devices in prison systems, as well as their pricing structures and eBook libraries. Through a combination of interviews with incarcerated people and a nationwide FOIA campaign to uncover the details of these tablet programs, the campaign has found that, despite offering access to tens of thousands of eBooks, prisons’ tablet programs actually provide little in the way of valuable reading material. The tablets themselves are heavily restricted and typically made by one of just two companies: Securus and ViaPath. The campaign also found that the material these programs do provide may not be accessible to many incarcerated individuals.

“We might as well be rummaging the dusty old leftovers in some thrift store or back alley dumpster.”

Limited, Censored Selections at Unreasonable Prices

Many companies that offer tablets to carceral facilities advertise libraries of several thousand books. But the data reveals that a huge proportion of these books are public domain texts taken directly from Project Gutenberg. While Project Gutenberg is itself laudable for collecting freely accessible eBooks, and its library contains many of the “classics” of Western literary canon, a massive number of its texts are irrelevant and outdated. As Shawn Y., an incarcerated interviewee in Pennsylvania put it, “Books are available for purchase through the Securus systems, but most of the bookworms here [...] find the selection embarrassingly thin, laughable even. [...] We might as well be rummaging the dusty old leftovers in some thrift store or back alley dumpster.”

These limitations on eBook selections exacerbate the already widespread censorship of physical reading materials, based on a variety of factors including books being deemed “harmful” content, determinations based on the book’s vendor (which, reports indicate, can operate as a ban on publishers), and whether the incarcerated person obtained advance permission from a prison administrator. Such censorial decisionmaking undermines incarcerated individuals’ right to receive information.

These costs are a barrier that deprives those in carceral facilities of the ability to develop and maintain a connection with life outside prison walls.

Some facilities charge $0.99 or more per eBook—despite their often meager, antiquated selections. While this may not seem exorbitant to many people, a recent estimate of average hourly wages for incarcerated people in the US is $0.63 per hour. And these otherwise free eBooks can often cost much more: Larry, an individual incarcerated in Pennsylvania, explains, “[s]ome of the prices for other books [are] extremely outrageous.” In Larry’s facility, “[s]ome of those tablet prices range over twenty dollars and even higher.”

Even if incarcerated people can afford to rent these eBooks, they may have to pay for the tablets required to read them. For some, these costs can be prohibitive: procurement contracts in some states appear to require incarcerated people to pay upwards of $99 to use them. These costs are a barrier that deprives those in carceral facilities of the ability to develop and maintain a connection with life outside prison walls.

Part of a Trend Toward Inadequate Digital Replacements

The elimination of physical books in favor of digital copies accessible via tablets is emblematic of a larger shift from physical to digital occurring throughout our carceral system. These digital copies are not adequate substitutes. One of the hallmarks of a tangible physical item is access: someone can open a physical book and read it when, how, and where they want. That’s not the case with the tablet systems prisons are adopting, and worryingly, this shift has also extended to personal items such as incarcerated individuals’ mail.

EFF is actively litigating to defend incarcerated individuals’ rights to access and receive tangible reading materials with our ABO Comix lawsuit. There, we—along with the Knight First Amendment Institute and Social Justice Legal Foundation—are fighting a San Mateo County (California) policy that bans those in San Mateo jails from receiving physical mail. Our complaint explains that San Mateo’s policy requires the friends and families of those jailed in its facilities to send their letters to a private company that scans them, destroys the physical copy, and retains the scan in a searchable database—for at least seven years after the intended recipient leaves the jail’s custody. Incarcerated people can only access the digital copies through a limited number of shared tablets and kiosks in common areas within the jails.

Just as incarcerated people’s reading materials are censored, so is their mail when physical letters are replaced with digital facsimiles. Our complaint details how ripping open, scanning, and retaining mail has impeded the ability of those in San Mateo’s facilities to communicate with their loved ones, as well as their ability to receive educational and religious study materials. These digital replacements are inadequate both in and of themselves and because the tablets needed to access them are in short supply and often plagued by technical issues. Along with our free expression allegations, our complaint also alleges that the seizing, searching, and sharing of data from and about these letters violates the rights of both senders and recipients against unreasonable searches and seizures.

Our ABO Comix litigation is ongoing. We are hopeful that the courts will recognize the free expression and privacy harms to incarcerated individuals and those who communicate with them that come from digitizing physical mail. We are also hopeful, on the occasion of this Prison Banned Books Week, for an end to the censorship of incarcerated individuals’ reading materials: restricting what some of us can read harms us all.

Related Cases: A.B.O. Comix, et al. v. San Mateo County
Will Greenberg

Square Peg, Meet Round Hole: Previously Classified TikTok Briefing Shows Error of Ban


A previously classified transcript reveals Congress knows full well that American TikTok users engage in First Amendment protected speech on the platform and that banning the application is an inadequate way to protect privacy—but it banned TikTok anyway.

The government submitted the partially redacted transcript as part of the ongoing litigation over the federal TikTok ban (which the D.C. Circuit just heard arguments about this week). The transcript indicates that members of Congress and law enforcement recognize that Americans are engaging in First Amendment protected speech—the same recognition a federal district court made when it blocked Montana’s TikTok ban from going into effect. They also agreed that adequately protecting Americans’ data requires comprehensive consumer privacy protections.

Yet, Congress banned TikTok anyway, undermining our rights and failing to protect our privacy.

No Indication of Actual Harm, No New Arguments

The members and officials didn’t make any particularly new points about the dangers of TikTok. Further, they repeatedly characterized their fears as hypothetical. The transcript is replete with references to the possibility of the Chinese government using TikTok to manipulate the content Americans see on the application, including to shape their views on foreign and domestic issues. For example, the official representing the DOJ expressed concern that the public and private data TikTok users generate on the platform is

potentially at risk of going to the Chinese government, [and] being used now or in the future by the Chinese government in ways that could be deeply harmful to tens of millions of young people who might want to pursue careers in government, who might want to pursue careers in the human rights field, and who one day could end up at odds with the Chinese Government’s agenda.  

There is no indication from the unredacted portions of the transcript that this is happening. This DOJ official went on to express concern “with the narratives that are being consumed on the platform,” the Chinese government’s ability to influence those narratives, and the U.S. government’s preference for “responsible ownership” of the platform through divestiture.

At one point, Representative Walberg even suggested that “certain public policy organizations” that oppose the TikTok ban should be investigated for possible ties to ByteDance (the company that owns TikTok). Of course, the right to oppose an ill-conceived ban on a popular platform goes to the very reason the U.S. has a First Amendment.

Congress banned TikTok anyway, undermining our rights and failing to protect our privacy.

Americans’ Speech and Privacy Rights Deserved More

Rather than grandstanding about investigating opponents of the TikTok ban, Congress should spend its time considering the privacy and free speech arguments of those opponents. Judging by the (redacted) transcript, the committee failed to undertake that review here.

First, the First Amendment rightly subjects bans like this one for TikTok to extraordinarily exacting judicial scrutiny. That is true even with foreign propaganda, which Americans have a well-established First Amendment right to receive. And it’s ironic for the DOJ to argue that banning an application which people use for self-expression—a human right—is necessary to protect their ability to advance human rights.

Second, if Congress wants to stop the Chinese government from potentially acquiring data about social media users, it should pass comprehensive consumer privacy legislation that regulates how all social media companies can collect, process, store, and sell Americans’ data. Otherwise, foreign governments and adversaries will still be able to acquire Americans’ data by stealing it, or by using a straw purchaser to buy it.

It’s especially jarring to read that a foreign government’s potential collection of data supposedly justifies banning an application, given Congress’s recent renewal of an authority—Section 702 of the Foreign Intelligence Surveillance Act—under which the U.S. government actually collects massive amounts of Americans’ communications, and which the FBI immediately directed its agents to abuse (yet again).

EFF will continue fighting for TikTok users’ First Amendment rights to express themselves and to receive information on the platform. We will also continue urging Congress to drop these square peg, round hole approaches to Americans’ privacy and online expression and pass comprehensive privacy legislation that offers Americans genuine protection from the invasive ways any company uses data. While Congress did not fully consider the First Amendment and privacy interests of TikTok users, we hope the federal courts will.

Brendan Gilligan

Strong End-to-End Encryption Comes to Discord Calls


We’re happy to see that Discord will soon start offering a form of end-to-end encryption dubbed “DAVE” for its voice and video chats. This puts some of Discord’s audio and video offerings in line with Zoom, and separates it from tools like Slack and Microsoft Teams, which do not offer end-to-end encryption for video, voice, or any other communications. This is a strong step forward, and Discord can do even more to protect its users’ communications.

End-to-end encryption is used by many chat apps for both text and video offerings, including WhatsApp, iMessage, Signal, and Facebook Messenger. But Discord operates differently than most of those: alongside private and group text, video, and audio chats, it also encompasses large-scale public channels on individual Discord servers. Going forward, audio and video will be end-to-end encrypted, but text, including both group channels and private messages, will not.

When a call is end-to-end encrypted, you’ll see a green lock icon. While it’s not required to use the service, Discord also offers an optional way to verify that the strong encryption a call is using has not been tampered with or eavesdropped on. During a call, one person can pull up the “Voice Privacy Code” and send it to everyone else on the line—preferably in a different chat app, like Signal—to confirm no one is compromising participants’ use of end-to-end encryption. This helps ensure that no one is impersonating a participant or listening in on the conversation.
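
To make the comparison step concrete, here is a minimal sketch of how a short verification code can be derived from a call’s key material and compared out of band. This illustrates the general fingerprint-comparison technique only; the function name, the 20-digit grouped format, and the key values are our own hypothetical choices, not Discord’s actual DAVE derivation.

```python
import hashlib

def voice_privacy_code(participant_keys: list[bytes]) -> str:
    """Derive a short, human-comparable code from a call's public key material.

    Sorting makes the result independent of participant order, so everyone
    on the call computes the same string. The grouped-digit format is a
    hypothetical choice, similar in spirit to safety-number schemes.
    """
    digest = hashlib.sha256(b"".join(sorted(participant_keys))).digest()
    number = int.from_bytes(digest[:8], "big") % 10**20
    return " ".join(f"{number:020d}"[i:i + 4] for i in range(0, 20, 4))

# Each participant computes the code locally; the values match only if
# everyone saw the same key material (i.e., no one swapped in their own keys).
alice_view = voice_privacy_code([b"alice-public-key", b"bob-public-key"])
bob_view = voice_privacy_code([b"bob-public-key", b"alice-public-key"])
assert alice_view == bob_view
print(alice_view)  # send this over a separate channel, e.g. Signal, to compare
```

If an attacker had intercepted the call and substituted their own keys, each side would compute the code over different key material and the comparison would fail.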

By default, you have to do this every time you initiate a call if you wish to verify the communication has strong security. There is an option to enable persistent verification keys, which means your chat partners only have to verify you on each device you own (e.g. if you sometimes call from a phone and sometimes from a computer, they’ll want to verify for each).

Key management is a hard problem in both the design and implementation of cryptographic protocols. Making sure the same encryption keys are shared across multiple devices in a secure way, and reliably discovered in a secure way by conversation partners, is no trivial task. Other apps, such as Signal, require some manual user interaction to ensure that key material is shared across multiple devices securely. Discord has chosen to avoid this process for the sake of usability, so even if you do choose to enable persistent verification keys, the keys on separate devices you own will be different.
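
The trade-off is easy to see in a toy model. In the sketch below (our illustration, using random byte strings as stand-ins for real keypairs, not Discord’s implementation), per-device keys produce a different fingerprint for each device a user owns, while a single synced key would present one stable fingerprint at the cost of moving the secret securely between devices.

```python
import hashlib
import secrets

def fingerprint(public_key: bytes) -> str:
    """Short hex fingerprint that a conversation partner would verify."""
    return hashlib.sha256(public_key).hexdigest()[:16]

# Per-device keys (the usability-first choice described above): each device
# generates its own key, so partners see a different fingerprint for each
# device and must verify each one separately.
phone_key = secrets.token_bytes(32)   # stand-in for a real keypair's public half
laptop_key = secrets.token_bytes(32)
assert fingerprint(phone_key) != fingerprint(laptop_key)

# A single key synced across devices (the extra step we hope for): one
# fingerprint covers every device, so partners verify it only once, but the
# secret must be transferred between devices over a secure channel.
synced_key = secrets.token_bytes(32)
devices = {"phone": synced_key, "laptop": synced_key}
assert len({fingerprint(k) for k in devices.values()}) == 1
```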

While this is an understandable trade-off, we hope Discord takes an extra step to allow users with heightened security concerns to share their persistent keys across devices. For the sake of usability, Discord could still generate separate keys for each device by default, while making cross-device key sharing an optional extra step. The default would also avoid the associated risk of conversation partners seeing that you’re using the same device across multiple calls. We believe making persistent keys easier to use and cross-device will make things safer for users as well: they will only have to verify the key for their conversation partners once, instead of for every call they make.

Discord has performed the protocol design and implementation of DAVE in a solidly transparent way: publishing the protocol whitepaper and the open-source library, commissioning an audit from well-regarded outside researchers, and expanding its bug-bounty program to reward any security researcher who reports a vulnerability in the DAVE protocol. This is the sort of transparency we feel is required when rolling out encryption like this, and we applaud the approach.

But we’re disappointed that, citing the need for content moderation, Discord has decided not to extend end-to-end encryption offerings to include private messages or group chats. In a statement to TechCrunch, they reiterated they have no further plans to roll out encryption in direct messages or group chats.

End-to-end encrypted video and audio chat is a good step forward—one that too many messaging apps lack. But because protection of our text conversations is important, and because partial encryption is always confusing for users, Discord should move to enable end-to-end encryption on private text chats as well. This is not an easy task, but it’s one worth doing.

Thorin Klosowski

Canada’s Leaders Must Reject Overbroad Age Verification Bill


Canadian lawmakers are considering a bill, S-210, that’s meant to benefit children, but would sacrifice the security, privacy, and free speech of all internet users.

First introduced in 2023, S-210 seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. Typically, these services will require people to show government-issued ID to get on the internet. According to bill authors, this is needed to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.”

The motivation is laudable, but requiring people of all ages to show ID to get online won’t help women or young people. If S-210 isn’t stopped before it reaches the third reading and final vote in the House of Commons, Canadians will be subjected to a repressive and unworkable age verification regime.

Flawed Definitions Would Encompass Nearly the Entire Internet 

The bill’s scope is vast. S-210 creates legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who merely transmit it, knowingly or not.

Internet infrastructure intermediaries, which often do not know the type of content they are transmitting, would also be liable, as would services of all kinds, from social media sites to search engines and messaging platforms. Each would be required to prevent access by any user whose age is not verified, unless it can claim the material serves a “legitimate purpose related to science, medicine, education or the arts.”

Basic internet infrastructure shouldn’t be regulating content at all, but S-210 doesn’t make that distinction. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. History shows that when platforms seek to ban sexual content, over-censorship is inevitable.

Rules banning sexual content usually hurt marginalized communities and groups that serve them the most. That includes organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom.

Promoting Dangerous Age Verification Methods 

S-210 notes that “online age-verification technology is increasingly sophisticated and can now effectively ascertain the age of users without breaching their privacy rights.”

This premise is simply wrong. There is currently no technology that can verify users’ ages while protecting their privacy. The bill does not specify what technology must be used, leaving that to subsequent regulation. But the age verification systems that do exist are deeply problematic. It is far too likely that any such regulation would embrace tools that retain sensitive user data, exposing it to sale and to harms like hacks, with no guardrails preventing companies from doing whatever they like with the data once collected.

We’ve said it before: age verification systems are surveillance systems. Users have no way to be certain that the data they’re handing over will not be retained and used in unexpected ways, or even shared with unknown third parties. The bill asks companies to maintain user privacy and destroy any personal data collected, but doesn’t back up that request with comprehensive penalties. That’s not good enough.

Companies responsible for storing or processing sensitive documents like drivers’ licenses can encounter data breaches, potentially exposing not only personal data about users, but also information about the sites that they visit.

Finally, age-verification systems that depend on government-issued identification exclude altogether Canadians who do not have that kind of ID.

Fundamentally, S-210 leads to the end of anonymous access to the web. Instead, Canadian internet access would become a series of checkpoints that many people simply would not pass, either by choice or because the rules are too onerous.

Dangers for Everyone, But This Can Be Stopped

Canada’s S-210 is part of a wave of proposals worldwide seeking to gate access to sexual content online. Many of these proposals share similar flaws, and S-210 is among the worst. Both Australia and France have paused the rollout of age verification systems because they found that these systems could not sufficiently protect individuals’ data or, on their own, address online harms. Canada should take note of these concerns.

It's not too late for Canadian lawmakers to drop S-210. It’s what has to be done to protect the future of a free Canadian internet. At the very least, the bill’s broad scope must be significantly narrowed to protect user rights.

Paige Collings

Human Rights Lawsuits Against Cisco Can Move Forward (Again)

Google and Amazon: You Should Take Note of Your Own Aiding and Abetting Risk

EFF has long pushed companies that provide powerful surveillance tools to governments to take affirmative steps to avoid aiding and abetting human rights abuses. We have also worked to ensure they face consequences when they do not.

Last week, the U.S. Court of Appeals for the Ninth Circuit helped this cause by affirming its powerful 2023 decision that aiding and abetting liability in U.S. courts can apply to technology companies that provide sophisticated surveillance systems used to facilitate human rights abuses.

The specific case is against Cisco and arises out of allegations that Cisco custom-built tools as part of the Great Firewall of China to help the Chinese government target members of disfavored groups, including the Falun Gong religious minority.  The case claims that those tools were used to help identify individuals who then faced horrific consequences, including wrongful arrest, detention, torture, and death.  

We did a deep dive analysis of the Ninth Circuit panel decision when it came out in 2023. Last week, the Ninth Circuit rejected an attempt to have that initial decision reconsidered by the full court, called en banc review. While the case has now survived Ninth Circuit review and should otherwise be able to move forward in the trial court, Cisco has indicated that it intends to file a petition for U.S. Supreme Court review. That puts the case on pause again. 

Still, the Ninth Circuit’s decision to uphold the 2023 panel opinion is excellent news for the critical, though slow-moving, process of building accountability for companies that aid repressive governments. The 2023 opinion unequivocally rejected many of the arguments that companies use to justify providing tools and services that are later used to abuse people. For instance, a company need only know that its assistance is helping bring about human rights abuses; it does not need to have the purpose of facilitating abuse. Similarly, the fact that a technology has legitimate law enforcement uses does not immunize the company from liability for knowingly facilitating human rights abuses.

EFF has participated in this case at every level of the courts, and we intend to continue to do so. But a better way forward for everyone would be if Cisco owned up to its actions and took steps to make amends to those injured and their families with an appropriate settlement offer, like Yahoo! did in 2007. It’s not too late to change course, Cisco.

And as EFF noted recently, Cisco isn’t the only company that should take note of this development. Recent reports have revealed the use (and misuse) of Google and Amazon services by the Israeli government to facilitate surveillance and tracking of civilians in Gaza. These reports raise serious questions about whether Google and Amazon  are following their own published statements and standards about protecting against the use of their tools for human rights abuses. Unfortunately, it’s all too common for companies to ignore their own human rights policies, as we highlighted in a recent brief about notorious spyware company NSO Group.

The reports about Gaza also raise questions about whether there is potential liability against Google and Amazon for aiding and abetting human rights abuses against Palestinians. The abuses by Israel have now been confirmed by the International Court of Justice, among others, and the longer they continue, the harder it is going to be for the companies to claim that they had no knowledge of the abuses. As the Ninth Circuit confirmed, aiding and abetting liability is possible even though these technologies are also useful for legitimate law enforcement purposes and even if the companies did not intend them to be used to facilitate human rights abuses. 

The stakes are getting higher for companies. We first call on Cisco to change course, acknowledge the victims, and accept responsibility for the human rights abuses it aided and abetted.  

Second, given the ongoing abuses in Gaza, we renew our call for Google and Amazon to come clean about their involvement in human rights abuses there and, where necessary, to make appropriate changes to avoid assisting in future abuses.

Finally, for other companies looking to sell surveillance, facial recognition, and other potentially abusive tools to repressive governments: we’ll be watching you, too.

Related Cases: Doe I v. Cisco
Cindy Cohn

Senate Vote Could Give Helping Hand To Patent Trolls


Update 9/26/24: The hearing and scheduled committee vote on PERA and PREVAIL was canceled. Supporters can continue to register their opposition via our action, as these bills may still be scheduled for a vote later in 2024. 

Update 9/20/24: The Senate vote scheduled for Thursday, Sep. 19 has been rescheduled for Thursday, Sep. 26. 

A patent on crowdfunding. A patent on tracking packages. A patent on photo contests. A patent on watching an ad online. A patent on computer bingo. A patent on upselling.

These are just a few of the patents used to harass software developers and small companies in recent years. Fortunately, U.S. courts tossed them out, thanks to the landmark 2014 Supreme Court decision in Alice v. CLS Bank. The Alice ruling has effectively ended hundreds of lawsuits in which defendants were improperly sued for basic computer use.

Take Action

Tell Congress: No New Bills For Patent Trolls

Now, patent trolls and a few huge corporate patent-holders are upset about losing their bogus patents. They are lobbying Congress to change the rules and reverse the Alice decision entirely. Shockingly, they’ve convinced the Senate Judiciary Committee to vote this Thursday on two of the most damaging patent bills we’ve ever seen.

The Patent Eligibility Restoration Act (PERA, S. 2140) would overturn Alice, enabling patent trolls to extort small business owners and even hobbyists, just for using common software systems to express themselves or run their businesses. PERA would also overturn a 2013 Supreme Court case that prevents most kinds of patenting of human genes.

Meanwhile, the PREVAIL Act (S. 2220) seeks to severely limit how the public can challenge bad patents at the patent office. Challenges like these are one of the most effective ways to throw out patents that never should have been granted in the first place. 

This week, we need to show Congress that everyday users and creators won’t stand for laws that actually expand avenues for patent abuse.

The U.S. Senate must not pass new legislation to allow the worst patent scams to expand and flourish. 

Take Action

Tell Congress: No New Bills For Patent Trolls

Joe Mullin

Unveiling Venezuela’s Repression: A Legacy of State Surveillance and Control


This post was written by Laura Vidal, PhD, an independent researcher in learning and digital rights.

This is part two of a series. Part one on surveillance and control around the July election is here.

Over the past decade, the government in Venezuela has meticulously constructed a framework of surveillance and repression, which has been repeatedly denounced by civil society and digital rights defenders in the country. This apparatus is built on a foundation of restricted access to information, censorship, harassment of journalists, and the closure of media outlets. The systematic use of surveillance technologies has created an intricate network of control.

Security forces have increasingly relied on digital tools to monitor citizens, frequently stopping people to check the content of their phones and detaining those whose devices contain anti-government material. The country’s digital identification systems, Carnet de la Patria and Sistema Patria—established in 2016 and linked to social welfare programs—have also been weaponized against the population by linking access to essential services with affiliation to the governing party. 

Censorship and internet filtering in Venezuela became omnipresent ahead of the recent election period. The government blocked access to media outlets, human rights organizations, and even VPNs—restricting access to critical information. Social media platforms like X (formerly Twitter) and WhatsApp were also targeted—and are expected to be regulated—with the government accusing these platforms of aiding opposition forces in organizing a “fascist coup d’état” and spreading “hate” while promoting a “civil war.”

The blocking of these platforms not only limits free expression but also serves to isolate Venezuelans from the global community and their networks in the diaspora, a community of around 9 million people. The government's rhetoric, which labels dissent as "cyberfascism" or "terrorism," is part of a broader narrative that seeks to justify these repressive measures while maintaining a constant threat of censorship, further stifling dissent.

Moreover, there is a growing concern that the government’s strategy could escalate to broader shutdowns of social media and communication platforms if street protests become harder to control, highlighting the lengths to which the regime is willing to go to maintain its grip on power.

Fear is another powerful tool that enhances the effectiveness of government control. Actions like mass arrests, often streamed online, and the public display of detainees create a chilling effect that silences dissent and fractures the social fabric. Economic coercion, combined with pervasive surveillance, fosters distrust and isolation—breaking down the networks of communication and trust that help Venezuelans access information and organize.

This deliberate strategy aims not just to suppress opposition but to dismantle the very connections that enable citizens to share information and mobilize for protests. The resulting fear, compounded by the difficulty in perceiving the full extent of digital repression, deepens self-censorship and isolation. This makes it harder to defend human rights and gain international support against the government's authoritarian practices.

Civil Society’s Response

Despite the repressive environment, civil society in Venezuela continues to resist. Initiatives like Noticias Sin Filtro and El Bus TV have emerged as creative ways to bypass censorship and keep the public informed. These efforts, alongside educational campaigns on digital security and the innovative use of artificial intelligence to spread verified information, demonstrate the resilience of Venezuelans in the face of authoritarianism. However, the challenges remain extensive.

The Inter-American Commission on Human Rights (IACHR) and its Special Rapporteur for Freedom of Expression (SRFOE) have condemned the institutional violence occurring in Venezuela, characterizing it as state terrorism. To comprehend the full scope of this crisis, it is paramount to understand that this repression is not just a series of isolated actions but a comprehensive and systematic effort that has been building for over 15 years. It combines degraded infrastructure (keeping essential services barely functional), the blocking of independent media, pervasive surveillance, fear-mongering, isolation, and legislative strategies designed to close civic space. With the recent approval of a law aimed at severely restricting the work of non-governmental organizations, civic space in Venezuela faces its greatest challenge yet.

The fact that this repression occurs amid widespread human rights violations suggests that the government's next steps may involve an even harsher crackdown. The digital arm of government propaganda reaches far beyond Venezuela’s borders, attempting to silence voices abroad and isolate the country from the global community. 

The situation in Venezuela is dire, and the use of technology to facilitate political violence represents a significant threat to human rights and democratic norms. As the government continues to tighten its grip, the international community must speak out against these abuses and support efforts to protect digital rights and freedoms. The Venezuelan case is not just a national issue but a global one, illustrating the dangers of unchecked state power in the digital age.

However, this case also serves as a critical learning opportunity for the global community. It highlights the risks of digital authoritarianism and the ways in which governments can influence and reinforce each other's repressive strategies. At the same time, it underscores the importance of an organized and resilient civil society—in spite of so many challenges—as well as the power of a network of engaged actors both inside and outside the country. 

These collective efforts offer opportunities to resist oppression, share knowledge, and build solidarity across borders. The lessons learned from Venezuela should inform global strategies to safeguard human rights and counter the spread of authoritarian practices in the digital era.

An open letter, organized by a group of Venezuelan digital and human rights defenders, calling for an end to technology-enabled political violence in Venezuela, has been published by Access Now and remains open for signatures.

Guest Author

The New U.S. House Version of KOSA Doesn’t Fix Its Biggest Problems


An amended version of the Kids Online Safety Act (KOSA) being considered this week in the U.S. House is still a dangerous online censorship bill that contains many of the same fundamental problems as the version the Senate passed in July. The changes to the House bill do not alter the fact that KOSA will coerce the largest social media platforms into blocking or filtering a variety of entirely legal content, and will subject a large portion of users to privacy-invasive age verification. They do bring KOSA closer to becoming law, and put us one step closer to giving government officials dangerous and unconstitutional power over what types of content can be shared and read online.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Reframing the Duty of Care Does Not Change Its Dangerous Outcomes

For years now, digital rights groups, LGBTQ+ organizations, and many others have been critical of KOSA's “duty of care.” While the language has been modified slightly, this version of KOSA still creates a duty of care and negligence standard of liability that will allow the Federal Trade Commission to sue apps and websites that don’t take measures to “prevent and mitigate” various harms to minors that are vague enough to chill a significant amount of protected speech.  

The biggest shift to the duty of care is in the description of the harms that platforms must prevent and mitigate. Among other harms, the previous version of KOSA included anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors, “consistent with evidence-informed medical information.” The new version drops this section and replaces it with the "promotion of inherently dangerous acts that are likely to cause serious bodily harm, serious emotional disturbance, or death.” The bill defines “serious emotional disturbance” as “the presence of a diagnosable mental, behavioral, or emotional disorder in the past year, which resulted in functional impairment that substantially interferes with or limits the minor’s role or functioning in family, school, or community activities.”  

Despite the new language, this provision is still broad and vague enough that no platform will have any clear indication about what they must do regarding any given piece of content. Its updated list of harms could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. It is still likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech—and important resources—about topics like addiction, eating disorders, and bullying. And it will stifle minors who are trying to find their own supportive communities online.  

Kids will, of course, still be able to find harmful content, but the largest platforms—where the most kids are—will face increased liability for letting any discussion about these topics occur. It will be harder for suicide prevention messages to reach kids experiencing acute crises, harder for young people to find sexual health information and gender identity support, and generally, harder for adults who don’t want to risk the privacy- and security-invasion of age verification technology to access that content as well.  

As in the past version, enforcement of KOSA is left up to the FTC, and, to some extent, state attorneys general around the country. Whether you agree with them or not on what encompasses a “diagnosable mental, behavioral, or emotional disorder,”  the fact remains that KOSA's flaws are as much about the threat of liability as about the actual enforcement. As long as these definitions remain vague enough that platforms have no clear guidance on what is likely to cross the line, there will be censorship—even if the officials never actually take action. 

The previous House version of the bill stated that “A high impact online company shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors.” The new version slightly modifies this to say that such a company "shall create and implement its design features to reasonably prevent and mitigate the following harms to minors.” These language changes are superficial; this section still imposes a standard that requires platforms to filter user-generated content and imposes liability if they fail to do so “reasonably.” 

House KOSA Edges Closer to Harmony with Senate Version 

Some of the latest amendments to the House version of KOSA bring it closer in line with the Senate version which passed a few months ago (not that this improves the bill).  

This version of KOSA lowers the bar, set by the previous House version, that determines which companies are subject to KOSA’s duty of care. While the Senate version of KOSA contains no such limitation (and would affect small and large companies alike), the previous House version created a series of tiers for differently sized companies. This version keeps the same set of tiers, but lowers the top threshold from companies earning $2.5 billion in annual revenue or having 150 million annual users to companies earning $1 billion in annual revenue or having 100 million annual users.

This House version also includes the “filter bubble” portion of KOSA which was added to the Senate version a year ago. This requires any “public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user-generated content” to provide users with an algorithm that uses a limited set of information, such as search terms and geolocation, but not search history (for example). This section of KOSA is meant to push users towards a chronological feed. As we’ve said before, there’s nothing wrong with online information being presented chronologically for those who want it. But just as we wouldn’t let politicians rearrange a newspaper in a particular order, we shouldn’t let them rearrange blogs or other websites. It’s a heavy-handed move to stifle the editorial independence of web publishers.   

Lastly, the House authors have added language that will have no actual effect on how platforms or courts interpret the law, but which points directly to the concerns we’ve raised. It states that “a government entity may not enforce this title or a regulation promulgated under this title based upon a specific viewpoint of any speech, expression, or information protected by the First Amendment to the Constitution that may be made available to a user as a result of the operation of a design feature.” Yet KOSA does just that: the FTC will have the power to force platforms to moderate or block certain types of content based entirely on the views described therein.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Remains an Unconstitutional Censorship Bill 

KOSA remains woefully underinclusive—for example, Google's search results will not be impacted regardless of what they show young people, but Instagram is on the hook for a broad amount of content—while making it harder for young people in distress to find emotional, mental, and sexual health support. This version does only one important thing—it moves KOSA closer to passing in both houses of Congress, and puts us one step closer to enacting an online censorship regime that will hurt free speech and privacy for everyone.

Jason Kelley

KOSA’s Online Censorship Threatens Abortion Access


For those living in one of the 22 states where abortion is banned or heavily restricted, the internet can be a lifeline. It has essential information on where and how to access care, links to abortion funds, and guidance on ways to navigate potential legal risks. Activists use the internet to organize and build community, and reproductive healthcare organizations rely on it to provide valuable information and connect with people in need.

But both Republicans and Democrats in Congress are now actively pushing for federal legislation that could cut youth off from these vital healthcare resources and stifle online abortion information for adults and kids alike.

This summer, the U.S. Senate passed the Kids Online Safety Act (KOSA), a bill that would grant the federal government and state attorneys general the power to restrict online speech they find objectionable in a misguided and ineffective attempt to protect kids online. A number of organizations have already sounded the alarm on KOSA’s danger to online LGBTQ+ content, but the hazards of the bill don’t stop there.

KOSA puts abortion seekers at risk. It could easily lead to censorship of vital and potentially life-saving information about sexual and reproductive healthcare. And by age-gating the internet, it could result in websites requiring users to submit identification, undermining the ability to remain anonymous while searching for abortion information online.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Abortion Information Censored

As EFF has repeatedly warned, KOSA will stifle online speech. It gives government officials the dangerous and unconstitutional power to decide what types of content can be shared and read online. Under one of its key censorship provisions, KOSA would create what the bill calls a “duty of care.” This provision would require websites, apps, and online platforms to comply with a vague and overbroad mandate to prevent and mitigate “harm to minors” in all their “design features.”

KOSA contains a long list of harms that websites have a duty to protect against, including emotional disturbance, acts that lead to bodily harm, and online harassment, among others. The list of harms is open for interpretation. And many of the harms are so subjective that government officials could claim any number of issues fit the bill.

This opens the door for political weaponization of KOSA—including by anti-abortion officials. KOSA is ambiguous enough to allow officials to easily argue that its mandate includes sexual and reproductive healthcare information. They could, for example, claim that abortion information causes emotional disturbance or death, or could lead to “sexual exploitation and abuse.” This is especially concerning given the anti-abortion movement’s long history of justifying abortion restrictions by claiming that abortions cause mental health issues, including depression and self-harm (despite credible research to the contrary).

As a result, websites could be forced to filter and block such content for minors, despite the fact that minors can get pregnant and are part of the demographic most likely to get their news and information from social media platforms. By blocking this information, KOSA could cut off young people’s access to potentially life-saving sexual and reproductive health resources. So much for protecting kids.

KOSA’s expansive and vague censorship requirements will also affect adults. To avoid liability and the cost and hassle of litigation, websites and platforms are likely to over-censor potentially covered content, even if that content is otherwise legal. This could lead to the removal of important reproductive health information for all internet users, adults included.

A Tool For Anti-Choice Officials

It’s important to remember that KOSA’s “duty of care” provision would be defined and enforced by the presidential administration in charge, including any future administration that is hostile to reproductive rights. The bill grants the Federal Trade Commission, majority-controlled by the President’s party, the power to develop guidelines and to investigate or sue any websites or platforms that don’t comply. It also grants the Executive Branch the power to form a Kids Online Safety Council to further identify “emerging or current risks of harms to minors associated with online platforms.”

Meanwhile, KOSA gives state attorneys general, including those in abortion-restrictive states, the power to sue under its other provisions, many of which intersect with the “duty of care.” As EFF has argued, this gives state officials a back door to target and censor content they don’t like, including abortion information.

It’s also directly foreseeable that anti-abortion officials would use KOSA in this way. One of the bill’s co-sponsors, Senator Marsha Blackburn (R-TN), has touted KOSA as a way to censor online content on social issues, claiming that children are being “indoctrinated” online. The Heritage Foundation, a politically powerful organization that espouses anti-choice views, also has its eyes on KOSA. It has been lobbying lawmakers to pass the bill and suggesting that a future administration could fill the Kids Online Safety Council with “representatives who share pro-life values.”

This all comes at a time when efforts to censor abortion information online are at a fever pitch. In abortion-restrictive states, officials have already been eagerly attempting to erase abortion from the internet. Lawmakers in both South Carolina and Texas have introduced bills to censor online abortion information, though neither effort has yet succeeded. The National Right to Life Committee has also created a model abortion law aimed at restricting abortion rights in a variety of ways, including digital access to information.

KOSA Hurts Anonymity Online

KOSA will also push large and important parts of the internet behind age gates. In order to determine which users are minors, online services will likely impose age verification systems, which require everyone—both adults and minors—to verify their age by providing identifying information, oftentimes including government-issued ID or other personal records.

This is deeply problematic for maintaining access to reproductive care. Age verification undermines our First Amendment right to remain anonymous online by requiring users to confirm their identity before accessing webpages and information. It would chill users who do not wish to share their identity from accessing or sharing online abortion resources, and put others’ identities at increased risk of exposure.

In a post-Roe United States, in which states are increasingly banning, restricting, and prosecuting abortions, the ability to anonymously seek and share abortion information online is more important than ever. For people living in abortion-restrictive states, searching and sharing abortion information online can put you at risk. There have been multiple instances of law enforcement agencies using digital evidence, including internet history, in abortion-related criminal cases. We’ve also seen an increase in online harassment and doxxing of healthcare professionals, even in more abortion-protective states.

Because of this, many organizations, including EFF, have tried to help people take steps to protect privacy and anonymity online. KOSA would undercut those efforts. While it’s true that our online ecosystem is already rich with private surveillance, age verification adds another layer of mass data collection. Online ID checks require adults to upload data-rich, government-issued identifying documents to either the website or a third-party verifier, creating a potentially lasting record of their visit to the website.

For abortion seekers taking steps to protect their anonymity and avoid this pervasive surveillance, this would make things all the more difficult. Using a public computer or creating anonymous profiles on social networks won’t keep you safe if you have to upload ID to access the information you need.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

We Can Still Stop KOSA From Passing

KOSA has not yet passed the House, so there’s still time to stop it. But the Senate vote means that the House could bring it up for a vote at any time, and the House has introduced its own similarly flawed version of KOSA. If we want to protect access to abortion information online, we must organize now to stop KOSA from passing.

Lisa Femia

Unveiling Venezuela’s Repression: Surveillance and Censorship Following July’s Presidential Election

2 months 3 weeks ago

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part one of a series. Part two on the legacy of Venezuela’s state surveillance is here.

As thousands of Venezuelans took to the streets across the country to demand transparency in July’s election results, the ensuing repression has been described as the harshest to date, with technology playing a central role in facilitating this crackdown.

The presidential elections in Venezuela marked the beginning of a new chapter in the country’s ongoing political crisis. Since July 28th, the country’s security forces have mounted a severe backlash against demonstrations, leaving 20 people dead. The results announced by the government, in which it claimed a re-election of Nicolás Maduro, have been strongly contested by political leaders within Venezuela as well as by the Organization of American States (OAS) and governments across the region.

In the days following the election, the opposition—led by candidates Edmundo González Urrutia and María Corina Machado—challenged the National Electoral Council’s (CNE) decision to award the presidency to Maduro. They called for greater transparency in the electoral process, particularly regarding the publication of the original tally sheets, which are essential for confirming or contesting the election results. At present, these original tally sheets remain unpublished.

In response to the lack of official data, the coalition supporting the opposition—known as Comando con Venezuela—presented the tally sheets obtained by opposition witnesses on the night of July 29th. These were made publicly available on an independent portal named “Presidential Results 2024,” accessible to any internet user with a Venezuelan identity card.

The government responded with numerous instances of technology-supported repression and violence. The surveillance and control apparatus saw intensified use, including increased deployment of VenApp, a surveillance application originally launched in December 2022 to report failures in public services. Promoted by President Nicolás Maduro as a means for citizens to report on their neighbors, VenApp has been integrated into the broader system of state control, encouraging citizens to report activities deemed suspicious by the state and further entrenching a culture of surveillance.

Additional reports indicated the use of drones across various regions of the country. Increased detentions and searches at airports have particularly impacted human rights defenders, journalists, and other vulnerable groups. This has been compounded by the annulment of passports and other forms of intimidation, creating an environment where many feel trapped and fearful of speaking out.

The combined effect of these tactics is the pervasive sense that it is safer not to stand out. Many NGOs have begun reducing the visibility of their members on social media, some individuals have refused interviews or published documented human rights violations under generic names, and journalists have turned to AI-generated avatars to protect their identities. People are increasingly setting their social media profiles to private and changing their profile photos to hide their faces. Additionally, many are now sending information about what is happening in the country to their networks abroad for fear of retaliation.

These actions often lead to arbitrary detentions, with security forces publicly parading those arrested as trophies, using social media materials and tips from informants to justify their actions. The clear intent behind these tactics is to intimidate, and they have been effective in silencing many. This digital repression is often accompanied by offline tactics, such as marking the residences of opposition figures, further entrenching the climate of fear.

However, this digital aspect of repression is far from a sudden development. These recent events are the culmination of years of systematic efforts to control, surveil, and isolate the Venezuelan population—a strategy that draws from both domestic decisions and the playbook of other authoritarian regimes. 

In response, civil society in Venezuela continues to resist, and in August, EFF joined more than 150 organizations and individuals in an open letter highlighting the technology-enabled political violence in Venezuela. Read more about this wider history of Venezuela’s surveillance and civil society resistance in part two of this series, available here.


Guest Author

The Climate Has a Posse – And So Does Political Satire

2 months 3 weeks ago

Greenwashing is a well-worn strategy to try to convince the public that environmentally damaging activities aren’t so damaging after all. It can be very successful precisely because most of us don’t realize it’s happening.

Enter the Yes Men, skilled activists who specialize in elaborate pranks that call attention to corporate tricks and hypocrisy. This time, they’ve created a website—wired-magazine.com—that looks remarkably like Wired.com and includes, front and center, an op-ed from writer (and EFF Special Adviser) Cory Doctorow. The op-ed, titled “Climate change has a posse,” discussed the “power and peril” of a new “greenwashing” emoji designed by renowned artist Shepard Fairey:

First, we have to ask why in hell Unicode—formerly the Switzerland of tech standards—decided to plant its flag in the greasy battlefield of eco-politics now. After rejecting three previous bids for a climate change emoji, in 2017 and 2022, this one slipped rather suspiciously through the iron gates.

Either the wildfire smoke around Unicode’s headquarters in Silicon Valley finally choked a sense of ecological urgency into them, or more likely, the corporate interests that comprise the consortium finally found a way to appease public contempt that was agreeable to their bottom line.

Notified of the spoof, Doctorow immediately tweeted his joy at being included in a Yes Men hoax.

Wired.com was less pleased. An attorney for its corporate parent, Condé Nast (CDN), demanded the Yes Men take the site down and transfer the domain name to CDN, claiming trademark infringement and misappropriation of Doctorow’s identity, with a vague reference to copyright infringement thrown in for good measure.

As we explained in our response on the Yes Men’s behalf, Wired’s heavy-handed reaction was both misguided and disappointing. Their legal claims are baseless given the satirical, noncommercial nature of the site (not to mention Doctorow’s implicit celebration of it after the fact). And frankly, a publication of Wired’s caliber should be celebrating this form of political speech, not trying to shut it down.

Hopefully Wired and CDN will recognize this is not a battle they want or need to fight. If not, EFF stands ready to defend the Yes Men and their critical work.

Corynne McSherry

NextNav’s Callous Land-Grab to Privatize 900 MHz

2 months 3 weeks ago

The 900 MHz band, a frequency range serving as a commons for all, is now at risk due to NextNav’s brazen attempt to privatize this shared resource. 

Set aside by the FCC for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment, this spectrum has become a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, and garage door openers. But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation.

EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav’s Proposed 'Band-Grab'

NextNav wants the FCC to reconfigure the 902-928 MHz band to grant them exclusive rights to the majority of the spectrum. The country's airwaves are separated into different sections for different devices to communicate, like dedicated lanes on a highway. This proposal would not only give NextNav their own lane, but also an expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else’s expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup of the Global Positioning System (GPS). This plan raises red flags right off the bat.

Dropping the “global” from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS.

NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal.

What NextNav did include, however, is its intent to resell its exclusive bandwidth access to mobile 5G networks. This isn't about national security or innovation; it's about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in its GPS backup vision, it should look to parts of the spectrum already allocated for 5G.

Stifling the Future of Open Communication

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band.

One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail.
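To give a flavor of how accessible this kind of serverless networking is for developers and hobbyists, here is a minimal sketch using the meshtastic Python library, assuming a compatible LoRa radio attached over USB (the message handler shown is illustrative, not part of any official example):

    # Minimal sketch: text messaging over a Meshtastic mesh in the
    # unlicensed 900 MHz band, with no central server involved.
    # Assumes the meshtastic Python library (pip install meshtastic)
    # and a Meshtastic-compatible LoRa radio attached over USB.
    import meshtastic.serial_interface
    from pubsub import pub  # installed as a meshtastic dependency

    def on_receive(packet, interface):
        # Called for every packet the radio hears; text lives in 'decoded'.
        text = packet.get("decoded", {}).get("text")
        if text:
            print("heard:", text)

    pub.subscribe(on_receive, "meshtastic.receive")

    iface = meshtastic.serial_interface.SerialInterface()  # auto-detects the radio
    iface.sendText("hello from the commons")  # broadcast to nearby nodes

Every node that hears the broadcast can relay it onward, which is what lets these networks keep working when centralized infrastructure fails.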

This is the type of innovation that actually addresses the crises NextNav raises, and it’s happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet this proposal threatens to crush such grassroots projects, leaving them without a commons in which they can grow and improve.

This isn’t just about a set of frequencies. We need an ecosystem which fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses.

Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

FCC Must Protect the Commons

NextNav’s proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. 

The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons. Our future communication infrastructure—and the innovation it supports—depends on it.

You can read our FCC comments here.

Rory Mir

We Called on the Oversight Board to Stop Censoring “From the River to the Sea” — And They Listened

2 months 3 weeks ago

Earlier this year, the Oversight Board announced a review of three cases involving different pieces of content on Facebook that contained the phrase “From the River to the Sea.” EFF submitted comments to the consultation urging Meta to make individualized moderation decisions on this content rather than imposing a blanket ban, as the phrase can be a historical call for Palestinian liberation rather than an incitement to hatred in violation of Meta’s community standards.

We’re happy to see that the Oversight Board agreed. In last week’s decision, the Board found that the three pieces of examined content did not break Meta’s rules on “Hate Speech, Violence and Incitement or Dangerous Organizations and Individuals.” Instead, these uses of the phrase “From the River to the Sea” were found to be an expression of solidarity with Palestinians and not an inherent call for violence, exclusion, or glorification of designated terrorist group Hamas. 

The Oversight Board decision follows Meta’s original action to keep the content online. In each of the three cases, users appealed to Meta to remove the content, but the company’s automated tools dismissed the appeals without escalating them for human review and kept the content on Facebook. Users subsequently appealed to the Board and called for the content to be removed. The material included a comment that used the hashtag #fromtherivertothesea, a video depicting floating watermelon slices forming the phrases “From the River to the Sea” and “Palestine will be free,” and a reshared post declaring support for the Palestinian people.

As we’ve said many times, content moderation at scale does not work. Nowhere is this truer than on Meta services like Facebook and Instagram, where the vast amount of material posted has incentivized the corporation to rely on flawed automated decision-making tools and inadequate human review. But this is a rare occasion where Meta’s original decision to carry the content, and the Oversight Board’s subsequent decision supporting it, uphold our fundamental right to free speech online.

The tech giant must continue examining content referring to “From the River to the Sea” on an individualized basis, and we continue to call on Meta to recognize its wider responsibilities to the global user base to ensure people are free to express themselves online without biased or undue censorship and discrimination.

Paige Collings

Stopping the Harms of Automated Decision Making | EFFector 36.12

2 months 4 weeks ago

Curious about the latest digital rights news? Well, you're in luck! In our latest newsletter we cover topics including the Department of Homeland Security's use of AI in the immigration system, the arrest of Telegram’s CEO Pavel Durov, and a victory in California where we helped kill a bill that would have imposed mandatory internet ID checks.

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.12 - Stopping The Harms Of Automated Decision Making

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Britain Must Call for Release of British-Egyptian Activist and Coder Alaa Abd El Fattah

2 months 4 weeks ago

As British-Egyptian coder, blogger, and activist Alaa Abd El Fattah enters his fifth year in a maximum security prison outside Cairo, unjustly charged for supporting online free speech and privacy for Egyptians and people across the Middle East and North Africa, we stand with his family and an ever-growing international coalition of supporters in calling for his release.

Over these five years, Alaa has endured beatings and solitary confinement. At times his family was denied visits or any contact with him. He went on a seven-month hunger strike in protest of his incarceration, and his family feared that he might not make it.

But global attention on his plight, bolstered by support from British officials in recent years, ultimately led to improved prison conditions and family visitation rights.

But let’s be clear: Egypt’s long-running retaliation against Alaa for his activism is a travesty and an arbitrary use of its draconian, anti-speech laws. He has spent the better part of the last 10 years in prison. He has been investigated and imprisoned under every Egyptian regime that has ruled in his lifetime. The time is long overdue for him to be freed.

Over 20 years ago Alaa began using his technical skills to connect coders and technologists in the Middle East to build online communities where people could share opinions and speak freely and privately. The role he played in using technology to amplify the messages of his fellow Egyptians—as well as his own participation in the uprising in Tahrir Square—made him a prominent global voice during the Arab Spring, and a target for the country’s successive repressive regimes, which have used antiterrorism laws to silence critics by throwing them in jail and depriving them of due process and other basic human rights.

Alaa is a symbol for the principle of free speech in a region of the world where speaking out for justice and human rights is dangerous and using the power of technology to build community is criminalized. But he has also come to symbolize the oppression and cruelty with which the Egyptian government treats those who dare to speak out against authoritarianism and surveillance.

Egyptian authorities’ relentless, politically motivated pursuit of Alaa is an egregious display of abusive police power and lack of due process. He was first arrested and detained in 2006 for participating in a demonstration. He was arrested again in 2011 on charges related to another protest. In 2013 he was arrested and detained on charges of organizing a protest. He was eventually released in 2014, but imprisoned again after a judge found him guilty in absentia.

What diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family? - David Lammy

That same year he was released on bail, only to be re-arrested when he went to court to appeal his case. In 2015 he was sentenced to five years in prison and released in 2019. But he was re-arrested in a massive sweep of activists in Egypt while on probation and charged with spreading false news and belonging to a terrorist organization for sharing a Facebook post about human rights violations in prison. He was sentenced in 2021, after being held in pre-trial detention for more than two years, to five years in prison. September 29 will mark five years that he has spent behind bars.

While he has been in prison, an anthology of his writing, translated into English by anonymous supporters, was published in 2021 as You Have Not Yet Been Defeated. That December, he became a British citizen through his mother, the rights activist and mathematician Laila Soueif.

Protesting his conditions, Alaa shaved his head and went on hunger strike beginning in April 2022. As he neared the third month of his hunger strike, former UK foreign secretary Liz Truss said she was working hard to secure his release. Similarly, then-PM Rishi Sunak wrote in a letter to Alaa’s sister, Sanaa Seif, that “the government is deeply committed to doing everything we can to resolve Alaa's case as soon as possible."

David Lammy, then a Member of Parliament and now Britain’s foreign secretary, asked Parliament in November 2022, “what diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family?” Lammy joined Alaa’s family during a sit-in outside of the Foreign Office.

When the UK government’s promises failed to come to fruition, Alaa escalated his hunger strike in the runup to the COP27 gathering. At the same time, a coordinated campaign led by his family and supported by a number of international organizations helped draw global attention to his plight, and ultimately led to improved prison conditions and family visitation rights.

But although Alaa’s conditions have improved and his family visitation rights have been secured, he remains wrongfully imprisoned, and his family fears that the Egyptian government has no intention of releasing him.

With Lammy now serving as foreign secretary and a new Labour government in place, there is renewed hope for Alaa’s release. Keir Starmer, the new prime minister, has voiced his support for freeing him.

The new government must make good on its pledge to defend British values and interests by advocating for the release of British citizen Alaa Abd El Fattah. We encourage British citizens to write to their MP (external link) and advocate for his release. His continued detention is indefensible. Egypt should face the soles of shoes around the world until Alaa is freed.

Karen Gullo

School Monitoring Software Sacrifices Student Privacy for Unproven Promises of Safety

3 months ago

Imagine your search terms, keystrokes, private chats, and photographs being monitored every time they are sent. Millions of students across the country don’t have to imagine this deep surveillance of their most private communications: it’s a reality that comes with their school districts’ decision to install AI-powered monitoring software such as Gaggle and GoGuardian on students’ school-issued machines and accounts. As we demonstrated with our own Red Flag Machine, however, this software flags and blocks websites for spurious reasons and often disproportionately targets disadvantaged, minority, and LGBTQ youth.
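To see why context-blind flagging goes wrong, consider a deliberately naive sketch of the approach (illustrative only; vendors like Gaggle keep their actual models secret, and this is not their code):

    # A deliberately naive keyword flagger, for illustration only. Real
    # products keep their models secret; this sketch just shows why
    # matching words without context produces false positives.
    FLAG_TERMS = {"suicide", "drugs", "gun", "nudity"}

    def flag(text):
        """Return any flag terms found in the text, ignoring all context."""
        words = {w.strip(".,!?\"'").lower() for w in text.split()}
        return FLAG_TERMS & words

    # A health-class essay trips the same wires a genuine crisis would:
    essay = "My report covers suicide prevention hotlines and the war on drugs."
    print(flag(essay))  # flags 'suicide' and 'drugs', with no human context attached

However sophisticated the real models are, any system that scores words and images without understanding intent will sweep up exactly this kind of innocuous material.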

The companies making the software claim it’s all done for the sake of student safety: preventing self-harm, suicide, violence, and drug and alcohol abuse. That is a noble goal, and suicide is the second leading cause of death among American youth aged 10-14; yet no comprehensive or independent studies have shown an increase in student safety linked to the use of this software. Quite the contrary: a recent comprehensive RAND research study shows that such AI monitoring software may cause more harm than good.

That study also found that how to respond to alerts is left to the discretion of the school districts themselves. Due to a lack of resources to deal with mental health, schools often refer these alerts to law enforcement officers who are untrained and ill-equipped to deal with youth mental health crises. When police respond to youth who are having such episodes, the resulting encounters can lead to disastrous results. So why are schools still using the software, when a congressional investigation found a need for “federal action to protect students’ civil rights, safety, and privacy”? Why are they trading away their students’ privacy for a dubious-at-best marketing claim of safety?

Experts suggest it's because these supposed technical solutions are easier to adopt than the effective social measures that schools often lack the resources to implement. I spoke with Isabelle Barbour, a public health consultant who has experience working with schools to implement mental health supports. She pointed out that there are considerable barriers to families, kids, and youth accessing health care and mental health supports at a community level. There is also a lack of investment in supporting schools to effectively address student health and well-being. This leads to a situation where many students come to school with unmet needs, and those needs impact their ability to learn. Although there are clear and proven measures that work to address the burdens youth face, schools often need support (time, mental health expertise, community partners, and a budget) to implement these measures. Edtech companies market largely unproven plug-and-play products to educational professionals who are stretched thin and seeking a path forward to help kids. Is it any wonder that schools sign contracts which are easy to point to when questioned about what they are doing about the youth mental health epidemic?

One example: Gaggle, in its marketing to school districts, claims to have saved 5,790 student lives between 2018 and 2023, according to shaky metrics it designed itself. All the while, it keeps the inner workings of its AI monitoring secret, making it difficult for outsiders to scrutinize and measure its effectiveness.

We give Gaggle an “F”

Reports of the AI flagging’s errors and inability to understand context keep popping up. When the Lawrence, Kansas school district signed a $162,000 contract with Gaggle, no one batted an eye: it joined a growing number of school districts (currently ~1,500) nationwide using the software. Then, school administrators called in nearly an entire class to explain photographs Gaggle’s AI had labeled as “nudity,” because the software wouldn’t tell them:

“Yet all students involved maintain that none of their photos had nudity in them. Some were even able to determine which images were deleted by comparing backup storage systems to what remained on their school accounts. Still, the photos were deleted from school accounts, so there is no way to verify what Gaggle detected. Even school administrators can’t see the images it flags.”

Young journalists within the school district raised concerns about how Gaggle’s surveillance of students impacted their privacy and free speech rights. As journalist Max McCoy points out in his article for the Kansas Reflector, “newsgathering is a constitutionally protected activity and those in authority shouldn’t have access to a journalist’s notes, photos and other unpublished work.” Despite having renewed Gaggle’s contract, the district removed the surveillance software from the devices of student journalists. Here, a successful awareness campaign resulted in a tangible win for some of the students affected. While ad-hoc protections for journalists are helpful, more is needed to honor all students' fundamental right to privacy against this new front of technological invasions.

Tips for Students to Reclaim their Privacy

Students struggling with the invasiveness of school surveillance AI may find some reprieve by taking measures and forming habits to avoid monitoring. Some considerations:

  • Consider any school-issued device a spying tool. 
  • Don’t try to hack or remove the monitoring software unless specifically allowed by your school: it may result in significant consequences from your school or law enforcement. 
  • Instead, turn school-issued devices completely off when they aren’t being used, especially while at home. This will prevent the devices from activating the camera, microphone, and surveillance software.
  • If not needed, consider leaving school-issued devices in your school locker: this will avoid depending on these devices to log in to personal accounts, which will keep data from those accounts safe from prying eyes.
  • Don’t log in to personal accounts on a school-issued device, if you can avoid it (we understand that sometimes a school-issued device is the only computer a student has). Rather, use a personal device for all personal communications and accounts (e.g., email, social media). Maybe your personal phone is the only device you have to log in to social media and chat with friends. That’s okay: keeping separate devices for separate purposes will reduce the risk that your data is leaked or surveilled.
  • Don’t log in to school-controlled accounts or apps on your personal device: that can be monitored, too. 
  • Instead, create another email address on a service the school doesn’t control which is just for personal communications. Tell your friends to contact you on that email outside of school.

Finally, voice your concern and discomfort with such software being installed on devices you rely on. There are plenty of resources to point to, many linked to in this post, when raising concerns about these technologies. As the young journalists at Lawrence High School have shown, writing about it can be an effective avenue to bring up these issues with school administrators. At the very least, it will send a signal to those in charge that students are uncomfortable trading their right to privacy for an elusive promise of security.

Schools Can Do Better to Protect Students’ Safety and Privacy

It’s not only the students who are concerned about AI spying in the classroom and beyond. Parents are often unaware of the spyware deployed on school-issued laptops their children bring home. And when using a privately-owned shared computer logged into a school-issued Google Workspace or Microsoft account, a parent’s web search will be available to the monitoring AI as well.

New studies have uncovered some of the mental detriments that surveillance causes. Despite this and the array of First Amendment questions these student surveillance technologies raise, schools have rushed to adopt these unproven and invasive technologies. As Barbour put it: 

“While ballooning class sizes and the elimination of school positions are considerable challenges, we know that a positive school climate helps kids feel safe and supported. This allows kids to talk about what they need with caring adults. Adults can then work with others to identify supports. This type of environment helps not only kids who are suffering with mental health problems, it helps everyone.”

We urge schools to focus on creating that environment, rather than subjecting students to ever-increasing scrutiny through school surveillance AI.

Bill Budington

You Really Do Have Some Expectation of Privacy in Public

3 months ago

Being out in the world advocating for privacy often means having to face a chorus of naysayers and nihilists. When we spend time fighting the expansion of Automated License Plate Readers capable of tracking cars as they move, or the growing ubiquity of both public and private surveillance cameras, we often hear a familiar refrain: “you don’t have an expectation of privacy in public.” This is not true. In the United States, you do have some expectation of privacy—even in public—and it’s important to stand up and protect that right.

How is it possible to have an expectation of privacy in public? The answer lies in the rise of increasingly advanced surveillance technology. When you are out in the world, of course you are going to be seen, so your presence will be recorded in one way or another. There’s nothing stopping a person from observing you if they’re standing across the street. If law enforcement has decided to investigate you, they can physically follow you. If you go to the bank or visit a courthouse, it’s reasonable to assume you’ll end up on their individual video security system.

But our ever-growing network of sophisticated surveillance technology has fundamentally transformed what it means to be observed in public. Today’s technology can effortlessly track your location over time, collect sensitive, intimate information about you, and keep a retrospective record of this data that may be stored for months, years, or indefinitely. This data can be collected for any purpose, or even for none at all. And taken in the aggregate, this data can paint a detailed picture of your daily life—a picture that is more cheaply and easily accessed by the government than ever before.

Because of this, we’re at risk of exposing more information about ourselves in public than we were in decades past. This, in turn, affects how we think about privacy in public. While your expectation of privacy is certainly different in public than it would be in your private home, there is no legal rule that says you lose all expectation of privacy whenever you’re in a public place. To the contrary, the U.S. Supreme Court has emphasized since the 1960s that “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” The Fourth Amendment protects “people, not places.” U.S. privacy law instead typically asks whether your expectation of privacy is something society considers “reasonable.”

This is where mass surveillance comes in. While it is unreasonable to assume that everything you do in public will be kept private from prying eyes, there is a real expectation that when you travel throughout town over the course of a day—running errands, seeing a doctor, going to or from work, attending a protest—that the entirety of your movements is not being precisely tracked, stored by a single entity, and freely shared with the government. In other words, you have a reasonable expectation of privacy in at least some of the uniquely sensitive and revealing information collected by surveillance technology, although courts and legislatures are still working out the precise contours of what that includes.

In 2018, the U.S. Supreme Court decided a landmark case on this subject, Carpenter v. United States. In Carpenter, the court recognized that you have a reasonable expectation of privacy in the whole of your physical movements, including your movements in public. It therefore held that the defendant had an expectation of privacy in 127 days’ worth of accumulated historical cell site location information (CSLI). The records that make up CSLI data can provide a comprehensive chronicle of your movements over an extended period of time by using the cell site location information from your phone. Accessing this information intrudes on your private sphere, and the Fourth Amendment ordinarily requires the government to obtain a warrant in order to do so.

Importantly, you retain this expectation of privacy even when those records are collected while you’re in public. In coming to its holding, the Carpenter court wished to preserve “the degree of privacy against government that existed when the Fourth Amendment was adopted.” Historically, we have not expected the government to secretly catalogue and monitor all of our movements over time, even when we travel in public. Allowing the government to access cell site location information contravenes that expectation. The court stressed that these accumulated records reveal not only a person’s particular public movements, but also their “familial, political, professional, religious, and sexual associations.”

As Chief Justice John Roberts said in the majority opinion:

“Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection. Whether the Government employs its own surveillance technology . . . or leverages the technology of a wireless carrier, we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through [cell phone site data]. The location information obtained from Carpenter’s wireless carriers was the product of a search. . . .

As with GPS information, the time-stamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his “familial, political, professional, religious, and sexual associations.” These location records “hold for many Americans the ‘privacies of life.’” . . .  A cell phone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales. Accordingly, when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”

As often happens in the wake of a landmark Supreme Court decision, there has been some confusion among lower courts in trying to determine what other types of data and technology violate our expectation of privacy when we’re in public. There are admittedly still several open questions: How comprehensive must the surveillance be? How long of a time period must it cover? Do we only care about backward-looking, retrospective tracking? Still, one overall principle remains certain: you do have some expectation of privacy in public.

If law enforcement or the government wants to know where you’ve been all day long over an extended period of time, that combined information is considered revealing and sensitive enough that police need a warrant for it. We strongly believe the same principle also applies to other forms of surveillance technology, such as automated license plate reader camera networks that capture your car’s movements over time. As more and more integrated surveillance technologies become the norm, we expect courts will expand existing legal decisions to protect this expectation of privacy.

It's crucial that we do not simply give up on this right. Your location over time, even if you are traversing public roads and public sidewalks, is revealing. More revealing than many people realize. If you drive from a specific person’s house to a protest, and then back to that house afterward—what can police infer from having those sensitive and chronologically expansive records of your movement? What could people insinuate about you if you went to a doctor’s appointment at a reproductive healthcare clinic and then drove to a pharmacy three towns away from where you live? Scenarios like this involve people driving on public roads or being seen in public, but we also have to take time into consideration. Tracking someone’s movements all day is not nearly the same thing as seeing their car drive past a single camera at one time and location.
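To make the stakes concrete, here is a toy sketch with entirely made-up data, showing how little code it takes to turn scattered, timestamped sightings into an inferred daily itinerary:

    # Toy illustration with made-up data: aggregated, timestamped location
    # records turn into an intimate itinerary, even though each individual
    # sighting happened "in public."
    from collections import Counter

    # (hour of day, place) pairs, as an ALPR or cell-site log might record them
    sightings = [
        (8, "home"), (9, "reproductive health clinic"),
        (11, "pharmacy three towns away"), (13, "home"),
        (18, "protest"), (21, "home"),
    ]

    visits = Counter(place for _, place in sightings)
    print("inferred home base:", visits.most_common(1)[0][0])
    for hour, place in sightings:
        print(f"{hour:02d}:00  seen near {place}")
    # Any one sighting reveals little; the aggregate tells a story.

Each line of that log is an unremarkable public observation; the pattern across a day, or 127 days, is what the Carpenter court recognized as sensitive.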

The law may still be catching up with the technology, but that doesn’t mean it’s a surveillance free-for-all just because you’re in public. The government still faces important restrictions on tracking our movements over time, even in public, and even if you find yourself out in the world walking past individual security cameras. This is why we do what we do: despite the naysayers, someone has to continue to hold the line and educate the world on why privacy isn’t dead.

Matthew Guariglia

EFF & 140 Other Organizations Call for an End to AI Use in Immigration Decisions

3 months ago

EFF, Just Futures Law, and 140 other groups have sent a letter to Secretary Alejandro Mayorkas demanding that the Department of Homeland Security (DHS) stop using artificial intelligence (AI) tools in the immigration system. For years, EFF has been monitoring and warning about the dangers of automated and so-called “AI-enhanced” surveillance at the U.S.-Mexico border. As we’ve made clear, algorithmic decision-making should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, deemed worthy of safe haven in the United States.

The letter is signed by a wide range of organizations, from civil liberties nonprofits to immigrant rights groups, to government accountability watchdogs, to civil society organizations. Together, we declared that DHS’s use of AI, defined by the White House as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” appeared to violate federal policies governing its responsible use, especially when it’s used as part of the decision-making regarding immigration enforcement and adjudications.

Read the letter here

The letter highlighted the findings from a bombshell report published by Mijente and Just Futures Law on the use of AI and automated decision-making by DHS and its sub-agencies, U.S. Citizenship and Immigration Services (USCIS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP). Despite laws, executive orders, and other directives to establish standards and processes for the evaluation, adoption, and use of AI by DHS—as well as DHS’s pledge that it “will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance or tracking of individuals”—the agency has seemingly relied on loopholes for national security, intelligence gathering, and law enforcement to avoid compliance with those requirements. This completely undermines any supposed attempt by the federal government to use AI responsibly and to contain the technology’s habit of merely digitizing and accelerating decisions based on preexisting biases and prejudices.

Even though AI is unproven in its efficacy, DHS has frenetically incorporated AI into many of its functions. These products are often the result of partnerships with vendors who have aggressively pushed the idea that AI will make immigration processing more efficient, more objective, and less biased.

Yet the evidence suggests otherwise, or is, at best, mixed.

As the report notes, studies, including those conducted by the government, have recognized that AI has often worsened discrimination due to the reality of “garbage in, garbage out.” This phenomenon was visible in Amazon’s use—and subsequent scrapping—of AI to screen résumés, which favored male applicants because the data on which the program had been trained included more applications from men. The same pitfall arises in predictive policing products, something EFF categorically opposes, which often “predict” crimes as more likely to occur in Black and Brown neighborhoods due to the prejudices embedded in the historical crime data used to design that software. Furthermore, AI tools are often deficient when used in complex contexts, such as the morass that is immigration law.
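The dynamic is easy to demonstrate. Below is a deliberately simplified sketch with synthetic data (not any real DHS system or vendor product), showing how a model trained on skewed historical decisions simply replays them:

    # Deliberately simplified, synthetic illustration of "garbage in,
    # garbage out": a frequency-based model trained on biased historical
    # decisions reproduces that bias as confident-looking predictions.
    from collections import Counter

    history = (
        [("group A", "approve")] * 80 + [("group A", "deny")] * 20 +
        [("group B", "approve")] * 20 + [("group B", "deny")] * 80
    )

    def predict(group):
        """Predict the most common historical outcome for this group."""
        outcomes = Counter(out for g, out in history if g == group)
        return outcomes.most_common(1)[0][0]

    print(predict("group A"))  # approve
    print(predict("group B"))  # deny -- yesterday's disparity, now automated

Real systems are vastly more complex, but the core problem is the same: a model optimized to match past decisions will inherit whatever discrimination those decisions contained.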

In spite of these grave concerns, DHS has incorporated AI decision-making into many levels of its operations without taking the necessary steps to properly vet the technology. According to the report, AI technology is part of USCIS’s process to determine eligibility for immigration benefits or relief, credibility in asylum applications, and the public safety or national security threat level of an individual. ICE uses AI to automate its decision-making on electronic monitoring, detention, and deportation.

At the same time, there is a disturbing lack of transparency regarding those tools. We urgently need DHS to be held accountable for its adoption of opaque and untested AI programs promulgated by those with a financial interest in the proliferation of the technology. Until DHS adequately addresses the concerns raised in the letter and report, the Department should be prohibited from using AI tools. 

Hannah Zhao