Science Rebels Take on Major Publishers

3 weeks 2 days ago

Last month, over 40 leading scientists resigned from the prestigious journal NeuroImage, protesting an inequitable publishing model built on gatekeeping and false scarcity.

Academic publishing is fundamental to the advancement of modern science. It facilitates expert collaboration and testing, ideally leading to new innovations, including life-saving medical research. Too often, however, cutting-edge research is trapped behind paywalls, effectively inaccessible to the people and institutions that cannot afford the fees. Many libraries, even at wealthy universities in the U.S. and the U.K., are drowning in constantly growing subscription costs. The result is stifled progress for everyone, with the inequity most sharply felt in poorer nations.

Publishers claim that they need those high fees to keep the operation running. Yet governments and institutions fund the actual research, unpaid volunteers review the articles, and, thanks to the internet, the cost of hosting and distributing journal articles is trivial. The result is that traditional publishing giants, such as RELX’s subsidiary Elsevier, extract exorbitant profit margins—wider than those of tech giants such as Apple, Google, and Amazon.

Academics have been pushing back on this absurd model for some time. Many institutions and researchers are boycotting these major publishers while investing in open access alternatives. Just last August, President Biden moved the fight forward by setting a deadline of 2025 for all publicly funded research to be immediately accessible to the public.

With this change in tide, publishers began to offer a so-called compromise: an open access ransom that requires researchers to pay publishing fees to make their work accessible. In the case of NeuroImage, the fee was $3,450, and the publisher refused pleas to reduce it. The result is that the academics whose work is being published must choose between covering these unreasonable fees themselves or having their work locked behind a paywall.

But academics refused to accept this outrageous policy. The journal's entire academic board, including esteemed professors from institutions such as Oxford University, King's College London, and Cardiff University, resigned en masse in response.

They called these charges what they are: unethical. This collective act of defiance sends a powerful message that the academic community will no longer tolerate being preyed upon by publishers that value excessive profits over the advancement of science.

EFF supports the call on scientists to turn their backs on publishing giants like Elsevier and seek publication in non-profit open access journals instead. By collectively withdrawing the work that props up these exploitative publishers, the academic community can create a seismic shift in the publishing landscape toward a long-overdue model that supports equity and data access.

Rory Mir

Court Accepts EFF’s Amicus Brief on the Right to Publish Code in Tornado Cash Case

3 weeks 2 days ago

Protecting the First Amendment rights of coders to develop and publish code is a core EFF value. It’s also one where we’ve played a central role in developing the law. So, we were happy that the court in the Tornado Cash lawsuit dismissed a government objection and accepted our amicus brief in support of the plaintiffs.

The case, Van Loon v. Department of the Treasury, arises from the U.S. Treasury Department’s decision to put Tornado Cash, an open-source project, on its Specially Designated Nationals (SDN) list. As a result of that listing, the project was taken down from GitHub, and developers were deeply concerned that they might face charges and fines simply for participating in it.

Our brief draws on decades of legal and practical experience with open-source developers to explain why putting the open-source project on a sanctions list could set a dangerous precedent and violate First Amendment protections for code development. 

EFF long ago helped establish that publishing computer code is protected speech under the First Amendment. As we explained to the court, whenever the government seeks to restrict our speech—including code—based upon its content, its actions are subject to the highest level of scrutiny and must use the least speech-restrictive means available. The Treasury Department’s Office of Foreign Assets Control (OFAC) failed this test when it included the open-source project in placing “Tornado Cash” on the SDN list.

By way of background, the SDN designation made it a crime to interact with any part of Tornado Cash, which the government knew included the open-source project published and collectively developed by dozens of developers on GitHub. The SDN listing not only chilled those developers’ speech, it also understandably caused GitHub to remove the Tornado Cash code from its website. Importantly, blocking access to the GitHub repository was not necessary to address concerns about the approximately one-third of actual uses of the Tornado Cash software on the Ethereum blockchain that were allegedly illegal.

We understand and support the important government interests the SDN list is intended to address. But no matter how justifiable its goals, the government’s actions still must be properly tailored to avoid causing fear, confusion, and censorship of scientific development and academic exchange. OFAC did not do so for Tornado Cash. For example, as we explained to the court, the removal of the code from GitHub hindered our client, Professor Matthew Green, from using it in his classes at the Johns Hopkins Information Security Institute that study privacy-enhancing technologies.

And the government’s subsequent actions didn’t remedy matters. After we wrote to the Treasury Department about our concerns, the agency did not narrow its definition in the SDN order, but instead added in its public FAQs that U.S. persons would not be prohibited from copying open-source code and making it available online for others to view, “as well as discussing, teaching about, or including open-source code in written publications, such as textbooks, absent additional facts.”

While this announcement removed the immediate threat of legal action against Professor Green and others for specific educational uses (basic copying, making the extant code available, and teaching it), it did not resolve the chilling effects of the listing on open-source developers or GitHub. This is especially true since OFAC did not specify what “additional facts” might create serious criminal liability. As a result, developers still do not have clear notice about whether they can, for instance, take a piece of the code and use it in another program, including another kind of mixer. Predictably, there has been little, if any, further development or use of the open-source project.

Open-source developers and GitHub are owed more under the First Amendment. That the code may later be used by others to take illegal actions does not undermine the protections for their speech. The SDN listing creates serious civil and criminal penalties, yet the government has still provided no clear guidance as to what would run afoul of its prohibition on Tornado Cash.

At the same time, the government has demonstrated that it could clarify what the listing means for open-source developers and that less restrictive alternatives exist. It had the ability to tailor its prohibition to its legitimate concerns about, for instance, the Tornado Cash mixer being used to launder Ethereum coins from a North Korean hacking group. The government could, at the very least, have clarified at the outset that it would not be applied to the open-source project hosted on GitHub or that it would only be applied to actual transactions, not the publication of the code itself. 

There are many other important and interesting legal arguments in the case, and the parties in the lawsuit and other amici have addressed them. EFF focused on this issue because it is foundational to a free and open Internet, with ramifications including, but also far beyond, cryptocurrency and mixers.

We are urging the court to recognize these actions for what they are—unconstitutional violations of the First Amendment—and hold the government to the appropriate higher standard. 

Cindy Cohn

As Platforms Decay, Let’s Put Users First

3 weeks 3 days ago

The net’s long decline into “five giant websites, each filled with screenshots of the other four” isn’t a mystery. Nor was it by any means a foregone conclusion. Instead, we got here through a series of conscious actions by big businesses and lawmakers that put antitrust law into a 40-year coma. Well, now antitrust is rising from its slumber, and we have work for it to do.

As regulators and lawmakers think about making the internet a better place for human beings, their top priority should be restoring power to users. The internet’s promise was that it would remove the barriers that stood in our way: distance, sure, but also the barriers thrown up by large corporations and oppressive states. But companies gained a toehold in that environment of lowered barriers, turned right around, and put up fresh barriers of their own. That trapped billions of us on platforms that many of us do not like but feel we can’t leave.

Platforms follow a predictable lifecycle: first, they offer their end-users a good deal. Early Facebook users got a feed consisting solely of updates from the people they cared about, and promises of privacy. Early Google searchers got result screens filled with Google’s best guess at what they were searching for, not ads. Amazon once made it easy to find the product you were looking for, without making you wade through five screens’ worth of “sponsored” results. 

The good deal for users is only temporary. Platforms today use a combination of tools to lock their users in, including taking advantage of collective action problems, “Most Favored Nation” clauses, collusive back-room deals to block competitors, computer crime laws, and Digital Rights Management. Once those users are firmly in hand, the platforms degrade what made users choose them in the first place, making the deal worse for users in order to attract business customers. So instead of showing you the things you asked for, platforms sell your time and attention to businesses.

For example, Facebook broke its promise not to spy on users, created a massive commercial surveillance system, and sold cheap, reliable targeting to advertisers. Google broke its promise not to pollute its search engine with ads and offered great deals to advertisers. Amazon offered below-cost shipping and returns to platform sellers and later shifted the cost onto those sellers. YouTube offered reliable, lucrative income streams to performers and many responded by building their businesses on the platform. 

The good deal for business customers is no more permanent than the good deal for end users; once business customers are likewise dependent on a platform, the platform’s generosity ends and it starts clawing back value for its shareholders. Facebook rigs its ad market to rip off publishers and advertisers. Apple hikes the fees it charges app makers, and Amazon follows suit, until more than half the price of the third-party goods you buy on Amazon is consumed by junk fees. Google fires 12,000 workers after a stock buyback that would have paid all their salaries for the next 27 years, even as its search quality degrades and its results pages are overrun by fraud. A platform where nearly all the value has been withdrawn from users and business customers lives in a state of fragile equilibrium: a cycle of degrading service, regulatory scrutiny over privacy violations, moderation scandals, and other highly public failings, and a widening gyre of bad press. Its only value is its monopoly.

Amidst this bumper crop of tech scandals, lawmakers and regulators are seeking ways to protect internet users. That’s a good priority to have: platforms will rise and fall. They should, in fact, if they fail to offer anything of actual value to their users. They don’t need our protection. It’s users we should be thinking of. 

Protecting users from platform degradation starts with giving users control: control over their digital lives. Users deserve to be protected from deceptive and abusive platform rules, and they deserve alternatives to the platforms they use now. Finally, users deserve the right to use those alternatives without paying a heavy price imposed by artificial technical or legal barriers to switching.

Two Big Picture Principles For Protecting Users From Platform Degradation

We're in the midst of a long-overdue move to regulate and legislate over platform abuses. Each platform has its own technical ins-and-outs, and so any policy to protect users of a given platform must employ a finely detailed analysis to make sure it does what it’s supposed to do. Experts like EFF live in these details, but ultimately even those details are determined by big picture concerns. 

Below, we set out two of those big-picture principles that we think are both critical to protecting users and, compared to many other options, easier to implement.  

How these principles are implemented could vary—they may be embodied directly in legislation if done carefully, enforced in specific settlements with regulators or in litigation, or, better yet, voluntarily adopted by platforms, technologists, or developers, or enforced by investors or other kinds of funders (nonprofit or municipal, for instance). They are intended as a framework for evaluating those fine-grained solutions: not to judge whether they are technically effective, but whether they are effective ways to build a public interest internet.

Principle 1: End-to-End: Connecting Willing Listeners With Willing Speakers

The End-to-End Principle is a bedrock idea underpinning the internet: the role of a network is to reliably deliver data from willing senders to willing receivers. This idea has taken on many guises over the decades since it was formalized in 1981. A familiar example is network neutrality, which holds that your ISP should send you the data you ask for as quickly and reliably as it can. A neutral ISP makes the same best effort to deliver the videos you request, while a non-neutral ISP might reserve best-effort delivery for the videos from its own affiliated streaming service, slowing down the videos served by its rivals.

You signed up with your ISP to get the content and connections you want, not the ones the ISP’s investors wish you’d asked for. 

We think that a version of the end-to-end principle has a role to play in the “service layer” of the internet as well, whether that’s in social media, search, e-commerce, or email. Some examples include:

  • Social media: If you subscribe to someone’s feed, you should see the things they post. Performers shouldn’t have to outguess and work around the opaque rules of a content recommendation system to get the fruits of their creative labor in front of the people who asked to see them. Recommendation systems have a place, but there should always be a way for social media users to see the updates posted by the people they care enough about to follow, without having to wade through posts by people the platform wants (or is being paid to) promote.
  • Search: If a search engine has an exact match for the thing you’re searching for (for example, a verified local merchant listing, a single document with the exact title you’re seeking, or a website whose name matches your search term), that result should be at the top of the results screen—not multiple ads for lookalike businesses, or worse, scam sites pretending to be lookalike businesses and paying to go above the best match.
  • E-Commerce: If an e-commerce platform has an exact match for the product you’ve searched for—either by name or part/model number—that result should be the top result for your search, above the platform’s own equivalent products, or “sponsored” results for lookalike products.
  • Email: If you mark a sender as trusted, their email should never go to a spam folder (however, it’s fine to add warnings about malicious attachments or links to messages flagged by scanners). It should be easy to mark a sender as trusted. Senders in your address book should automatically be trusted.
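The email rule above can be sketched in a few lines. Everything here (the `Message` shape, `route_message`, the spam-score threshold) is a hypothetical illustration of the principle, not any real mail server's API:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    spam_score: float  # 0.0 (clean) to 1.0 (certain spam)
    has_suspicious_attachment: bool = False

def route_message(msg: Message, trusted: set[str],
                  threshold: float = 0.8) -> tuple[str, list[str]]:
    """Return (folder, warnings). Mail from trusted senders always lands in
    the inbox; a scanner may attach a warning, but never redirects it to spam."""
    warnings = []
    if msg.has_suspicious_attachment:
        warnings.append("attachment flagged by scanner")
    if msg.sender in trusted:
        # End-to-end: a willing sender reaches a willing receiver.
        return ("inbox", warnings)
    if msg.spam_score >= threshold:
        return ("spam", warnings)
    return ("inbox", warnings)
```

The design point is the ordering: the trusted-sender check runs before the spam-score check, so the scanner can annotate a message but never override the user's expressed choice.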
Principle 2: Right of Exit: Treating Bad Platforms As Damage and Routing Around Them

Plenty of people don’t like big platforms but feel they can’t leave. Social media users are locked in thanks to the “collective action problem” of convincing all their friends to leave and agreeing on where to go. Performers and creators are locked in because their audiences can’t follow them to new platforms without losing the media they’ve paid for (and audiences can’t leave because the creators they enjoy are stuck on the platforms). 

Making it easier for platform users to go elsewhere has two important effects. First, it disciplines platform owners who are tempted to shift value from users to themselves: they know that making their platforms worse, such as by allowing harassment and scams, increasing surveillance, raising prices, or accepting invasive advertising, will precipitate a mass exodus of users who can leave without paying a high price.

Just as important: if platforms aren’t disciplined by this threat, then users can leave, treating the bad platform as damage and routing around it. That is the part of the free market that these companies always try to forget: consumers are supposed to be able to vote with their feet. But if you will lose contact with your friends and family if you leave a terrible service, you can't really choose the better option. 

What’s more, a world where hopping platforms is easy is a world where tinkerers, co-ops, and startups have a reason to build alternatives that users can hop to.

Social media: “Interoperable” social media platforms connect to one another, allowing users to exchange messages and participate in communities from “federated” servers that each have their own management, business model, and policies. Services based on the open ActivityPub standard (like Mastodon) are designed to make switching easy: users need only export their list of followers and the accounts they follow and upload them to another server, and all those social connections are re-established with just a few clicks (Bluesky, a new service that boasts of its federation capabilities but is thus far limited to a single server, has a similar function). This ease of switching makes users less reliant on server operators. If your server operator fosters a hostile community, or simply pulls the plug on their server, you can easily re-establish yourself on any other server without sacrificing your social connections. Existing data-protection laws like the CCPA and GDPR already require online service providers to turn over your data on demand; that data should include the files needed to switch from one server to another. Regulators seeking to improve the moderation practices of large social media platforms should also be securing the right of exit for platform users. Sure—let’s make the big platforms better, but let’s also make it easier to walk away from them.
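To illustrate how portable those social connections are: Mastodon's export is a plain CSV file of account addresses. The sketch below assumes the first column holds addresses in `user@server` form (the exact columns vary across Mastodon versions), and the function name is our own, not part of any Mastodon tooling:

```python
import csv
import io

def read_follow_list(csv_text: str) -> list[str]:
    """Extract account addresses from a Mastodon-style follow-list CSV export."""
    rows = csv.reader(io.StringIO(csv_text))
    next(rows, None)  # skip the header row
    return [row[0] for row in rows if row and "@" in row[0]]

# Hypothetical export contents, mimicking the CSV shape described above.
sample = """Account address,Show boosts
alice@mastodon.example,true
bob@social.example,false
"""
print(read_follow_list(sample))  # → ['alice@mastodon.example', 'bob@social.example']
```

Because the format is this simple, re-establishing a social graph on a new server is a matter of uploading one small file, which is exactly what makes the right of exit cheap to support.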

DRM-locked media: Most of the media sold by online stores is encumbered with “Digital Rights Management” technology that prevents the people who buy it from playing it back using unauthorized tools. This locks audiences to platforms because breaking up with the platform means throwing away the media you’ve purchased. It also locks performers and creators to those platforms, unable to switch to rivals that treat them better and pay them more, because their audiences can’t follow them without forfeiting those earlier purchases. Applied to media, the “right of exit” would require platforms to facilitate communications between buyers and creators of media. A creator who switched to a rival platform could use this facility to provide all purchasers of their works with download codes for a new platform, which would be forwarded by the old platform to the creator’s customers. Likewise, customers who switched to another store could send messages via the platform to creators asking for download codes on the new service.

What Do These Two Principles Give Us? Reasonable Administrability

For decades, failures in tech regulation have been blamed on technology’s speed and regulation’s slowness—we’re told that regulators just can’t keep up with tech.

But tech regulation doesn’t have to be slow. Some proposed tech regulations—like rules requiring platforms to provide full explanations for content moderation and account suspension decisions, or to prevent bullying and harassment—will always be slow, because they are “fact-intensive.” 

A rule requiring action on harassment needs: a definition of harassment, an assessment of whether a given action constitutes harassment, and an assessment of whether the response taken was sufficient under the rule. These are the kinds of things that people of good faith could argue about for years, and that people of bad faith could argue about forever.

Furthermore, even once definitions of these things are agreed on, it will take a long time to see the effects in practice and evaluate if they are working as intended.

By contrast, “end-to-end” and “right of exit” are easy to administer, because you will know instantly if the principles are being followed. If we tell social media platforms that they must deliver posts to your followers, we can tell whether that’s happening by making some test posts and checking whether they’re delivered. Same for a rule requiring search tools and e-commerce sites to prioritize exact matches—just do a search and see if the exact match is at the top of the screen.
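That kind of check can even be automated. The sketch below is a hypothetical compliance probe for the "exact match first" rule; the result format and function name are illustrative, not any real search or e-commerce API:

```python
def exact_match_is_first(query: str, results: list[dict]) -> bool:
    """True if the top result's title exactly matches the query
    (case-insensitive), i.e. the platform put the best match first."""
    if not results:
        return False
    top_title = results[0].get("title", "")
    return top_title.strip().lower() == query.strip().lower()

# Illustrative results page: the exact match sits above a sponsored lookalike.
results = [
    {"title": "Acme Widget 3000", "sponsored": False},
    {"title": "Acme Widget 3000 (lookalike)", "sponsored": True},
]
print(exact_match_is_first("acme widget 3000", results))  # → True
```

A regulator, researcher, or even a browser extension could run probes like this at scale; no fact-intensive adjudication is needed to tell whether the rule is being followed.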

Likewise for right-of-exit: if a Mastodon user claims they weren’t given the data needed to transfer to another server, the question can be easily settled by the old server owner handing over that data. 

There are many important policy priorities that are fact-intensive and hard to administer. The more administrable our other policies are, the more capacity we’ll have to dive into those urgent, gnarly problems.

Avoiding the Creation of Capital Moats

Rules that are expensive to comply with can be a gift to big companies. If we make copyright filters mandatory for online services, we are effectively saying that no one can create an online service unless they have $100 million lying around to stand up a filter. Or we are saying that startups should have to pay those companies rents to use their filter technology. If we make rules that assume that every online service is a Big Tech company, we make it impossible to shrink Big Tech.

End-to-end and right of exit are cheap policies, cheap enough to apply to small companies, individual hobbyists, and also very large, incumbent firms.

Services like Mastodon are already end-to-end: complying with an end-to-end rule would simply require Mastodon server operators not to rip out the existing end-to-end feature; rather, any recommendation or ranking system for Mastodon would have to exist alongside the current system. For large services like Instagram, Facebook, and Twitter, end-to-end would mean restoring the simplest form of feed: a feed consisting only of the accounts you actively chose to follow.

Same for right of exit: this is already supported in modern, federated systems, so no new work would have to be done by people running these server tools; rather, their only burden would be to respond in a timely fashion to user requests for their data. These are handled automatically, but might require manual work if the operator decides to kick a user off the service or shut the service down.

For Big Tech incumbents, adding a right of exit means implementing an open standard that already has reference libraries and implementations. 

A Public-Interest Internet

We want a web where users are in control. That means a web where we freely choose our online services from a wide menu and stay with them because we like them, not because we can’t afford to leave. We want a web where you get the things you ask for, not the things that corporate shareholders would prefer that you’d asked for. We want a web where willing listeners and willing speakers, willing sellers and willing buyers, willing makers, and willing audiences are all able to transact and communicate without worrying about their relationships being held hostage or disrupted to cram “sponsored posts” into their eyeballs.

Platform decay is the result of firms undisciplined by either competition or regulation and thus free to abuse their users and business customers without the fear of defection or punishment. Creating policies to give platform users a fair shake and the ability to leave will require a lot of attention to detail—but all that detail needs to be guided by principles, north stars that help keep us on the path to a better internet.

Cory Doctorow

Suit by Renowned Saudi Human Rights Activist Details Harms Caused by Export of U.S. Cybersurveillance Technology and Training to Repressive Regimes

3 weeks 4 days ago

“Companies that employ spyware on behalf of oppressive governments must be held accountable for the resulting human rights abuses.”

PORTLAND, OR — The Electronic Frontier Foundation (EFF), the Center for Justice & Accountability (CJA), and Foley Hoag LLP on Monday filed an amended complaint with the U.S. District Court for the District of Oregon on behalf of renowned Saudi human rights activist Loujain Alhathloul against three former members of the U.S. national security establishment and their former employer, DarkMatter Group, an Emirati cyber-surveillance company. 

“With authoritarianism encroaching around the globe, we must be more vigilant than ever in protecting human rights advocates from threats to their digital security,” said EFF Civil Liberties Director David Greene. “Companies that employ spyware on behalf of oppressive governments must be held accountable for the resulting human rights abuses.”

For the past decade, Ms. Alhathloul, a nominee for the 2019 and 2020 Nobel Peace Prize, has been a powerful advocate for women’s rights in Saudi Arabia. She was at the forefront of the public campaign advocating for women’s right to drive in Saudi Arabia and has been a vocal critic of the country’s male guardianship system. 

The amended complaint alleges that DarkMatter Group, an arm of the United Arab Emirates (UAE) security services, recruited defendants Baier, Adams, and Gericke, former members of the U.S. national security establishment, to target perceived dissidents as part of the UAE’s broader cooperation with Saudi Arabia. Defendants utilized U.S. cybersurveillance technology, along with their U.S. intelligence training, to assist the UAE security services’ persecution of Ms. Alhathloul, as well as that of other human rights activists, by hacking into her iPhone, surveilling her movements, and exfiltrating her confidential communications. Following the hack, Ms. Alhathloul was arbitrarily detained by the UAE’s security services and forcibly rendered to Saudi Arabia, where she was imprisoned and tortured. Today, Ms. Alhathloul is no longer in prison, but she is currently subject to a travel ban and unable to leave Saudi Arabia. 

“Our sister Loujain has been through an unimaginable ordeal for her defense of women’s rights. She has been tortured and sentenced as a terrorist, kidnapped, and forcibly disappeared. All these violations have taken place with the use of spyware technologies,” said Alia Alhathloul, Loujain’s sister. “This trial is of the utmost importance to seek redress and justice, as Loujain and her activism should be celebrated – not repressed.”

CJA Senior Staff Attorney Claret Vargas stated, “This lawsuit shows the very real human rights violations that can occur when former U.S. intelligence and military officials sell their knowledge and services to foreign oppressive regimes, which use these tools to carry out their repressive policies.” 

For the amended complaint:

For more on this case: 

Contacts: EFF Civil Liberties Director David Greene,

Josh Richman

The Kids Online Safety Act is Still A Huge Danger to Our Rights Online

1 month ago

Congress has resurrected the Kids Online Safety Act (KOSA), a bill that would increase surveillance and restrict access to information in the name of protecting children online. KOSA was introduced in 2022 but failed to gain traction, and today its authors, Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN), have reintroduced it with slight modifications. Though some of these changes were made in response to over 100 civil society organizations and LGBTQ+ rights groups’ criticisms of the bill, its latest version is still troubling. Today’s version of KOSA would still require surveillance of anyone sixteen and under. It would put the tools of censorship in the hands of state attorneys general, and would greatly endanger the rights, and safety, of young people online. And KOSA’s burdens will affect adults, too, who will likely face hurdles to accessing legal content online as a result of the bill.



KOSA Still Requires Filtering and Blocking of Legal Speech

Online child safety is a complex issue, but KOSA attempts to boil it down to a single solution. The bill holds platforms liable if their designs and services do not “prevent and mitigate” a list of societal ills: anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal behaviors. Additionally, platforms would be responsible for patterns of use that indicate or encourage addiction-like behaviors. 

What designs or services lead to these problems would primarily be left up to the Federal Trade Commission and 50 individual state attorneys general to decide. Ultimately, this puts platforms that serve young people in an impossible situation: without clear guidance regarding what sort of design or content might lead to these harms, they would likely censor any discussions that could make them liable. To be clear: though the bill’s language is about “designs and services,” a platform’s design is not what causes eating disorders. As a result, KOSA would make platforms liable for the content they show minors, full stop, based on vague requirements that any attorney general could, more or less, make up.

Attorneys General Would Decide What Content is Dangerous To Young People

KOSA’s co-author, Sen. Blackburn of Tennessee, has referred to education about race discrimination as “dangerous for kids.” Many states have agreed, and recently moved to limit public education about the history of race, gender, and sexuality discrimination. If KOSA passes, platforms are likely to preemptively block conversations that discuss these topics, as well as discussions about substance use, suicide, and eating disorders. As we’ve written in our previous commentary on the bill, KOSA could result in loss of access to information that a majority of people would agree is not dangerous. Again, issues like substance abuse, eating disorders, and depression are complex societal issues, and there is not clear agreement on their causes or their solutions. To pick just one example: in some communities, safe injection sites are seen as part of a solution to substance abuse; in others, they are seen as part of the problem. Under KOSA, could a platform be sued for displaying content about them—or about needle exchanges, naloxone, or other harm reduction techniques? 

The latest version of KOSA tries, but ultimately fails, to address this problem in two ways: first, by clarifying that the bill shouldn’t stop a platform or its users from “providing resources for the prevention or mitigation” of its listed harms; and second, by adding that claims under the law should be consistent with evidence-informed medical information. 

Unfortunately, were an Attorney General to claim that content about trans healthcare (for example) poses risks to minors’ health, they would have no shortage of “evidence-informed” medical information on which to base their assertion. Numerous states have laws on the books claiming that gender-affirming care for trans youth is child abuse. In an article for the American Conservative titled “How Big Tech Turns Kids Trans,” the authors point to numerous studies that indicate gender-affirming care is dangerous, despite leading medical groups recognizing the medical necessity of treatments for gender dysphoria. In the same article, the authors laud KOSA, which would prohibit “content that poses risks to minors’ physical and mental health.” 

The same issue exists on both sides of the political spectrum. KOSA is ambiguous enough that an Attorney General who wanted to censor content regarding gun ownership, or Christianity, could argue that it has harmful effects on young people. 



KOSA Would Still Lead to Age Verification On Platforms

Another change to KOSA comes in response to concerns that the law would lead to age verification requirements for platforms. For a platform to know whether or not it is liable for its impact on minors, it must, of course, know whether or not minors use its platform, and who they are. Age verification mandates create many issues — in particular, they undermine anonymity by requiring all users to upload identity verification documentation and share private data, no matter their age. Other types of “age assurance” tools, such as age estimation, also require users to upload biometric information such as their photos, and have accuracy issues. Ultimately, no method is sufficiently reliable, offers complete coverage of the population, and respects individuals’ data, privacy, and security. France’s National Commission on Informatics and Liberty (CNIL) reached this conclusion in a recent analysis of current age verification methods. 

In response to these concerns, KOSA’s authors have made two small changes, but they’re unlikely to stop platforms from implementing age verification. Earlier versions would have held platforms liable if they “knew or should have known” that an impacted user was sixteen years of age or younger. The latest version of KOSA adds “reasonableness” to this requirement, holding platforms liable if they “know or reasonably should know” a user is a minor. But legally speaking, this gives platforms no better guidance. 

The second change is to add explicit language that age verification is not required under the “Privacy Protections” section of the bill. The bill now states that a covered platform is not required to implement an age gating or age verification functionality. But there is essentially no outcome where sites don’t implement age verification. There’s no way for platforms to block nebulous categories of content for minors without explicitly requiring age verification. If a 16-year-old user truthfully identifies herself, the law will hold platforms liable, unless they filter and block content. If a 16-year-old user identifies herself as an adult, and the platform does not use age verification, then it will still be held liable, because it should have “reasonably known” the user’s age. 

A platform could, alternatively, skip age verification and simply institute blocking and filtering of certain types of content for all users regardless of age—which would be a terrible blow for speech online for everyone. So despite these band-aids on the bill, it still leaves platforms with no choice but to institute heavy-handed censorship and age verification requirements. These impacts would affect not just young people, but every user of the platform. 

There Are Better Ways to Fix The Internet

While we appreciate that lawmakers have responded to concerns raised about the bill, its main requirements—that platforms must “prevent and mitigate” complex issues that researchers don’t even agree the platforms are responsible for in the first place—will lead to a more siloed, and more censored, internet. We also stand by our previous criticisms of KOSA—that it unreasonably buckets all young people into a single category, and that it requires surveillance of minors by parents. These remain troubling aspects of the law. 

There is no question that some elements of social media today are toxic to users. Companies want users to spend as much time on their platforms as possible, because they make money from targeted ad sales, and these ad sales are fueled by invasive data collection. EFF has long supported stronger competition laws and comprehensive data privacy legislation in part because they can open the field to competitors to today’s social media options, and force platforms to innovate, offering more user choice. If users are unhappy with the content or design of current platforms, they should be able to move to other options that offer different forms of content moderation, better privacy protections, and other features that improve the experience for everyone, including young people. 

KOSA would not enhance the ability of users to choose where they spend their time. Instead, it would shrink the number of options, by making strict requirements that only today’s largest, most profitable platforms could follow. It would solidify today’s Big Tech giants, while forcing them to collect more private data on all users. It would force them to spy on young people, and it would hand government the power to limit what topics they can see and discuss online. 

It is not a safety bill—it is a surveillance and censorship bill. Please tell your Senators and representatives not to pass it. 



Jason Kelley

Why Is the U.S. Solicitor General Trying To Change The Law To Benefit Patent Trolls?

1 month ago

For more than two decades now, developers and users of software have been plagued by a flood of bad patents. Software patents that describe everyday practices like watching an ad online, publishing nutrition information, meeting people nearby, or teaching a language class continue to be issued, and low-quality patents get used in hundreds of lawsuits every year. 

Government officials should be working to reduce, not increase, the burden that low-quality patent lawsuits impose on innovators. So we’re concerned and dismayed by recent briefs filed by the U.S. Solicitor General, asking the Supreme Court to reexamine and throw out the best legal defenses regular people have against “patent trolls”—companies that don’t make products or provide services, but simply use patents to sue and threaten others.

A Sensible Framework Worth Keeping 

To truly stop patent trolls, we’ll need wholesale reform, including legislative change. But the current framework of rules governing Section 101 of the U.S. patent laws, including the Supreme Court’s 2014 Alice v. CLS Bank decision, was an important victory for common-sense patent reform.

The Alice decision made clear that you can’t simply add generic computer language to basic ideas and get a patent. The ruling has been consistently applied to get the worst-of-the-worst software patents kicked out of the system. For the most part, it allows courts to state, clearly and correctly, that these patents are a form of abstract idea, and should be thrown out at an early stage of litigation. A win under the Alice rules spares the targets of patent trolls not just from an unjust trial, but from an invasive and expensive discovery process, fueled by a patent that never should have been issued in the first place. 

The Alice ruling, combined with another Supreme Court decision called Mayo Collaborative Services v. Prometheus Labs, has been a big step forward. EFF’s “Saved by Alice” project highlights how small businesses have protected themselves from trolls, when patent law was on their side. 

“Not A New Idea”

The U.S. Solicitor General represents the views of the federal government to the Supreme Court. The office typically argues dozens of cases each year before the Supreme Court, making it one of the most influential offices in American law. The current Solicitor General, Elizabeth Prelogar, was nominated by President Joseph Biden, and confirmed by the Senate in 2021. 

The Supreme Court sometimes asks the Solicitor General to weigh in on which cases it should and should not take up. Since the Supreme Court takes fewer than 100 cases per year, out of the several thousand petitions it receives, those opinions are important. 

Last month, we were dismayed to see the Solicitor General’s office take a position so clearly contrary to the public interest in a patent case. 

Interactive Wearables LLC v. Polar Electro involves a ridiculous patent that was properly thrown out by a district court, a decision that was upheld on appeal. U.S. Patent No. 9,668,016 claims an “apparatus and method for providing information in conjunction with media content.” Its named inventor is Alexander Poltorak, the longtime CEO of General Patent Corporation, “an intellectual property firm focusing on intellectual property strategy and valuation, IP licensing and enforcement.”

Interactive Wearables sued Polar, a maker of GPS-enabled smartwatches, in 2019. Polar filed a motion to dismiss, explaining how the patent troll’s incredibly broad claims should clearly be thrown out under Alice: 

The idea is simple. The patents explain someone might be watching a TV show, find the show enjoyable, and desire to know more about the show – e.g., the name of the show. The patents are directed to the idea of providing that information while the person is watching the show. This is not a new idea; it is something TV Guide has done for decades. The only advancement the patents teach is using generic computer components to implement the abstract idea.

Interactive Wearables tried to save their patent on several grounds, including the idea that it’s “wearable.” The patent troll’s complaint argued that “the first watch that wirelessly paired with a cellphone… was not released until 2006.” 

The judge overseeing this case did not allow the patent to be saved simply because its owner pointed out, correctly, that it’s essentially a piece of science fiction. He found that the patent’s descriptions “are not specific enough to address any specific improvement or solution,” that there was simply no “inventive concept,” and that the patent was invalid.

This open-and-shut dismissal was upheld by an appeals court in a one-sentence order. 

The Solicitor General Sides With A Patent Troll 

Every judge who reviewed this case saw clearly that there was no invention here. The named inventor on this patent, Alexander Poltorak, didn’t invent anything at all. That isn’t surprising, since there’s scant evidence Poltorak actually made smartwatches or any other technology. Rather, he has built a decades-long career as a patent licensing professional who specializes in accusing others of patent infringement. He has owned General Patent Corporation and other patent troll entities as well. 

So it’s disappointing to see the Solicitor General file a brief suggesting that the district court judge and three appeals court judges who saw through this “invention” should be overturned. The Solicitor General accepts the view that Poltorak’s contributions are meaningful, that the case should be reviewed by the Supreme Court, and that whoever owns Interactive Wearables LLC should be given another chance to sue companies that make real technology. 

This view stands starkly against the public interest. To be clear, the Solicitor General is attempting to rescue this troll’s patent. The Solicitor General echoes Interactive Wearables’ outsized views of its own importance, stating that: 

earlier content players did not have a way for users to view information about the content, like the title of a song or the name of a show, while content was playing… The remote control for Interactive’s player purports to address that shortcoming by incorporating a screen that can display information about the content being played. 

The SG argues that the Supreme Court’s Alice and Mayo decisions merely limit patents to “the scientific, technological, and industrial arts.” The brief argues that “patent-ineligible abstract ideas” do not include “quintessentially technological inventions, like the improved content player that the patentee claimed in Interactive.”

The judge who threw out the Interactive Wearables patents “placed undue emphasis on considerations of novelty, obviousness, and enablement,” issues that should be considered in other areas of patent law, not Section 101, the brief continues.

This argument mirrors legislation we’ve seen proposed recently by extremists who want everything to be patentable. The Solicitor General is arguing that simply because the Interactive Wearables patent talks about technology, it should pass through Section 101. 

Alice And Section 101 Protect The Public Interest

There’s a simple reason that patent owners, including patent trolls, want Section 101 to be useless: they’ll make a lot more money. When judges effectively use Section 101, it leads to early dismissals of the worst patents—that’s why patent trolls, and other owners of massive patent hoards, absolutely want to avoid it. 

Getting the decision postponed so it is resolved under another section of patent law—exactly what the Solicitor General suggests here—is a big win for trolls. Their targets will then have to go through more expensive litigation, including discovery and claim construction. A patent troll’s settlement demand, whether it’s $50,000 or $500,000, will begin to look like the “cheap” way out of a case, compared to the cost of a jury trial that can run into the millions of dollars. 

Moreover, there’s simply no lack of clarity to be resolved here. As EFF testified directly to Congress in 2019, the law of Section 101 is clearer than it’s ever been under the Alice-Mayo framework. The rules on abstract patents benefit the public, not those who abuse the patent system. That’s why so many are trying to overturn them. 

The Supreme Court should not cooperate. It should reject this petition. 

Furthermore, the Solicitor General should file briefs in patent cases that actually side with the people making and using technology—not with patent trolls. We hope the Interactive Wearables LLC brief, in which top lawyers of the Biden Administration worked overtime to give new life to an abusive patent lawsuit, is a one-time footnote in the history of innovation. 

Update May 15, 2023: The Supreme Court declined to hear this case.  


Joe Mullin

EFF to Congress: Oppose the EARN IT Act and the STOP CSAM Act

1 month ago

The Senate Judiciary Committee is about to debate multiple bills that would lead to people’s private messages being scanned and reported to the government. We oppose these bills, and we have sent a letter urging the Committee to vote no. 

Take Action

Protect Our Privacy—Stop "EARN IT"

On Thursday, May 4, 2023, the committee will consider S. 1207, the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2023 (EARN IT Act), and S. 1199, the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment Act of 2023 (STOP CSAM Act). 

EFF strongly opposed both the original and amended versions of EARN IT from the two previous Congresses, and we are concerned to see some of the same problems in the text of the current bills. 

We’re far from alone. EFF and the Center for Democracy and Technology are part of a coalition of 132 LGBTQ+ and human rights organizations that have signed a group letter opposing EARN IT. 

As in the previous Congresses, the sponsors of these bills say their legislation is intended to protect children from online sexual exploitation—an important and laudable goal. Unfortunately, these bills threaten the privacy, security, and free expression of digital communications for all users, including children. Giving states and private litigants the power to threaten private companies with criminal prosecution and costly civil litigation unless they scan all of their users’ private messages shows blatant disregard for the millions of law-abiding people who depend on secure messaging to safely communicate. Military families, survivors of domestic violence, victims of identity theft, and many others: there are many people for whom true end-to-end encryption is vital for personal safety and peace of mind.

At EFF, we’ve steadfastly opposed public officials who have called to undermine encryption. Strong encryption isn’t in tension with protecting vulnerable people and children—it’s vital for real public safety. Join us by telling your Senator to oppose both of these flawed bills.

Take Action

Protect Our Privacy—Stop "EARN IT"

India McKinney

Podcast Episode: Dr. Seuss Warned Us

1 month ago

Dr. Seuss wrote a story about a Hawtch-Hawtcher Bee-Watcher whose job it is to watch his town’s one lazy bee, because “a bee that is watched will work harder, you see.” But that doesn’t seem to work, so another Hawtch-Hawtcher is assigned to watch the first, and then another to watch the second... until the whole town is watching each other watch a bee.


You can also find this episode on the Internet Archive.

To Federal Trade Commissioner Alvaro Bedoya, the story—which long predates the internet—is a great metaphor for why we must be wary of workplace surveillance, and why we need to strengthen our privacy laws. Bedoya has made a career of studying privacy, trust, and competition, and wishes for a world in which we can do, see, and read what we want, living our lives without being held back by our identity, income, faith, or any other attribute. In that world, all our interactions with technology —from social media to job or mortgage applications—are on a level playing field. 

Bedoya speaks with EFF’s Cindy Cohn and Jason Kelley about how fixing the internet should allow all people to live their lives with dignity, pride, and purpose.

In this episode, you’ll learn about: 

  • The nuances of work that “bossware,” employee surveillance technology, can’t catch. 
  • Why the Health Insurance Portability and Accountability Act (HIPAA) isn’t the privacy panacea you might think it is. 
  • Making sure that one-size-fits-all privacy rules don’t backfire against new entrants and small competitors. 
  • How antitrust fundamentally is about small competitors and working people, like laborers and farmers, deserving fairness in our economy. 

Alvaro Bedoya was nominated by President Joe Biden, confirmed by the U.S. Senate, and sworn in May 16, 2022 as a Commissioner of the Federal Trade Commission; his term expires in September 2026. Bedoya was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. He has been influential in research and policy at the intersection of privacy and civil rights, and co-authored a 2016 report on the use of facial recognition by law enforcement and the risks that it poses. He previously served as the first Chief Counsel to the Senate Judiciary Subcommittee on Privacy, Technology and the Law after its founding in 2011, and as Chief Counsel to former U.S. Sen. Al Franken (D-MN); earlier, he was an associate at the law firm WilmerHale. A naturalized immigrant born in Peru and raised in upstate New York, Bedoya previously co-founded the Esperanza Education Fund, a college scholarship for immigrant students in the District of Columbia, Maryland, and Virginia. He also served on the Board of Directors of the Hispanic Bar Association of the District of Columbia. He graduated summa cum laude from Harvard College and holds a J.D. from Yale Law School, where he served on the Yale Law Journal and received the Paul & Daisy Soros Fellowship for New Americans.


One of my favorite Dr. Seuss stories is about this town called Hawtch Hawtch. So, in the town of Hawtch Hawtch, there’s a town bee, and, you know, they presumably make honey, but the Hawtch-Hawtchers one day realize that the bee that is watched will work harder, you see? And so they hire a Hawtch-Hawtcher to be on bee-watching watch, but then, you know, the bee isn’t really doing much more than it normally does. And so they think, oh, well, the Hawtch-Hawtcher is not watching hard enough. And so they hire another Hawtch-Hawtcher to be on bee-watcher-watcher watch, I think is what Dr. Seuss calls it. And so there’s this wonderful drawing of twelve Hawtch-Hawtchers, you know, each one watching a watcher, or actually, you know, the first one’s watching the bee, and, and the whole thing is just completely absurd.

That’s FTC Commissioner Alvaro Bedoya describing his favorite Dr. Seuss story – which he says works perfectly as a metaphor for why we need to be wary of workplace surveillance, and strengthen our privacy laws.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

And I’m Jason Kelley. EFF’s Associate Director of Digital Strategy. This is our podcast, How to Fix the Internet.

Our guest today is Alvaro Bedoya. He’s served as a commissioner for the Federal Trade Commission since May of 2022, and before that he was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. So he thinks a lot about many of the issues we’re also passionate about at EFF – trust, privacy, competition, for example – and about how these issues are all deeply intertwined

We decided to start with our favorite question: What does the world look like if we get this stuff right?

For me, I think it is a world where you wake up in the morning, live your life, and your ability to do what you want to do, see what you wanna see, read what you wanna read, and live the life that you want to live is unconnected to who you are, in a good way.

In other words, what you look like, what side of the tracks you're from, how much money you have. Your gender, your gender identity, your sexuality, your religious beliefs, that those things don't hold you down in any way, and that you can love those things and have those things be a part of your life. But that they only empower you and help you. I think it's also a world… we see the great parts of technology. You know, one of the annoying things of having worked in privacy for so long is that you're often in this position where you have to talk about how technology hurts people. Technology can be amazing, right?

Mysterious, wonderful, uh, empowering. And so I think this is a world where those interactions are defined by those positive aspects of technology. And so for me, when I think about where those things go wrong, sorry, falling into old tropes here, but thinking about it positively, increasingly, people are applying for jobs online. They're applying for mortgages online. They are doing all these capital letter decisions that are now mediated by technology.

And so this world is also a world where, again, you are treated fairly in those decisions and you don't have to think twice about, hold on a second, I just applied for a loan. I just applied for a job, you know, I just applied for a mortgage. Is my zip code going to be used against me? Is my social media profile, you know, that reveals my interests gonna be used against me. Is my race gonna be used against me? In this world, none of that happens, and you can focus on preparing for that job interview and finding the right house for you and your family, finding the right rental for you and your family.

Now, I think it's also a world where you can start a small business without fear that the simple fact that you're not connected to a bigger platform or a bigger brand will be used against you, where you have a level playing field to win people over.

I think that's great. You know, leveling the playing field is one of the original things that we were hoping, you know, that digital technologies could do. It also makes me think of that old New Yorker thing, you know, on the internet, no one knows you're a dog.

(Laughs) Right.

In some ways I think that is the vision of the internet. You know, again, I don't think that people should leave the other parts of their lives behind when they go on the internet. Your identity matters, but the fact that you're a dog doesn't mean you can't play. I'm probably butchering that poor cartoon too much.

No, I don't. I don't think you are. I don't know why, but it reminded me of one other thing, which is, in this world, you, you go to work, whether it's at home in your basement like I am now, you know, or in your car, or at an office, at a business, and you have a shot at working with pride and dignity, where every minute of your work isn't measured and quantified, where you have the ability to focus on the work rather than the surveillance of that work, and the judgments that other people might make around that minute surveillance, and, and you can focus on the work itself. I think too often people don't recognize the strangeness of the fact that when you watch TV, when you watch a streaming site, when you watch cable, when you go shopping, all of that stuff is protected by privacy law. And yet most of us spend a good part of our waking hours working, and there are really no federal, uh, worker privacy protections. That, for me, is one of the biggest gaps in our sectoral privacy system that we've yet to confront.

But the world that you wanted me to talk about definitely is a world where you can go to work and do that work with dignity and pride, uh, without minute surveillance of everything you do.

Yeah. And I think inherent in that is this, you know, this, this observation that, you know, being watched all the time doesn't work as a matter of humanity, right? It's a human rights issue to be watched all the time. I mean, that's why when they build prisons, right, it's the panopticon, right? That's where that idea comes from, is this idea that people who have lost their liberty get watched all the time.

So that has to be a part of building this better future, a space where, you know, we’re not being watched all the time. And I think you're exactly right that we kind of have this gigantic hole in people's lives, which is their work lives where it's not only that people don't have enough freedom right now, it's actually headed in the other direction. I know this is something that we think about a lot, especially Jason does at EFF.

Yeah, I mean we, we write quite a bit about bossware. We've done a variety of research into bossware technology. I wonder if you could talk a little bit about maybe, like, some concrete examples that you've seen where that technology is sort of coming to fruition, if you will. Like, it's being used more and more, and, and why we need to tackle it, because I think a lot of people listening to this aren't, aren't as familiar with it as they could be.

And at the top of this episode we heard you describe your favorite Dr. Seuss tale – about the bees and the watchers, and the watchers watching the watchers, and so on to absurdity. Now can you tell us why you think that’s such an important image?

I think it's a valuable metaphor for the fact that a lot of this surveillance software may not offer as complete a picture as employers might think it does. It may not have the effect that employers think it does, and it may not ultimately do what people want it to do. And so I think that anyone who is thinking about using the software should ask hard questions about ‘is this actually gonna capture what I'm being told it will capture? Does it account for the other 20% of my workers' jobs?’ So, you know, there's always an 80/20 rule, and so, you know, as with work, most of what you do is one thing, but there's usually 20% that's another thing. And I think there's a lot of examples where that 20%, like, you know, occasionally using the bathroom, right, isn't accounted for by the software. And so it looks like the employee’s slacking, but actually they're just being a human being. And so I would encourage people to ask hard questions about the sophistication of the software and how it maps onto the realities of work.

Yeah. That's a really accurate way for people to start to think about it, because I think a lot of people really feel that, um, if they can measure it, then it must be useful.


In my own experience, before I worked at EFF, I worked somewhere where, eventually, a sort of bossware-type tool was installed, and it had no connection to the job I was doing.

That’s interesting.

It was literally disconnected.

Can you share the general industry?

It was software. I worked as a, I was in marketing for a software company, and, um, I was remote, and it was remote way before the pandemic. So, you know, there's sort of, I think bossware has increased probably during the pandemic. I think we've seen that, because people are worried that if you're not in the office, you're not working.


There's no evidence. Bossware can't give evidence that that's true. It can just give evidence of, you know, whether you're at your computer –

Right. Whether you're typing.

Whether you're typing. Yeah. And what happened in my scenario without going into too much detail was that it mattered what window I was in. and it didn't always, at first it was just like, are you at your computer for eight hours? And then it was, are you at your computer in these specific windows for eight hours? And then it was, are you typing in those specific windows for eight hours? The screws kept getting twisted, right, until I was actually at my computer for 12 hours to get eight hours of ‘productive’ work in, as it was called.

And so, yeah, I left that job. Obviously, I work at EFF now for a reason. And it was one of the things that I remembered when I started at EFF: part of what I like about what we do is that we think about people's humanity in what they're doing and how that interacts with technology.

And I think bossware is one of those areas where it doesn't, um, because it, it is so common for an employer to sort of disengage from the employee and sort of think of them as, like, a tool. It's, it's an area where it's easy to install something, or try to install something, where that happens. So I'm glad you're working on it. It's definitely an issue.

Well, I'm thinking about it, you know, and it's certainly something I, I care about. And my hope is, my hope is that, um, you know, the pandemic was horrific. Is horrific. My hope is that one of the realizations coming out of it, from so many people going remote, is the realization that, particularly for some jobs, you know, uh, um, a lot of us are lucky to have these jobs where a lot of our time turns on being able to think clearly and carefully about, about something, and that's a luxury. Um, but particularly for those jobs, and my suspicion is for an even broader range of jobs, this idea of a workday where you sit down, work eight hours, and get up, you know, and, and that that is the ideal workday, I don't think that's a maximally productive day. And I think there are some really interesting trials around the four-day work week, and my hope is that, you know, when my kids are older, there will be a recognition that working harder, staying up later, getting up earlier, is not the best way to get the best work from people. People need time to think. They need time to relax. They need time to process things. And so that is my hope, that that is one of the realizations around it. But you're exactly right, Jason: one of my concerns around this software is this idea that if it can be measured, it must be important. And I think you used a great example, speaking in general here, of software that may presume that if you aren't typing, you're not working, or if you're not in a window, you're not working, when actually you might be doing the most important work, you know, jotting down notes, organizing your thoughts, that lets you do the best stuff, as it were.

Music transition

I want to jump in for a little mid-show break to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. So a tip of the hat to them for their assistance.

Now back to our conversation with Alvaro Bedoya.

Privacy issues are of course near and dear to our hearts at EFF and I know that's really the world you come out of as well. Although your perch is a little, a little different right now. We came to the conclusion that we can't address privacy if we don't address competition and antitrust issues. And I think you've come someplace similar perhaps, and I'd love for you to talk about how you think privacy and questions around competition and antitrust intertwine.

So I will confess, I don't know if I have figured it out, but I can offer a few thoughts. First of all, I think that a lot of the antitrust claims are not what they seem to be. When companies talk about how important it is to have gatekeeping around app stores because of privacy (and this is one of the reasons I support the bills, I think it's the Blumenthal-Blackburn bill, to, um, change the way app stores are run and kick the tires on that gatekeeping model), I am skeptical about a lot of those pro-privacy, anti-antitrust claims. That is one thing. On the other hand, I do think we need to think carefully about the rules that are put in place backfiring against new entrants and small competitors. I think a lot of legislators and policy makers in the US and Europe appreciate this and are getting this right when they institute a certain set of rules for bigger companies and different ones for smaller ones, but I think one of the ways this can go wrong is when it's just about the size of the company rather than the size of the user base.

I think that if you are, you know, suddenly at a hundred million users, you're not a small company, even if you have, you know, a small number of employees. But I do think that those concerns are real, and that policy makers and people in my role need to think about the costs of privacy compliance in a way that does not inadvertently create an unlevel playing field for small competitors.

I will confess that sometimes things that appear to be, uh, um, antitrust problems are privacy problems, in that they reflect legal gaps around the sectoral privacy framework that unfortunately has yet to be updated. I think I can give one example. There was the recent merger of, uh, Amazon and One Medical, and, well, I can't go into the antitrust analysis that may or may not have occurred at the commission, but I wrote a statement on the completion of the merger, which highlighted a gap that we have around the anonymization rule in our health privacy law. For example, people think that HIPAA is the Health Information Privacy Act. It's not, it's actually the Health Insurance Portability and Accountability Act. And I think that little piece of common wisdom speaks to a broader gap in our understanding of health privacy. So a lot of people think HIPAA will protect their data, and that it won't be used in other ways by their doctor, by whoever it is that has their HIPAA-protected data. Well, it turns out that in 2000, when HHS promulgated the Privacy Rule in good faith, it had a provision that said, hey, look, we want to encourage the improvement of health services, we want to encourage health research, and we want to encourage public health. And so we're going to say that if you remove these, you know, 18 identifiers from health data, it can be used for other purposes. And if you look at the rule that was issued, the justification for it is that they want to promote public health.

Unfortunately, they did not put a use restriction on that. And so now, any doctor's practice, anyone covered by HIPAA, and I'm not going to go into the rabbit hole of who is and who isn't, but if you're covered by HIPAA, all they need to do is remove those identifiers from the data.

And HHS is unfortunately very clear that you can essentially do a whole lot of things that have nothing to do with healthcare as long as you do that. And what I wrote in my statement is that would surprise most consumers. Frankly, it surprised me when I connected the dots.

What I'm hearing here, which I think is really important, is, first of all, we start off by thinking that some of our privacy problems are really due to antitrust concerns, but what we learn pretty quickly when we're looking at this is that privacy is used, frankly, as a blocker for common sense reforms that we might need. These giants come in and they say, well, we're going to protect people's privacy by limiting what apps are in the app store. And we need to look closely at that, because it doesn't seem to be necessarily true.

So first of all, you have to watch out for the kind of fake privacy argument, or the argument that the tech giants need to be protected because they're protecting our privacy, and we need to really interrogate that. And at the bottom of it, it often comes down to the fact that we haven't really protected people's privacy as a legal matter, right? We ground ourselves in Larry Lessig's, uh, four pillars of change, right? Code, norms, laws, and markets. And you know, what they're saying is, well, we have to protect, you know, essentially what is a non-market, the tech giants; that markets will protect privacy, and so therefore we can't introduce more competition. And I think at the bottom of this, what we find a lot is that, you know, the law should be setting the baseline, and then markets can build on top of that, but we've got things a little backwards. And I think that's especially true in health. It's very front and center for those of us who care about reproductive justice, who are looking at the way health insurance companies are now part and parcel of other data analysis companies. And the Amazon/One Medical merger is another one of those where, unless we get the privacy law right, it's going to be hard to get at some of these other problems.

Yeah. And those are the three things that I think a lot about. First, that those pro-privacy arguments that seem to cut against, uh, competition concerns are often not what they seem.

Second, that we do need to take into account how one-size-fits-all privacy rules could backfire in a way that hurts, uh, small companies, small competitors, uh, who are the lifeblood of, uh, innovation and employment, frankly. And lastly, sometimes what we're actually seeing are gaps in our sectoral privacy system.

One of the things that I know you've, you've talked about a little bit is, um, you're calling it a return to fairness, and that's specifically talking about a piece of the FTC’s authority. And I wonder if you could talk about that a little more and how you see that fitting into a, a better world.

Sure. One of the best parts of this job, um, was having this need and opportunity to immerse myself in antitrust. As a Senate staffer, I did a little bit of work on the Comcast-NBC merger, against that merger, uh, for my old boss, Senator Franken. But I didn't spend a whole lot of time on competition concerns. And so when I was nominated, I, you know, quite literally ordered antitrust treatises and read them cover to cover.


Well, sometimes it's wonderful and sometimes it's not, but in this case it was. And what you see is this complete two-sided story, where on the one hand you have this really anodyne, efficiency-based description of antitrust, where it is about enforcing abstract laws and maximizing efficiency, and the saying, you know, antitrust protects competition, not competitors. And you so quickly lose sight of why we have antitrust laws and how we got them.

And so I didn't just read treatises on the law, I also read histories. And one of the things that you realize when you read those histories is that antitrust isn't about efficiency, antitrust is about people. And yes, it's about protecting competition, but the reason we have it is because of what happened to certain people. And so, you know, the Sherman Act, you listen to those floor debates, it is fascinating, because first of all, everyone agrees as to what Congress wanted to do. Congress wanted to rein in the trusts. They wanted to rein in John Rockefeller, J.P. Morgan, the beef trust, the sugar trust, the steel trust, not to mention, you know, Rockefeller's oil trust. The most common concern on the floor of the Senate was what was happening to cattlemen because of concentration in meat packing plants, and the prices they were getting when they brought their cattle to processors and to market. And then you look at, uh, 1914, the Clayton Act. Again, there was outrage, true outrage, about how those antitrust laws were used: 10 out of the first 12 antitrust injunctions in our country post-Sherman were targeted at workers, and not just any workers. They were targeted at rail car manufacturers in Pullman, where it was an integrated workforce and they were working extremely long hours for a pittance in wages, and they decided to strike.

And some of the first injunctions we saw in this country were used to break their strike. Or how it was used against, uh, I think they're called drayage men, or draymen, in New Orleans, port workers and dock workers in New Orleans, who again were working these 12-hour days for next to nothing in wages. And this beautiful thing happened in New Orleans, where the entire city went on strike.

It was, I think, 30 unions. It was like the typographical workers' unions, and if you think that refers to people typing on keyboards, it does. From the people typing on mechanical typewriters to the people, you know, unloading ships in the port of New Orleans, everyone went on strike, and they had this organization called the Amalgamated Working Men's Council. And, um, they wanted a 10-hour workday, they wanted overtime pay, and they wanted, uh, union shops. They got two out of those three things. But, um, I think it was the board of trade that was so unhappy with it that they, uh, persuaded federal prosecutors to sue under Sherman.

And it went before Judge Billings. And Judge Billings said, absolutely, this is a violation of the antitrust laws. And the curious thing about Judge Billings' decision, one of the first Sherman decisions in a federal court, is that he didn't cite to restraint of trade law for the proposition that the strike was a restraint on trade. He cited to much older decisions about criminal conspiracies and unions to justify his decision.

And so what I'm trying to say is over and over and over again, whenever, you know, you look at the actual history of antitrust laws, you know, it isn't about efficiency, it's about fairness. It is about how small competitors and working people, farmers, laborers, deserve a level playing field. And in 1890, 1914, 1936, 1950, this was what was front and center for Congress.

It's great to end with a deep dive into the original intent of Congress to protect ordinary people and fairness with antitrust laws, especially in this time when history and original intent are so powerful for so many judges. You know, it’s solid grounding for going forward. But I also appreciate how you mapped the history to see how that Congressional intent was perverted by the judicial branch almost from the very start.

This shows us where we need to go to set things right but also that it’s a difficult road. Thanks so much Alvaro.

Well, it's a rare privilege to get to complain about a former employer directly to a sitting FTC commissioner. So that was a very enjoyable conversation for me. It's also rare to learn something new about Dr. Seuss and a Dr. Seuss story, which we got to do. But as far as actual concrete takeaways go from that conversation, Cindy, what did you pull away from that really wide ranging discussion?

It’s always fun to talk to Alvaro. I loved his vision of a life lived with dignity and pride as the goal of our fixed internet. I mean, those are good solid north stars, and from them we can begin to see how that means we use technology in a way that, for example, allows workers to just focus on their work. And honestly, while that gives us dignity, it also stops the kinds of mistakes we’re seeing, like tracking keystrokes or eye contact as secondary trackers, which are feeding all kinds of discrimination.

So I really appreciate him articulating, you know, what are the kinds of lives we want to have. I also appreciate his thinking about the privacy gaps that get revealed as technology changes, and the story of healthcare and how HIPAA doesn't protect us in the way that we'd hoped it would, in part because I think HIPAA didn't start off at a very good place. But as things have shifted, and, say, you know, One Medical is being bought by Amazon, suddenly we see that the presumption of who your insurance provider was, and what they might use that information for, has shifted a lot, and that the privacy law hasn't kept up.

So I appreciate thinking about it from, you know, both of those perspectives, both, you know, what the law gets wrong and how technology can reveal gaps in the law.

Yeah. That really stood out for me as well, especially the parts where Alvaro was talking about looking into the law in a way that he hadn't had to before. Like you say, because that is kind of what we do at EFF, at least part of what we do. And it's nice to hear that we are sort of on the same page, and that there are people in government doing that, there are people at EFF doing that, there are people all over, in different areas, doing that. And that's what we have to do, because technology does change so quickly and so much.

Yeah, and I really appreciate the deep dive he's done into antitrust law, revealing really that fairness is a deep, deep part of it, and that this idea that it's only about efficiency, and especially efficiency for consumers only, is ahistorical. And that's a good thing for us all to remember, since we especially these days have a Supreme Court that, you know, really likes history a lot, and grounds and limits what it does in history. The history's on our side in terms of, you know, bringing competition law, frankly, to the digital age.

Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at or check out the EFF website to become a member or donate, or look at hoodies, t-shirts, hats or other merch.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

And I’m Cindy Cohn.


This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:

Lost track by airtone
Common ground by airtone
Probably shouldn’t by J Lang

Josh Richman

Stupid Patent of the Month: Trying to Get U.S. Patents On An AI Program

1 month ago

Only people can get patents. There’s a good reason for that, which is that the patent grant—a temporary monopoly granted by the government—is supposed to be given out only to “promote the progress of science and useful arts.” Just like monkeys can’t get a copyright on a photo, because copyright doesn’t incentivize the monkey to take more photos, software can’t get patents, because software doesn’t respond to incentives.

Stephen Thaler hasn’t gotten this memo, because he’s spent years trying to get copyrights and patents for his AI programs. And people do seem intrigued by the idea of AI getting intellectual property rights. Thaler is able to get significant press attention by promoting his misguided legal battles to get patents, and he has plenty of lawyers around the world interested in helping him. 

Thaler created an AI program he calls DABUS, and filed two patent applications claiming DABUS was the sole inventor. These applications were appropriately rejected by the U.S. Patent Office, rejected again by a district court judge when Thaler sued to get the patents, and rejected yet again by a panel of appeals judges. Still not satisfied, in March, Thaler petitioned the U.S. Supreme Court to take his case. He got support from some surprising quarters, including Lawrence Lessig, as noted in a Techdirt post about the Thaler case. 

Fortunately, on April 24, 2023, the Supreme Court declined to take Thaler’s case. That should put an end to his arguments for his AI patent applications once and for all. 

Thaler filed U.S. Application Nos. 16/524,350 (describing a “Neural Flame”) and 16/524,532 (describing a “Fractal Container”) in 2019, and listed “DABUS” as the inventor on both applications. He submitted a sworn inventorship statement on DABUS’ behalf, as well as a document assigning himself all of DABUS’ invention rights. 

“Thaler maintains that he did not contribute to the conception of these inventions and that any person having skill in the art could have taken DABUS’ output and reduced the ideas in the applications to practice,” the Federal Circuit opinion explains. 

But the Patent Act requires inventors to be “individuals,” which means “a human being, a person” in Supreme Court precedent. 

The Idea Of AI Patents Keeps Coming Up

The issue of AI invention won’t go away, because there’s a dedicated lobby of enthusiasts—and patent lawyers who want to work for them—that wants to keep talking about it. The patent office is currently collecting public comments about the possibility of AI inventorship for the second time, having already done so in 2019.

Why would anyone want AI to have inventorship rights in the first place? The amicus brief from a Chicago patent lawyers’ group, which supported Thaler’s case to take DABUS to the Supreme Court, holds a clue. They imagine a future in which: 

ownership can be partitioned in various ways between entities that developed the AI, provided training data to the AI, trained the AI, and used the AI to invent, to the extent that these entities are different. In some cases, such agreements will result in one entity owning 100% of inventions produced by the AI, but other allocations of ownership are possible.

Endless negotiations over slices of idea-ownership might be a win for the lawyers involved in those negotiations, but it’s a loss for everyone else. 

We don’t need property rights systems to govern everything. In fact, the public loses out when we do that. The thousands of software patents created by humans are already a mess, causing real problems for developers and users of actual software. Applications seeking to grant monopoly rights to computer programs created by an AI are a bad idea, which is why we’re giving Thaler’s patent applications our Stupid Patent of the Month award. 

Joe Mullin

At Congressional Hearing, PCLOB Members Suggest Bare Minimum of 702 Reforms

1 month ago

Last week, the House Judiciary Subcommittee on Crime and Federal Government Surveillance held a hearing on “Fixing FISA: How a Law Designed to Protect Americans Has Been Weaponized Against Them,” ahead of the December 2023 expiration of the Section 702 surveillance authority. The three witnesses, Michael E. Horowitz (Inspector General, U.S. Department of Justice), Sharon Bradford Franklin (Chair, U.S. Privacy and Civil Liberties Oversight Board), and Beth A. Williams (Board Member, U.S. Privacy and Civil Liberties Oversight Board) all sketched out their visions for the good, the bad, and the ugly about the invasive surveillance power.

The witnesses managed to use the hearing to sketch out a vision for what a minimally sufficient bill to reform Section 702 would look like. However, they were not nearly as skeptical as we are of the necessity of domestic law enforcement’s use of these powers–especially when the information collected under 702 could be obtained by law enforcement with a warrant through more traditional avenues. 

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications so long as the surveillance is directed at foreigners currently located outside the United States. It also prohibits intentionally targeting Americans. Nevertheless, the NSA routinely (“incidentally”) acquires innocent Americans' communications without a probable cause warrant. Once collected, the FBI can search through this massive database of information by “querying” the communications of specific individuals.

Previously the FBI alone reported conducting up to 3.4 million warrantless searches of Section 702 data in 2021 using Americans’ identifiers. Congress and the FISA Court have imposed modest limitations on these backdoor searches, but according to several recent FISA Court opinions, the FBI has engaged in “widespread violations” of even these minimal privacy protections.

A just-published transparency report from the Office of the Director of National Intelligence (ODNI) includes a “recalculation” of these statistics, reporting instead just under 3 million searches for 2021, and around 120,000 and 800,000 for 2022 and 2020 respectively. The report says that a single cybersecurity investigation in 2021 involving attempts to “compromise critical infrastructure” led to “approximately 1.9 million queries related to potential victims—including U.S. persons—[and] accounted for the vast majority of the increase in U.S. person queries conducted by FBI over the prior year.” 

But we should be far from reassured by these revised estimates of warrantless, backdoor searches of the 702 databases. First, even the lowest reported figure—nearly 120,000 searches in 2022—is still a whole lot of warrantless searches of Americans’ private communications. Second, the methodology used in this new report requires additional scrutiny. For example, it says that the FBI’s new counting method includes “deduplication,” where “instances in which the same query term was run multiple times, whether by the same user or by different users” are apparently treated as only one search. There’s no reason to consider that the right way to count, though. If police conducted separate warrantless searches of a person’s house on Monday, Wednesday, and Friday, a court would likely treat that as three separate violations of the person’s Fourth Amendment rights.
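To make the stakes of that methodological choice concrete, here is a minimal sketch (with entirely made-up data, not real query logs or the FBI's actual counting pipeline) of how "deduplication" shrinks a headline number: the same query term run three times, by two different users, collapses to a single countable search.

```python
# Hypothetical query log: (user, query_term) pairs.
# Illustrative only -- these names and terms are invented.
query_log = [
    ("agent_a", "person1@example.com"),
    ("agent_b", "person1@example.com"),  # same term, different user
    ("agent_a", "person1@example.com"),  # same term, same user, run again
    ("agent_a", "person2@example.com"),
]

# Raw counting: every query execution counts as one search.
raw_count = len(query_log)

# Deduplicated counting: each distinct query term counts once,
# no matter how many times or by how many users it was run.
dedup_count = len({term for _, term in query_log})

print(raw_count, dedup_count)  # 4 2
```

Under the house-search analogy in the paragraph above, the raw count is the one a court would likely care about: three searches of the same house are three intrusions, not one.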

Regardless of the exact numbers, the disturbing history of overreach is why it’s so urgent that civil society, concerned people, and lawmakers act to pass legislation that radically reforms Section 702 before we’re stuck with another 4 years of warrantless backdoor searches of U.S. data.

The Good Suggestions:

Chair of the PCLOB Sharon Bradford Franklin had three vital recommendations for the committee to consider before voting on legislation to renew Section 702.

  1. Reduce the volume of “incidental collection.” Reducing the volume of U.S. persons’ data being swept up by Section 702 would also involve getting an accurate estimate of just how wide-reaching the problem is, something we are currently unable to measure accurately. 
  2. End backdoor searches of data on U.S. persons by requiring judicial review before domestic law enforcement agencies like the FBI are able to query information about individual U.S. persons, regardless of whether the search is reasonably likely to return information on foreign intelligence or is being used to gather evidence of a crime committed on U.S. soil. 
  3. Permanently revoke the now defunct authorization to conduct “abouts” collection which was paused by the NSA in 2017 amid civil liberties concerns. These are collections of information not sent to or from a target but are communications “about” or which make reference to a surveillance target. Franklin believes we should not rest easy on the NSA’s pause of this procedure, but should ban it explicitly in any 702 renewal legislation. 

These three suggestions are a good starting point, but much more work needs to be done to address the over-classification and government secrecy that hinders accountability, enables abuse, and prevents people from suing to address harms done by government surveillance.

The Bad Suggestions:

Government representatives are always quick to testify to the legitimacy and utility of these programs by vaguely referencing classified events or attacks that intelligence agencies thwarted thanks to this program. Part of the problem with over-classification and extreme secrecy is that we’re expected to take their word for it, rather than be brought into the process of understanding whether and when these programs actually provide some utility. Too often they are–like Section 215 of the USA FREEDOM Act–touted as absolutely necessary right up until their authorities expire with little to no pushback from the national security apparatus.

PCLOB member Beth Williams also suggested that Section 702 was not a “bulk” collection program because it requires specific targeting of individuals for surveillance–a claim that EFF contests as an absolute myth.

Even worse, Williams suggested that Section 702 and its invasive surveillance capabilities–vacuuming up and reviewing communications, presumably with people overseas–should be used as a tool for vetting hopeful immigrants to the United States, as well as people being vetted for government jobs. This might give immigration services the ability to audit entire communication histories before deciding whether an immigrant can enter the country. This is a particularly problematic situation that could cost someone entrance to the United States based on, for instance, their own or a friend’s political opinions—as happened to a Palestinian Harvard student whose social media account was reviewed when he was coming to the U.S. to start his semester.

Our 702 Reform Wishlist:

In addition to ending warrantless backdoor searches, Section 702 also needs new measures of transparency to enable future audits and accountability of these secretive programs. FISA has long contained procedures for private parties to sue over surveillance that violates their rights, including a mechanism for considering classified evidence while preserving national security. But, in lawsuit after lawsuit, the executive branch has sought to avoid these procedures, and the judiciary, including the Supreme Court, has adopted cramped readings of the law that create a de facto national security exception to the Constitution. We need real accountability, and that includes the opportunity to contest surveillance in court.

Matthew Guariglia

Appeals Court Should Reconsider Letting The FBI Block Twitter’s Surveillance Transparency Report

1 month ago

Today, EFF and ACLU filed a brief in support of Twitter’s effort to get an appeals court to reconsider its dangerous opinion enforcing a government gag order on Twitter’s 2013 transparency report.

In this long-running and important case, Twitter tried to publish a report bringing much-needed transparency to the government’s use of FISA orders and national security letters, including specifying whether it had received any of these types of requests. However, without going to a court, the FBI told Twitter it could not publish the report as written. Twitter sued, and last month the federal Court of Appeals for the Ninth Circuit upheld the FBI’s gag order.

The court’s opinion undermined at least a hundred years of First Amendment case law on “prior restraints,” the term for when government officials forbid private speech in advance. It is a bedrock of constitutional history that prior restraints are subject to unique—and uniquely demanding—protections designed to ensure that the government cannot act as an unreviewable censor and stifle individuals’ right to free speech.

But as we write in the brief, the court’s opinion in this case “carves out, for the first time, a whole category of prior restraints that receive no more scrutiny than subsequent punishments for speech—expanding officials’ power to gag virtually anyone who interacts with a government agency and wishes to speak publicly about that interaction.” This exception supposedly applies to “government restrictions on the disclosure of information transmitted confidentially as part of a legitimate government process,” including nondisclosure rules regarding national security requests like the ones Twitter wanted to discuss. Needless to say, this carveout goes against mountains of precedent from the Supreme Court and the Ninth Circuit itself.

The court’s exception to prior restraint rules for information people learn through “legitimate, confidential government processes” is also quite obviously dangerous to democratic oversight of the government. Americans learn information from processes the government considers “legitimate” every day, and the risks of the government suppressing this information are many. Incarcerated persons receive information from the government agencies that control virtually every facet of their lives—from living conditions to medical care. Similarly, the exception would seemingly cover all interactions with law enforcement, border officials, the Internal Revenue Service, the U.S. Post Office, and the courts. It conceivably applies to state and local governmental processes as well. Law enforcement would be able to prevent a witness to a crime from telling their family that they were interviewed, and a criminal suspect who was beaten by police officers during an otherwise legitimate interrogation could be more readily gagged from disclosing that interaction.

We applaud Twitter’s efforts to push back on this drastic rewriting of the First Amendment, and we hope the full Ninth Circuit will decide to rehear the case.

Related Cases: Twitter v. Holder
Andrew Crocker

Maine Gets Another (Necessary) Opportunity to Defund Its Local Intelligence Fusion Center

1 month ago

Maine State Senator Pinny Beebe-Center has introduced LD 1290, An Act to End the Maine Information and Analysis Center Program, a bill that would defund the Maine Information and Analysis Center (MIAC), also known as Maine’s only fusion center. EFF is once again pleased to support this bill in hopes of defunding an unnecessary, intrusive, and often-harmful piece of the U.S. surveillance regime. You can read the full text of the bill here. A version of this bill passed the Maine House of Representatives 88-54 in June 2021 before being defeated in the state senate. 

Fusion centers are yet another unnecessary cog in the surveillance state—and one that serves the intrusive function of coordinating surveillance activities and sharing information between federal law enforcement, the national security surveillance apparatus, and local and state police. Across the United States, there are at least 78 fusion centers that were formed by the Department of Homeland Security in the wake of the war on terror and the rise of post-9/11 mass surveillance. Since their creation, fusion centers have been hammered by politicians, academics, and civil society groups for their ineffectiveness, dysfunction, mission creep, and unregulated tendency to veer into political policing. As scholar Brendan McQuade wrote in his book Pacifying the Homeland: Intelligence Fusion and Mass Supervision:

“On paper, fusion centers have the potential to organize dramatic surveillance powers. In practice however, what happens at fusion centers is circumscribed by the politics of law enforcement. The tremendous resources being invested in counterterrorism and the formation of interagency intelligence centers are complicated by organization complexity and jurisdictional rivalries. The result is not a revolutionary shift in policing but the creation of uneven, conflictive, and often dysfunctional intelligence-sharing systems.”

An explosive 2023 report from Rutgers University’s Center for Security, Race and Rights also gives us more evidence of why these centers are invasive, secretive, and dangerous. In the report, researchers documented how New Jersey’s fusion center leveraged national security powers to spy almost exclusively on Muslim, Arab, and Black communities and push an already racially biased criminal justice system into overdrive through aggressive enforcement of misdemeanor and quality of life offenses. 

Moreover, in recent years, the dysfunction of fusion centers and the ease with which they sink into policing First Amendment-protected activities have been on full display. After a series of leaks that revealed communications from inside police departments, fusion centers, and law enforcement agencies across the country, MIAC came under particular scrutiny for sharing dubious intelligence generated by far-right social media accounts with local law enforcement. Specifically, the Maine fusion center helped perpetuate disinformation that stacks of bricks and stones had been strategically placed throughout a Black Lives Matter protest as part of a larger plan for destruction, causing police to plan and act accordingly. This was, to put it plainly, a government intelligence agency spreading fake news that could have gotten people injured for exercising their First Amendment rights. This is in addition to a whistleblower lawsuit from a state trooper alleging that the fusion center routinely violated civil rights.

Last year, as MIAC issued a state-mandated annual report, local researchers and activists like McQuade found the official reporting inadequate and issued their own MIAC Shadow Report. In the shadow report, activists detailed the above stories, the lack of transparency of the fusion center and its state partners, deficiencies in the self-auditing protocols, and glaring data security vulnerabilities, like the planned archiving of data collected by or transiting through MIAC.

The first decade of the 21st century was characterized by a blank check to grow and expand the infrastructure that props up mass surveillance. Fusion centers are at the very heart of that excess. They have proven themselves to be unreliable and even harmful to the people the national security apparatus claims to want to protect. So why do states continue to fund intelligence fusion when, at best, it enacts political policing that poses an existential threat to communities, immigrants, and protestors—and at worst, it actively disseminates false information to police? 

We echo the sentiments of Senator Beebe-Center and other dedicated Maine residents who say it's time to shift MIAC's nearly million-dollar per year budget towards more useful programs. Maine, pass LD 1290 and defund the Maine Information and Analysis Center.

Matthew Guariglia

Greenpeace Stands Up Against SLAPPs And Wins 

1 month ago

The U.S. litigation system is meant to resolve serious disputes. Unfortunately, the high cost of litigation can be weaponized as a means of harassment and censorship. That’s become all too common, and the last few decades have seen the rise of what’s known as a Strategic Lawsuit Against Public Participation, or SLAPP. 

At EFF, as more speech of all types has moved online, we’ve seen SLAPPs proliferate over digital speech. SLAPPs get filed against protesters who oppose oil pipelines, and against regular people doing everyday things like sending emails to local officials or posting an online review.

Five years ago, together with Greenpeace and other environmental nonprofits, EFF helped create the Protect the Protest coalition, or PTP. It’s a group of nonprofits that supports its members and others in their fights against SLAPP lawsuits. 


One of the lawsuits that spurred the formation of PTP was Resolute Forest Products v. Greenpeace. In this case, a logging company claimed that Greenpeace’s advocacy for Canadian forests amounted to a “global fraud” that should be punished under civil RICO laws—U.S. federal laws originally intended to go after organized crime.

Following a summary judgment hearing last week, the Resolute v. Greenpeace case has finally been put to rest, with a complete victory for Greenpeace. This baseless lawsuit, which lasted seven years, should never have been brought in the first place. We hope Greenpeace’s victory against Resolute sends a strong message to corporate SLAPP plaintiffs—you won’t win, and your targets won’t stay silent. 

History of the Case

In 2010, Resolute and several nonprofits, including Greenpeace, struck a deal called the Canadian Boreal Forest Agreement, or CBFA. Under that agreement, Resolute promised to refrain from certain logging activity, and Greenpeace promised not to campaign against them. In 2012, Greenpeace ended its involvement in the agreement, believing that Resolute’s continued operations in the area posed a threat to the environment. 

Resolute reacted harshly to Greenpeace’s decision to advocate against their forestry practices. In 2016, the logging company sued, making a slew of claims against Greenpeace, including RICO, defamation, and unfair competition claims.

The company’s amended complaint acknowledged they cut down trees in exactly the type of landscape Greenpeace sought to protect—but just a little. “Resolute only harvests a fraction of the remaining intact forest landscape below the Northern Boundary in Quebec and Ontario,” the company objected (p. 57). They also objected to Greenpeace’s statement that Resolute was “destroying” critical caribou habitat. Resolute didn’t deny logging in caribou habitat. Rather, the company objected that Greenpeace didn’t go equally hard on “many other forest companies who are regularly harvesting in the same habitat.” It also asserted that “harvesting in these habitats is not destructive.”

Resolute’s amended complaint also goes on (p. 79) about the industry awards it has received for sustainability—mostly from other business groups—calling Greenpeace’s failure to include its corporate press releases in its “Clearcutting Free Speech” report a false claim. Resolute sued over hundreds of Greenpeace statements, arguing “each constitut[es] a separate mail or wire communication in furtherance of the fraudulent scheme” under RICO (p. 151).

These claims collapsed almost completely upon serious analysis. A federal judge threw out nearly all of the claims against Greenpeace, including all RICO claims, in 2019. The judge did allow Resolute to move forward on certain supposedly defamatory statements that Greenpeace made about the logging company’s activity around the Montagnes Blanches, or White Mountains, an area in northern Quebec. 

No “Actual Malice” In Montagnes Blanches Statements 

Resolute was able to drag out its remaining weak defamation and unfair competition claims into a four-year-long word game about exactly what and where the “Montagnes Blanches” are. In its motion for summary judgment, Greenpeace pointed out that the name does not have a fixed meaning. As in many scientific and geographic debates, the definition of Montagnes Blanches has changed over time and the name has been used in different ways. 

By last week, before the summary judgment hearing, all that was at issue in the case were two statements. The first was that “in the Montagnes Blanches Forest in Quebec, there are three caribou herds, and in the Caribou Forest in Ontario there is an additional herd where habitat disturbance, including some from Resolute’s operations, is jeopardizing their survival.” The second was that Resolute “acquired three harvest blocks through auction sales inside the Montagnes Blanches… All three sites have been logged.” 

In his order last week, U.S. District Judge Jon Tigar found that Resolute hadn’t shown it could prove that Greenpeace had acted with “actual malice,” a key element in a defamation claim. It’s a high standard that requires that a defendant knew it was making false statements, or had “reckless disregard” for the truth. “The term ‘Montagnes Blanches’ has acquired more than one meaning and does not universally refer to one fixed geographic area,” the judge wrote. The evidence did not show that “Montagnes Blanches” had the one particular meaning that Resolute said it should have. 

Because Resolute failed to prove actual malice on the part of Greenpeace (and the unfair competition claim was based on the defamation claim), the judge granted Greenpeace’s motion for summary judgment.

“How We Work for a Better World”

At a rally before the hearing last week, Greenpeace leaders spoke about the history of protest and corporations and the danger of SLAPPs. 

“The point of SLAPPs is to silence, intimidate, distract, bankrupt, and ultimately squash public participation,” said Greenpeace former executive director Annie Leonard. “But public participation is how we work for a better world. It’s the democratic tools that we use to promote peaceful change. It’s free speech, it’s science-based advocacy, it’s campaigning, it’s public education, it’s peaceful protest, it’s solidarity. Public participation is activism, and these SLAPP suits are designed to stop activism.” 


Amy Moas, a Greenpeace senior forest campaigner and one of the named defendants in the lawsuit, talked about how the invasive discovery process that took place over what boiled down to two statements by Greenpeace disrupted her life and work. 

“They took this phone, and the one before it, to scour all my text messages,” she said. “[Resolute lawyers] posed really outrageous questions, for more than 16 hours, trying to twist my words. I am proud I told everybody that would listen, all around the world, what Resolute Forest Products was doing to the forests. And I’m proud that we’re still here today, speaking truth to power.” 

EFF is proud to stand with Greenpeace and other organizations fighting harassing lawsuits that infringe on their First Amendment rights. Disagreements about environmental protection should be handled in the public sphere, and lawsuits should not be used to bury opponents in time-consuming and expensive litigation as a way to bypass the democratic process. 

We need strong anti-SLAPP laws so that everyone can get the level of protection that Greenpeace had in this case, if not more. Greenpeace was able to invoke the California anti-SLAPP statute to get certain claims dismissed in 2019. These types of laws can stop harassing lawsuits from moving beyond initial phases. EFF will continue to advocate for strong anti-SLAPP laws at the state and federal level.

Documents from this case: 

  • Resolute Amended Complaint (2017)
  • Order Granting in Part Motion to Dismiss (2019)
  • Greenpeace Motion for Summary Judgment (December 2022)
  • Order Granting Summary Judgment (April 21, 2023) 

Joe Mullin

EFF Now Has Tor Onions

1 month ago

Today, we’re announcing .onion addresses for and two of its affiliated projects: Certbot, an EFF-developed tool for automatically obtaining and renewing TLS certificates for websites, and Surveillance Self-Defense, which provides resources and guidance for individuals and organizations to protect themselves from surveillance and other security threats.

We have been made aware of events indicating that some of our resources may be subject to censorship. By accessing these websites through their Tor .onion addresses, users can further protect their privacy and security while gaining another avenue to important information.

A Tor onion address is a unique identifier for a hidden service hosted on the Tor network. It is a random-looking string of letters and numbers followed by the ".onion" top-level domain. Unlike traditional websites, which have a public IP address that can be used to locate the server hosting the website, Tor hidden services have a unique address on the Tor network that provides end-to-end encryption and anonymity. Tor routes the connection through several “relays,” which can be run by different individuals or organizations all over the world. The final “exit relay” connects to the destination website normally. Your ISP can see that you’re using Tor, but cannot easily see what site you are visiting.
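That address string is not actually arbitrary: for current (v3) hidden services, the label before ".onion" is a base32 encoding of the service’s ed25519 public key plus a checksum and version byte. As a minimal illustration (this is our own sketch, not code from EFF or the Tor Project), here is how one might check that a string is a well-formed v3 onion address, following the checksum construction in the Tor rendezvous specification:

```python
import base64
import hashlib


def is_valid_v3_onion(addr: str) -> bool:
    """Check that a string is a well-formed v3 .onion address.

    A v3 address label is 56 base32 characters encoding:
      32-byte ed25519 public key || 2-byte checksum || version byte (0x03)
    where checksum = SHA3-256(".onion checksum" || pubkey || version)[:2].
    """
    if not addr.endswith(".onion"):
        return False
    label = addr[: -len(".onion")]
    if len(label) != 56:
        return False
    try:
        raw = base64.b32decode(label.upper())
    except Exception:
        return False  # not valid base32
    pubkey, checksum, version = raw[:32], raw[32:34], raw[34:]
    if version != b"\x03":
        return False
    expected = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return checksum == expected
```

Because the address is derived from the service’s own public key, a Tor client can verify it is talking to the right service without any certificate authority in the loop.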

Tor onions are useful for hosting a “copy” of your website within the Tor network without the need to “leave” via an exit relay, providing an extra layer of protection and obscurity.

EFF has long partnered with Tor and supported the project, so we are glad to have our own resources hosted on the Tor network for those in parts of the world where internet surveillance is heightened or access is restricted by oppressive regimes and laws.




Alexis Hancock

Texas Should Leave Its Anti-SLAPP Law Alone

1 month 1 week ago

The Texas Citizens Participation Act, or TCPA, has been one of the strongest laws in the nation protecting citizens against lawsuits intended to silence or punish individuals who speak up on public matters. But HB 2781, a bill making its way through the state's legislature right now, would needlessly undercut the protections Texans have enjoyed for more than a decade.

Sometimes lawsuits are filed to chill speech or harass people, rather than resolve legitimate legal disputes. These types of censorious lawsuits have been dubbed Strategic Lawsuits Against Public Participation, or SLAPPs. Those who bring SLAPPs hope that the time and money people need to defend themselves against the claims—and the stress that results—will intimidate them into silence. Anti-SLAPP laws such as the TCPA protect people from this kind of harassment. For example, thanks to the TCPA's protections, a Texas court in 2016 dismissed a $1 million lawsuit that a pet-sitting company filed against a Dallas couple just for leaving the business a one-star Yelp review.

Effective anti-SLAPP laws like the current TCPA allow judges to quickly review whether someone's been hit with a lawsuit for speaking out on a matter of public concern. During that time, other court proceedings are put on hold. If it’s determined that the case is a SLAPP, the lawsuit gets thrown out and the SLAPP victim can recover their legal fees. HB 2781 would remove this automatic stay if a motion to dismiss a SLAPP suit is found to be frivolous, untimely, or subject to a statutory exemption.

This is a mistake. Courts, after all, are not always right. Recent Texas Supreme Court cases such as Kinder Morgan v. Scurry County and Montelongo v. Abrea show that both trial courts and courts of appeal considering anti-SLAPP motions can easily decide timeliness issues incorrectly.

As EFF noted in its letter opposing the bill, the existing automatic stay is key to the TCPA’s protections. Otherwise, a person trying to use the law's protections to avoid unnecessary legal costs must instead simultaneously juggle not only the SLAPP suit itself but also the anti-SLAPP motion. This would be a tremendous waste of time and judicial resources.

A number of groups representing many different interests publicly oppose HB 2781, including Yelp, several news media outlets and organizations, the Better Business Bureau, and a coalition of groups including: Institute for Free Speech, ACLU of Texas, Americans for Prosperity-Texas, Center for Biological Diversity, Competitive Enterprise Institute, Foundation for Individual Rights and Expression (FIRE), Institute for Justice, National Coalition Against Censorship, National Right to Life, National Taxpayers Union, PEN America, Public Participation Project, The Authors Guild, and the True Texas Project.

EFF stands with these groups against HB 2781. Texas should not change an important law protecting people's speech rights in favor of those who use the courts as a tool for intimidation.

Joe Mullin

Internal Documents Show How Little the FBI Did to Correct Misuse of Section 702 Databases

1 month 1 week ago

The Federal Bureau of Investigation (FBI) has released internal documents used to guide agency personnel on how to search the massive databases of information collected under the Foreign Intelligence Surveillance Act, including communications collected without a warrant under Section 702. Despite reassurances from the intelligence community about its “culture of compliance,” these documents depict almost no substantial consideration of privacy or civil liberties. They also suggest that in the years before these guidelines were written, even amidst widespread FBI misuse of the databases to search for Americans’ communications, there were even fewer written guidelines governing their use. Above all, FBI agents can still search for and read Americans’ private communications collected under Section 702, all without a warrant or judicial oversight.

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications so long as the surveillance is directed at foreigners currently located outside the United States. It also prohibits intentionally targeting Americans. Nevertheless, the NSA routinely (“incidentally”) acquires innocent Americans' communications without a probable cause warrant. Once collected, the FBI can search through this massive database of information by “querying” the communications of specific individuals.

In 2021 alone, the FBI conducted up to 3.4 million warrantless searches of Section 702 data to find Americans’ communications. Congress and the FISA Court have imposed modest limitations on these “backdoor searches,” but according to several recent FISA Court opinions, the FBI has engaged in “widespread violations” of even these minimal privacy protections.

After a string of scandals, these newly released documents demonstrate some of the steps the FBI took to train personnel who apparently did not understand how to stay within the law’s extremely broad mandate. Namely, to query the collected communications of U.S. persons only if they are investigating foreign intelligence, a crime, or both, still without judicial review. According to the FBI director and media reports, these guidelines led to a significant drop in unauthorized searches, but even this “dramatic” drop still allegedly resulted in over two hundred thousand warrantless searches of Americans’ private communications in 2022 alone. That’s two hundred thousand too many; Congress should close the “backdoor loophole” and require the FBI to get a search warrant.

In addition to stopping the unconstitutional surveillance, Congress needs to include robust new transparency measures into any reauthorization of Section 702 to enable future audits and accountability of these secretive programs. FISA has long contained procedures for private parties to sue over surveillance that violates their rights, including a mechanism for considering classified evidence while preserving national security. But, in lawsuit after lawsuit, the executive branch has sought to avoid these procedures, and the judiciary, including the Supreme Court, has adopted cramped readings of the law that create a de facto national security exception to the Constitution.

EFF is far from alone in this fight to reform Section 702. Not only are we joined by a large number of civil liberties and civil rights groups, even members of the Executive Branch’s Privacy and Civil Liberties Oversight Board (PCLOB) have announced that this program should not continue as is. PCLOB member Travis LeBlanc said at a conference, “Given what I have seen and what I know, I do have several concerns about a clean reauthorization without significant, common-sense reforms to safeguard privacy and civil liberties.”

Section 702 has become something Congress never intended: a domestic spying tool. Congress should consider ending the program entirely, but certainly not reauthorize Section 702 without critical reforms, including true accountability and oversight.

Matthew Guariglia

Your Messaging Service Should Not Be a DEA Informant

1 month 1 week ago

A new U.S. Senate bill would require private messaging services, social media companies, and even cloud providers to report their users to the Drug Enforcement Administration (DEA) if they find out about certain illegal drug sales. This would lead to inaccurate reports and turn messaging services into government informants.

The bill, named the Cooper Davis Act, is likely to result in a host of inaccurate reports and in companies sweeping up innocent conversations, including discussions about past drug use or treatment. While not explicitly required, it may also give internet companies an incentive to conduct dragnet searches of private messages to find protected speech that is merely indicative of illegal behavior.

Most troubling, this bill is a template for legislators to try to force internet companies to report their users to law enforcement for other unfavorable conduct or speech. This bill aims to cut down on the illegal sales of fentanyl, methamphetamine, and counterfeit narcotics. But what would prevent the next bill from targeting marijuana or the sale or purchase of abortion pills, if a new administration deemed those drugs unsafe or illegal for purely political reasons? As we've argued many times before, once the framework exists, it could easily be expanded.

The Bill Requires Reporting to the DEA

The law targets the “unlawful sale or distribution of fentanyl, methamphetamine” and “the unlawful sale, distribution or manufacture of a counterfeit controlled substance.”

Under the law, providers are required to report to the DEA when they gain actual knowledge of facts about those drug sales or when a user makes a reasonably believable report about those sales. Providers are also allowed to make reports when they have a reasonable belief about those facts or have actual knowledge that a sale is planned or imminent. Importantly, providers can be fined hundreds of thousands of dollars for a failure to report.

Providers have discretion on what to include in a report. But they are encouraged to turn over personal information about the users involved, location information, and complete communications. The DEA can then share the reports with other law enforcement.

The law also makes a “request” that providers preserve the report and other relevant information (so law enforcement can potentially obtain it later). And it prevents providers from telling their users about the preservation, unless they first notify the DEA.

We Have Seen This Reporting Scheme Before

The bill is modeled off existing law that requires similar reporting about child sexual abuse material (CSAM). Lawmakers also previously tried and failed to use this reporting scheme to target vaguely defined terror content. This bill would port over some of the same flaws.

Under existing law, providers are required to report actual knowledge of CSAM to a group called the National Center for Missing and Exploited Children, a quasi-governmental entity that later forwards on some reports to law enforcement. Companies base some of their reporting on matches found by comparing digital signatures of images to an existing database of previously removed CSAM. Notably, this new bill requires reporting directly to the DEA, and the content at issue (drug sales) is markedly harder and more subjective to identify. While actual CSAM is unprotected by the First Amendment, mere discussion of drug use is protected speech. Due to the liability they would face for failing to report, some companies may overreport using content-scanning tools that we know have large error rates in other contexts.
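The digital-signature matching described above can be illustrated with a deliberately simplified sketch: compare a fingerprint of uploaded content against a database of fingerprints of previously identified material. (The names here are hypothetical, and real deployments use perceptual hashes, such as PhotoDNA, that tolerate resizing and re-encoding; we use an exact cryptographic hash only to show the shape of the scheme.)

```python
import hashlib


def digest(data: bytes) -> str:
    """Exact cryptographic fingerprint of a piece of content."""
    return hashlib.sha256(data).hexdigest()


def should_report(upload: bytes, known_digests: set) -> bool:
    """Flag an upload only if its fingerprint is already in the database.

    An exact hash misses any altered copy, which is one reason real
    systems rely on perceptual hashing instead -- and why matching
    "drug sale" content, for which no such database exists at all,
    is far harder and more subjective.
    """
    return digest(upload) in known_digests
```

Note that this scheme only works when there is a curated database of known material to match against; nothing comparable exists for conversations about drug sales, which is exactly why error-prone content scanning is the likely result.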

Despite strong challenges, the existing CSAM reporting law has so far survived Fourth Amendment scrutiny because the government does not explicitly compel providers to search through their users’ communications (it only requires reporting if providers decide to search on their own). However, some applications of existing law have violated the Constitution—specifically, when providers make a report without fully examining the material they are reporting. In those cases, law enforcement has been deemed to have exceeded the scope of the private search by providers, which should require a warrant.

Like with this bill, a separate piece of the existing CSAM law requires providers to preserve user content after making a report. But there is increasing recognition that this compelled preservation constitutes a Fourth Amendment seizure that removes a user’s rights to delete their own content.

We Should Strengthen the Privacy of User Communications, Not Weaken It

After years of attempts to weaken privacy, lawmakers should focus their interest on strengthening protections for user content. Under the 1986 Electronic Communications Privacy Act (ECPA), providers are generally restricted from handing over user information to law enforcement without some kind of legal process—whether it be a warrant, court order, or subpoena. However, this bill creates another carveout.

Rather than carving up ECPA, we need to update and strengthen the decades-old protections. EFF has been making this argument for more than a decade. And states like California have charted a path forward and will hopefully continue.

More immediately, if lawmakers do not abandon the Cooper Davis Act, the worst aspects must be avoided. When considering amendments, lawmakers should:

  • Make the reporting scheme entirely voluntary
  • Require the DEA to delete reports that contain innocent content, and prevent the DEA from targeting individual purchasers based on a report
  • Commission a study and create a sunset date to see if this reporting scheme even serves its stated purpose
  • At minimum, require the government to get a warrant for the lengthy preservation of content associated with a report
  • Make it easier for companies to notify their users about preservation requests, similar to the NDO Fairness Act

Mario Trujillo

The DMCA Cannot Protect You From Your Own Words

1 month 1 week ago

There is a loud debate raging over what companies should and shouldn’t be doing about the things people say on their platforms. What people often seem to forget is that we already know the dangers of providing a quick way for people to remove criticism of themselves from the internet. Thanks to copyright law’s disastrous damages provisions, all but the largest social media companies risk financial ruin if they don’t promptly remove any content that’s been flagged as infringing. As a result, copyright complaints are a highly effective way to get a post you don’t like taken offline fast.

This is exactly what happened recently to a journalist who resurfaced an actor’s own words to raise concern about his actions. When David Choe came back into the public eye thanks to his role in a popular Netflix series, some people, including investigative journalist Aura Bogado, remembered a story he told in a 2014 podcast about sexually assaulting a masseuse. Bogado talked about it on Twitter and included a link to the video clip from the podcast (which she had obtained from fellow journalist Melissa Stetten, who broke the story in 2014). Choe responded to the controversy in at least two ways: by insisting he fabricated the story, and by using false copyright claims to try to get the video erased from the internet.

His first strategy may be effective. The second should not be.

The David Young Choe Foundation claims it owns the copyright in the podcast, which may or may not extend to the specific episode in question. But whether or not Choe owns the rights, Bogado’s posting of the short clip was an obviously lawful fair use – classic criticism and commentary, with receipts.

Abusing copyright to shut down online speech isn’t new – it’s been well-documented for decades. But copyright holders continue to insist that it’s not a real problem. Tell that to Bogado, who is two complaints away from losing her Twitter account. Bogado has counter-noticed, a procedure that essentially allows users to seek restoration, but only if they are willing to submit to the jurisdiction of a federal court if the copyright holder decides to sue them. With EFF’s help, Bogado is willing to take the risk that Choe will come to his senses, talk to a lawyer, and realize that his complaint is absurd (something he would already know if he had considered whether the use is a fair use, as he was legally required to do). Many fair users are not willing to take that risk, much less able to find pro bono counsel to help them understand their options.

Given the proliferation of misinformation online, it’s crucial to protect everyone’s ability to point to the facts. Let’s hope that Choe learns his lesson – and that other copyright bullies do too.

Corynne McSherry

California Bill to Stop Dragnet Surveillance of People Seeking Reproductive and Gender-Affirming Care Passes Key Committees

1 month 1 week ago

A.B. 793, a bill authored by Assemblymember Mia Bonta to protect people seeking abortion and gender-affirming care from dragnet-style digital surveillance, has passed two key committees in the California Assembly.

EFF is a proud co-sponsor of A.B. 793, along with ACLU California Action and If/When/How. The bill targets a type of dragnet surveillance that can compel tech companies to search their records and reveal the identities of people who have driven down a certain street or looked up particular keywords online. These demands, known as “reverse demands,” “geofence warrants,” or “keyword warrants,” enable law enforcement in states across the country to request the names and identities of people whose digital data shows they’ve (for example) spent time near a California abortion clinic or searched for information about gender-affirming care online. EFF has long opposed the use of these unconstitutional warrants; following the Dobbs decision and an increase in laws criminalizing gender-affirming care, they pose an even greater threat.

So far, California lawmakers seem to understand these dangers. The bill passed on a bipartisan vote out of the Assembly Public Safety committee on April 11. Last week, it also passed the Assembly Judiciary committee.

More than 50 civil liberties, reproductive justice, healthcare equity, and LGBTQ+ advocacy groups form the support coalition on the bill, including NARAL Pro-Choice California, Equality California, Planned Parenthood Affiliates of California, and the American Nurses Association/California. The bill is now headed to the Assembly Appropriations Committee.

If you're a Californian who'd like to express your support for protecting the privacy of vulnerable people seeking healthcare—and particularly if you live in the district of Assembly Appropriations Chair Chris Holden, northeast of Los Angeles—please speak up for this bill.

Take Action

Tell Your Lawmakers To Support A.B. 793

Hayley Tsukayama

First Appellate Court Finds Geofence Warrant Unconstitutional

1 month 1 week ago

The California Court of Appeal has held that a geofence warrant seeking information on all devices located within several densely populated areas in Los Angeles violated the Fourth Amendment. This is the first time an appellate court in the United States has reviewed a geofence warrant. The case is People v. Meza, and EFF filed an amicus brief and jointly argued the case before the court.

Geofence warrants, which we have written about extensively before, are unlike typical warrants for electronic information because they don’t name a suspect and are not even targeted to specific individuals or accounts. Instead, they require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located in a geographic area during a time period specified by law enforcement.

In the Meza case, Los Angeles Sheriff’s Department deputies were investigating a homicide and had video footage suggesting the suspects followed the victim from one location to another before committing the crime. To try to identify the unknown suspects, they sought a warrant that would force Google to turn over identifying information for every device with a Google account that was within any of six locations over a five-hour window. The warrant covered time periods when people were likely to be in sensitive places, like their homes, or driving along busy streets. In total, police requested data for a geographic area equivalent to about 24 football fields (five to six city blocks), which included large apartment buildings, churches, barber shops, nail salons, medical centers, restaurants, a public library, and a union headquarters.

Typically, as in this case, geofence warrants lay out a three-step process by which police are supposed to execute the warrant: first, Google provides anonymized identifiers for each device within the geofenced area; second, police identify a subset of those devices and ask Google for additional information on where those devices traveled over an expanded time period; and finally, police identify a further subset of the anonymized devices and ask Google to unmask them and provide detailed account information for those device owners. A judge is only involved in issuing the initial warrant, and police have little or no direction from the court on how they should narrow down the devices they ultimately ask Google to identify. This can allow the police to arbitrarily alter the process, as they did in this case, or attempt to unmask hundreds or even thousands of devices, as they have in other cases.

In Meza, the Court of Appeal found that these problems doomed the geofence warrant at issue. The court held the warrant was invalid under the Fourth Amendment because it failed “to place any meaningful restriction on the discretion of law enforcement officers to determine which accounts would be subject to further scrutiny or deanonymization.” The court also held the warrant was overbroad because it “authorized the identification of any individual within six large search areas without any particularized probable cause as to each person or their location.” The court held the geographic areas and time periods covered by the warrant were impermissibly broad because they included areas where the suspects could not have been (like inside apartments) and covered time periods when police knew—based on time-stamped video footage—that the suspects had already moved on. This part of the court’s opinion largely tracks prior lower court rulings.

Defendants also argued that the warrant violated California’s landmark Electronic Communications Privacy Act (CalECPA), which requires state warrants for electronic communication information to “describe with particularity the information to be seized by specifying, as appropriate and reasonable . . . the target individuals or accounts.” Defendants argued that a warrant that seeks information on every individual or account fails to meet this requirement.

Unfortunately, here the court disagreed. The court focused on the statutory language limiting CalECPA’s particularity requirement to requiring police only specify accounts and individuals when it is “appropriate and reasonable” to do so. The court held the geofence warrant met this requirement by making it clear that police sought “individuals whose devices were located within the search boundaries at certain times,” even though it failed to identify those individuals.

Ultimately, the court’s CalECPA analysis proved fatal to the defendants’ case. Despite ruling the warrant violated the Fourth Amendment, the court refused to suppress the evidence, finding the officers acted in good faith based on a facially valid warrant. And while CalECPA has its own suppression remedy, the court held it only applied when there was a statutory violation, not when the warrant violated the Fourth Amendment alone. This is in clear contradiction to an earlier California geofence ruling, although that decision came from a trial court, not the Court of Appeal.

The court’s ruling creates an incongruous and possibly dangerous precedent on CalECPA’s particularity requirements and suppression remedy. Contrary to the court’s interpretation, CalECPA was intended to offer greater protections than existing Fourth Amendment and federal statutory law—especially for location data like that revealed through a geofence warrant. CalECPA also makes clear that its suppression remedy applies to violations of both the Fourth Amendment and CalECPA itself. Yet the California Court of Appeal’s holding in Meza ignores legislators’ clear intent in passing CalECPA. It appears to create a lower particularity standard for CalECPA warrants than warrants issued under the Fourth Amendment alone, and it significantly undermines CalECPA’s power by constraining its suppression remedy solely to statutory violations.

Given the court’s harmful CalECPA analysis and the fact that the number of geofence warrants used by police continues to increase year over year, we hope defendants will petition the California Supreme Court for review. We will continue to support them if they do.

Jennifer Lynch