✍️ The Bill to Hand Parenting to Big Tech | EFFector 38.2

11 hours 12 minutes ago

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. We're diving into the latest attempt to control how kids access the internet, and more, in this issue of our EFFector newsletter.

Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks what to do when you hit an age gate online, explains why rent-only copyright culture makes us all worse off, and covers the dangers of law enforcement purchasing straight-up military drones.

Prefer to listen in? In our audio companion, EFF Senior Policy Analyst Joe Mullin explains what lawmakers should do if they really want to help families. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.2 - ✍️ THE BILL TO HAND PARENTING TO BIG TECH

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from unlawful surveillance when you support EFF today!

Christian Romero

DSA Human Rights Alliance Publishes Principles Calling for DSA Enforcement to Incorporate Global Perspectives

23 hours 14 minutes ago

Since its founding by EFF and Access Now in 2021, the Digital Services Act (DSA) Human Rights Alliance has worked to ensure that the European Union follows a human rights-based approach to platform governance, both by integrating a wide range of voices and perspectives to contextualise DSA enforcement and by examining the DSA’s effect on tech regulations around the world.

As the DSA moves from legislation to enforcement, it has become increasingly clear that its impact depends not only on the text of the Act but also on how it’s interpreted and enforced in practice. This is why the Alliance has created a set of recommendations to include civil society organizations and rights-defending stakeholders in the enforcement process.

The Principles for a Human Rights-Centred Application of the DSA: A Global Perspective, a report published this week by the Alliance, outlines steps that the European Commission, as the main DSA enforcer, along with national policymakers and regulators, should take to bring diverse groups to the table, ensuring that the implementation of the DSA is grounded in human rights standards.

The Principles also offer guidance for regulators outside the EU who look to the DSA as a reference framework, as well as for international bodies and global actors concerned with digital governance and the wider implications of the DSA. The Principles promote meaningful stakeholder engagement and emphasize the role of civil society organisations in providing expertise and acting as human rights watchdogs.

“Regulators and enforcers need input from civil society, researchers, and affected communities to understand the global dynamics of platform governance,” said EFF International Policy Director Christoph Schmon. “Non-EU-based civil society groups should be enabled to engage on equal footing with EU stakeholders on rights-focused elements of the DSA. This kind of robust engagement will help ensure that DSA enforcement serves the public interest and strengthens fundamental rights for everyone, especially marginalized and vulnerable groups.”

“As activists are increasingly intimidated, journalists silenced, and science and academic freedom attacked by those who claim to defend free speech, it is of utmost importance that the Digital Services Act's enforcement is centered around the protection of fundamental rights, including the right to the freedom of expression,” said Marcel Kolaja, Policy & Advocacy Director—Europe at Access Now. “To do so effectively, the global perspective needs to be taken into account. The DSA Human Rights Principles provide this perspective and offer valuable guidance for the European Commission, policymakers, and regulators for implementation and enforcement of policies aiming at the protection of fundamental rights.”

“The Principles come at a crucial moment for EU candidate countries, such as Serbia, that have been aligning their legislation with the EU acquis but still struggle with some basic rule of law and human rights standards,” said Ana Toskic Cvetinovic, Executive Director of Partners Serbia. “The DSA HR Alliance offers the opportunity for non-EU civil society to learn about the existing challenges of DSA implementation and design strategies for impacting national policy development in order to minimize any negative impact on human rights.”

The Principles call for:

◼ Empowering EU and non-EU Civil Society and Users to Pursue DSA Enforcement Actions

◼ Considering Extraterritorial and Cross-Border Effects of DSA Enforcement

◼ Promoting Cross-Regional Collaboration Among CSOs on Global Regulatory Issues

◼ Establishing Institutionalised Dialogue Between EU and Non-EU Stakeholders

◼ Upholding the Rule of Law and Fundamental Rights in DSA Enforcement, Free from Political Influence

◼ Considering Global Experiences with Trusted Flaggers and Avoiding Enforcement Abuse

◼ Recognising the International Relevance of DSA Data Access and Transparency Provisions for Human Rights Monitoring

The Principles have been signed by 30 civil society organizations, researchers, and independent experts.

The DSA Human Rights Alliance represents diverse communities across the globe to ensure that the DSA embraces a human rights-centered approach to platform governance and that EU lawmakers consider the global impacts of European legislation.

 

Karen Gullo

Beware: Government Using Image Manipulation for Propaganda

1 day 10 hours ago

U.S. Homeland Security Secretary Kristi Noem last week posted a photo of the arrest of Nekima Levy Armstrong, one of three activists who had entered a St. Paul, Minn. church to confront a pastor who also serves as acting field director of the St. Paul Immigration and Customs Enforcement (ICE) office.

A short while later, the White House posted the same photo – except that version had been digitally altered to darken Armstrong’s skin and rearrange her facial features to make it appear she was sobbing or distraught. The Guardian, one of many media outlets to report on this image manipulation, created a handy slider graphic to help viewers see clearly how the photo had been changed.

The New York Times reported it had run the two images through Resemble.AI, an A.I. detection system, which concluded Noem’s image was real but the White House’s version showed signs of manipulation. "The Times was able to create images nearly identical to the White House’s version by asking Gemini and Grok — generative A.I. tools from Google and Elon Musk’s xAI start-up — to alter Ms. Noem’s original image." 

Most of us can agree that the government shouldn’t lie to its constituents. We can also agree that good government does not involve emphasizing cruelty or furthering racial biases. But this abuse of technology violates both those norms. 

“Accuracy and truthfulness are core to the credibility of visual reporting,” the National Press Photographers Association said in a statement issued about this incident. “The integrity of photographic images is essential to public trust and to the historical record. Altering editorial content for any purpose that misrepresents subjects or events undermines that trust and is incompatible with professional practice.” 

This isn’t about “owning the libs” — this is the highest office in the nation using technology to lie to the entire world.

Reworking an arrest photo to make the arrestee look more distraught is not only a lie, it’s also a doubling-down on a “the cruelty is the point” manifesto. Using a manipulated image further humiliates the individual and perpetuates harmful biases, and the only reason to darken an arrestee’s skin would be to reinforce colorist stereotypes and stoke the flames of racial prejudice, particularly against dark-skinned people.

History is replete with cruel and racist images as propaganda: Think of Nazi Germany’s cartoons depicting Jewish people, or contemporaneous U.S. cartoons depicting Japanese people as we placed Japanese-Americans in internment camps. Time magazine caught hell in 1994 for using an artificially darkened photo of O.J. Simpson on its cover, and several Republican political campaigns have been called out for similar manipulation in recent years.

But in an age when we can create or alter a photo with a few keyboard strokes, when we can alter what viewers think is reality so easily and convincingly, the danger of abuse by government is greater.   

Had the Trump administration not ham-handedly released the retouched perp-walk photo after Noem had released the original, we might not have known the reality of that arrest at all. This dishonesty is all the more reason why Americans’ right to record law enforcement activities must be protected. Without independent records and documentation of what’s happening, there’s no way to contradict the government’s lies. 

This incident raises the question of whether the Trump Administration feels emboldened to manipulate other photos for other propaganda purposes. Does it rework photos of the President to make him appear healthier, or more awake? Does it rework military or intelligence images to create pretexts for war? Does it rework photos of American citizens protesting or safeguarding their neighbors to justify a military deployment? 

In this instance, like so much of today’s political trolling, there’s a good chance it’ll be counterproductive for the trolls: The New York Times correctly noted that the doctored photograph could hinder Armstrong’s right to a fair trial. “As the case proceeds, her lawyers could use it to accuse the Trump administration of making what are known as improper extrajudicial statements. Most federal courts bar prosecutors from making any remarks about court filings or a legal proceeding outside of court in a way that could prejudice the pool of jurors who might ultimately hear the case.” They also could claim the doctored photo proves the Justice Department bore some sort of animus against Armstrong and charged her vindictively.

In the past, we've urged caution when analyzing proposals to regulate technologies that could be used to create false images. In those cases, we argued that any new regulation should rely on the established framework for addressing harms caused by other forms of harmful false information. But in this situation, it is the government itself that is misusing technology and propagating harmful falsehoods. This doesn't require new laws; the government can and should put an end to this practice on its own. 

Any reputable journalism organization would fire an employee for manipulating a photo this way; many have done exactly that. It’s a shame our government can’t adhere to such a basic ethical and moral code too. 

Josh Richman

EFF Statement on ICE and CBP Violence

2 days 4 hours ago

Dangerously unchecked surveillance and rights violations have been a throughline of the Department of Homeland Security since the agency’s creation in the wake of the September 11th attacks. In particular, Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have been responsible for countless civil liberties and digital rights violations since that time. In the past year, however, ICE and CBP have descended into utter lawlessness, repeatedly refusing to exercise or submit to the democratic accountability required by the Constitution and our system of laws.  

The Trump Administration has made indiscriminate immigration enforcement and mass deportation a key feature of its agenda, with little to no accountability for illegal actions by agents and agency officials. Over the past year, we’ve seen massive ICE raids in cities from Los Angeles to Chicago to Minneapolis. Supercharged by an unprecedented funding increase, immigration enforcement agents haven’t been limited to boots on the ground: they’ve been scanning faces, tracking neighborhood cell phone activity, and amassing surveillance tools to monitor immigrants and U.S. citizens alike. 

The latest enforcement actions in Minnesota have led to federal immigration agents killing Renee Good and Alex Pretti. Both were engaged in their First Amendment right to observe and record law enforcement when they were killed. And it’s only because others similarly exercised their right to record that these killings were documented and widely exposed, countering false narratives the Trump Administration promoted in an attempt to justify the unjustifiable.  

These constitutional violations are systemic, not one-offs. Just last week, the Associated Press reported on a leaked ICE memo that authorizes agents to enter homes based solely on “administrative” warrants—lacking any judicial involvement. This government policy is contrary to the “very core” of the Fourth Amendment, which protects us against unreasonable search and seizure, especially in our own homes.

These violations must stop now. ICE and CBP have grown so disdainful of the rule of law that reforms or guardrails cannot suffice. We join with many others in saying that Congress must vote to reject any further funding of ICE and CBP this week. But that is not enough. It’s time for Congress to do the real work of rebuilding our immigration enforcement system from the ground up, so that it respects human rights (including digital rights) and human dignity, with real accountability for individual officers, their leadership, and the agency as a whole.

Cindy Cohn

Search Engines, AI, And The Long Fight Over Fair Use

5 days 5 hours ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.

Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.

Today, the same argument is being recycled against AI. The underlying question is the same: whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works.

Fair Use Protects Analysis—Even When It’s Automated

U.S. courts have long recognized that copying for purposes of analysis, indexing, and learning is a classic fair use. That principle didn’t originate with artificial intelligence. It doesn’t disappear just because the processes are performed by a machine.

Copying works in order to understand them, extract information from them, or make them searchable is transformative and lawful. That’s why search engines can index the web, libraries can make digital indexes, and researchers can analyze large collections of text and data without negotiating licenses from millions of rightsholders. These uses don’t substitute for the original works; they enable new forms of knowledge and expression.
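
To make that concrete, here is a minimal sketch of what a search index actually retains (our illustration, written for this post, not any real engine’s code). The works are read during indexing, but the artifact that persists is a map from terms to the locations of documents. A query returns pointers to works; it never substitutes for them.

```python
from collections import defaultdict

def build_index(documents):
    """Toy inverted index: map each term to the IDs of documents
    containing it. The works are read during indexing, but what the
    index stores are pointers, not the texts themselves."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

# Hypothetical two-document corpus, purely for illustration.
docs = {
    "essay-1": "fair use protects analysis and indexing",
    "essay-2": "search engines copy pages to help readers locate them",
}
index = build_index(docs)
print(sorted(index["locate"]))  # ['essay-2'] -- a pointer to the work
```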

Training AI models fits squarely within that tradition. An AI system learns by analyzing patterns across many works. The purpose of that copying is not to reproduce or replace the original texts, but to extract statistical relationships that allow the AI system to generate new outputs. That is the hallmark of a transformative use. 
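
As a toy illustration of what “extracting statistical relationships” can mean (again our sketch, nothing like a production system), consider a bigram model. It reads texts, keeps only word-pair counts, and samples new sequences from those counts. After training, what persists is statistics about language, not copies of the works:

```python
import random
from collections import defaultdict

def train(texts):
    """Count word-to-next-word frequencies across a corpus. The texts
    are read during training, but only these counts persist."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in texts:
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length=8):
    """Sample a new word sequence from the learned statistics."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        word = random.choices(list(followers),
                              weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

# Hypothetical two-sentence corpus, purely for illustration.
corpus = [
    "copyright law protects creative expression",
    "fair use protects analysis of creative works",
]
model = train(corpus)
print(generate(model, "fair"))  # e.g. "fair use protects creative works"
```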

Attacking AI training on copyright grounds misunderstands what’s at stake. If copyright law is expanded to require permission for analyzing or learning from existing works, the damage won’t be limited to generative AI tools. It could threaten long-standing practices in machine learning and text-and-data mining that underpin research in science, medicine, and technology. 

Researchers already rely on fair use to analyze massive datasets such as scientific literature. Requiring licenses for these uses would often be impractical or impossible, and it would advantage only the largest companies with the money to negotiate blanket deals. Fair use exists to prevent copyright from becoming a barrier to understanding the world. The law has protected learning before. It should continue to do so now, even when that learning is automated. 

A Road Forward For AI Training And Fair Use 

One court has already shown how these cases should be analyzed. In Bartz v. Anthropic, the court found that using copyrighted works to train an AI model is a highly transformative use. Training is a way of studying how language works, not a means of reproducing or supplanting the original books. Any harm to the market for the original works was speculative.

The court in Bartz rejected the idea that an AI model might infringe because, in some abstract sense, its output competes with existing works. While EFF disagrees with other parts of the decision, the court’s ruling on AI training and fair use offers a good approach. Courts should focus on whether training is transformative and non-substitutive, not on fear-based speculation about how a new tool could affect someone’s market share. 

AI Can Create Problems, But Expanding Copyright Is the Wrong Fix 

Workers’ concerns about automation and displacement are real and should not be ignored. But copyright is the wrong tool to address them. Managing economic transitions and protecting workers during turbulent times may be core functions of government, but copyright law doesn’t help with that task in the slightest. Expanding copyright control over learning and analysis won’t stop new forms of worker automation—it never has. But it will distort copyright law and undermine free expression. 

Broad licensing mandates may also do harm by entrenching the current biggest incumbent companies. Only the largest tech firms can afford to negotiate massive licensing deals covering millions of works. Smaller developers, research teams, nonprofits, and open-source projects will all get locked out. Copyright expansion won’t restrain Big Tech—it will give it a new advantage.  

Fair Use Still Matters

Learning from prior work is foundational to free expression. Rightsholders cannot be allowed to control it. Courts have rejected that move before, and they should do so again.

Search, indexing, and analysis didn’t destroy creativity. Nor did the photocopier, nor the VCR. They expanded speech, access to knowledge, and participation in culture. Artificial intelligence raises hard new questions, but fair use remains the right starting point for thinking about training.

Joe Mullin

Rent-Only Copyright Culture Makes Us All Worse Off

6 days 6 hours ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

In the Netflix/Spotify/Amazon era, many of us access copyrighted works purely in digital form – and that means we rarely have the chance to buy them. Instead, we are stuck renting them, subject to all kinds of terms and conditions. And because the content is digital, reselling it, lending it, even preserving it for your own use inevitably requires copying. Unfortunately, when it comes to copying digital media, US copyright law has pretty much lost the plot.

As we approach the 50th anniversary of the 1976 Copyright Act, the last major overhaul of US copyright law, we’re not the only ones wondering if it’s time for the next one. It’s a high-risk proposition, given the wealth and influence of entrenched copyright interests who will not hesitate to send carefully selected celebrities to argue for changes that will send more money, into fewer pockets, for longer terms. But it’s equally clear that the law is overdue for an update, and nowhere is that more evident than in the waning influence of Section 109, aka the first sale doctrine.

First sale—the principle that once you buy a copyrighted work you have the right to re-sell it, lend it, hide it under the bed, or set it on fire in protest—is deeply rooted in US copyright law. Indeed, in an era where so many judges are looking to the Framers for guidance on how to interpret current law, it’s worth noting that the first sale principles (also characterized as “copyright exhaustion”) can be found in the earliest copyright cases and applied across the rights in the so-called “copyright bundle.”

Unfortunately, courts have held that first sale, at least as it was codified in the Copyright Act, only applies to distribution, not reproduction. So even if you want to copy a rented digital textbook to a second device, and you go through the trouble of deleting it from the first device, the doctrine does not protect you.

We’re all worse off as a result. Our access to culture, from hit songs to obscure indie films, is mediated by the whims of major corporations. With physical media, the first sale principle built bustling secondhand markets, community swaps, and libraries—places where culture can be shared and celebrated, while making it more affordable for everyone.

And while these new subscription or rental services have an appealing upfront cost, they come with a lot more precarity. If you love rewatching a show, you may be chasing it between services or find it is suddenly unavailable on any platform. Or, as fans of Mad Men or Buffy the Vampire Slayer know, you could be stuck with a terrible remaster as the only digital version available.

Last year we saw one improvement with California Assembly Bill 2426 taking effect. In California, companies must now at least disclose to potential customers if a “purchase” is a revocable license—i.e., if they can blow it up after you pay. A story driving this change was Ubisoft revoking access to “The Crew” and making customers’ copies unplayable a decade after launch.

On the federal level, EFF, Public Knowledge, and 15 other public interest organizations backed Sen. Ron Wyden’s message to the FTC to similarly establish clear ground rules for digital ownership and sales of goods. Unfortunately, FTC Chairman Andrew Ferguson has thus far turned down this easy win for consumers.

As for the courts, some scholars think they have just gotten it wrong. We agree, but it appears we need Congress to set them straight. The Copyright Act might not need a complete overhaul, but Section 109 certainly does. The current version hurts consumers, artists, and the millions of ordinary people who depend on software and digital works every day for entertainment, education, transportation, and, yes, growing our food.

We realize this might not be the most urgent problem Congress confronts in 2026—to be honest, we wish it were—but it’s a relatively easy one to solve. That solution could release a wave of new innovation, and equally importantly, restore some degree of agency to American consumers by making them owners again.

Corynne McSherry

Copyright Kills Competition

1 week ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright owners increasingly claim more draconian copyright law and policy will fight back against big tech companies. In reality, copyright gives the most powerful companies even more control over creators and competitors. Today’s copyright policy concentrates power among a handful of corporate gatekeepers—at everyone else’s expense. We need a system that supports grassroots innovation and emerging creators by lowering barriers to entry—ultimately offering all of us a wider variety of choices.

Pro-monopoly regulation through copyright won’t provide any meaningful economic support for vulnerable artists and creators. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is like trying to help a bullied kid by giving them more lunch money for the bully to take.

Entertainment companies’ historical practices bear out this concern. For example, from the late 2000s to the mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There’s no reason to think that these same companies would treat their artists more fairly now.

AI Training

In the AI era, copyright may seem like a good way to prevent big tech from profiting from AI at individual creators’ expense—it’s not. In fact, the opposite is true. Developing a large language model requires developers to train the model on millions of works. Requiring developers to license enough AI training data to build a large language model would limit competition to all but the largest corporations—those that either have their own trove of training data or can afford to strike a deal with one that does. This would result in all the usual harms of limited competition—higher costs, worse service, and heightened security risks—and fewer of the new, beneficial AI tools that allow people to express themselves or access information.

Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, the first of many copyright lawsuits over the use of works to train AI. ROSS Intelligence was a legal research startup that built an AI-based tool to compete with ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. ROSS trained its tool using “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. The tool didn’t output any of the headnotes, but Thomson Reuters sued ROSS anyway. A federal appeals court is still considering the key copyright issues in the case—which EFF weighed in on last year. EFF hopes that the appeals court will reject this overbroad interpretation of copyright law. But in the meantime, the case has already forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.

Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. The cost of licensing enough works to train an LLM would be prohibitively expensive for most would-be competitors.
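
Some rough, purely hypothetical arithmetic shows the scale of that barrier. Even a token per-work fee adds up to sums only the largest companies can pay (the corpus size and fees below are invented for illustration):

```python
# Purely illustrative arithmetic: even tiny per-work licensing fees
# become prohibitive at training-corpus scale (numbers hypothetical).
works_in_corpus = 10_000_000
for fee in (0.10, 1.00, 10.00):
    print(f"${fee:>5.2f}/work -> ${works_in_corpus * fee:>12,.0f} total")
# $ 0.10/work -> $   1,000,000 total
# $ 1.00/work -> $  10,000,000 total
# $10.00/work -> $ 100,000,000 total
```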

The DMCA’s “Anti-Circumvention” Provision

The Digital Millennium Copyright Act’s “anti-circumvention” provision is another case in point. Congress ostensibly passed the DMCA to discourage would-be infringers from defeating Digital Rights Management (DRM) and other access controls and copy restrictions on creative works.

In practice, it’s done little to deter infringement—after all, large-scale infringement already invites massive legal penalties. Instead, Section 1201 has been used to block competition and innovation in everything from printer cartridges to garage door openers, videogame console accessories, and computer maintenance services. It’s been used to threaten hobbyists who wanted to make their devices and games work better. And the problem only gets worse as software shows up in more and more places, from phones to cars to refrigerators to farm equipment. If that software is locked up behind DRM, interoperating with it so you can offer add-on services may require circumvention. As a result, manufacturers get complete control over their products, long after they are purchased, and can even shut down secondary markets (as Lexmark did for printer ink, and Microsoft tried to do for Xbox memory cards).

Giving rights holders a veto on new competition and innovation hurts consumers. Instead, we need balanced copyright policy that rewards creators without impeding competition.

Tori Noble

Copyright Should Not Enable Monopoly

1 week ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

There’s a crisis of creativity in mainstream American culture. We have fewer and fewer studios and record labels and fewer and fewer platforms online that serve independent artists and creators.  

At its core, copyright is a monopoly right on creative output and expression. It’s intended to allow people who make things to make a living through those things, to incentivize creativity. To square the circle that is “exclusive control over expression” and “free speech,” we have fair use.

However, we aren’t just seeing artists having a time-limited ability to make money off of their creations. We are also seeing large corporations turn into megacorporations, consolidating huge stores of copyrights under one umbrella. When the monopoly right granted by copyright is compounded by the speed and scale of media company mergers, we end up with a crisis in creativity.

People have been complaining about the lack of originality in Hollywood for a long time. What is interesting is that the response from the major studios has rarely, especially recently, been to invest in original programming. Instead, they have increased their copyright holdings through mergers and acquisitions. In today’s consolidated media world, copyright is doing the opposite of its intended purpose: instead of encouraging creativity, it’s discouraging it. The drive to snap up media franchises (or “intellectual properties”) that can generate sequels, reboots, spinoffs, and series for years to come has crowded out truly original and fresh creativity in many sectors. And since copyright terms last so long, there isn’t even a ticking clock to force these corporations to seek out new original creations.

In theory, the internet should provide a counterweight to this problem by lowering barriers to entry for independent creators. But as online platforms for creativity likewise shrink in number and grow in scale, they have closed ranks with the major studios.  

It’s a betrayal of the promise of the internet: that it should be a level playing field where you get to decide what you want to do, watch, listen to, read. And our government should be ashamed for letting it happen.  

Katharine Trendacosta

Statutory Damages: The Fuel of Copyright-based Censorship

1 week 1 day ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Imagine every post online came with a bounty of up to $150,000 paid to anyone who finds it violates opaque government rules—all out of the pocket of the platform. Smaller sites could be snuffed out, and big platforms would avoid crippling liability by aggressively blocking, taking down, and penalizing speech that even possibly violates these rules. In turn, users would self-censor, and opportunists would turn accusations into a profitable business.

This dystopia isn’t a fantasy; it’s close to how U.S. copyright’s broken statutory damages regime actually works.

Copyright includes “statutory damages,” which means letting a jury decide how big of a penalty the defendant will have to pay—anywhere from $200 to $150,000 per work—without the jury necessarily seeing any evidence of actual financial losses or illicit profits. In fact, the law gives judges and juries almost no guidelines on how to set damages. This is a huge problem for online speech.
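
A back-of-the-envelope sketch shows how quickly the per-work range compounds. The case below is hypothetical, but the per-work figures are the ones the statute sets:

```python
# Statutory damages per infringed work under U.S. copyright law:
INNOCENT_FLOOR = 200       # statutory minimum for innocent infringement
WILLFUL_CEILING = 150_000  # statutory maximum for willful infringement

# Hypothetical case: a small site reposted 40 images that a court
# later finds infringing, with no evidence of actual financial harm.
works = 40
print(f"possible award, low end:  ${works * INNOCENT_FLOOR:,}")   # $8,000
print(f"possible award, high end: ${works * WILLFUL_CEILING:,}")  # $6,000,000
```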

One way or another, everyone builds on the speech of others when expressing themselves online: quoting posts, reposting memes, sharing images from the news. For some users, re-use is central to their online expression: parodists, journalists, researchers, and artists use others’ words, sounds, and images as part of making something new every day. Both these users and the online platforms they rely on risk unpredictable, potentially devastating penalties if a copyright holder objects to some re-use and a court disagrees with the user’s well-intentioned efforts.

On Copyright Week, we like to talk about ways to improve copyright law. One of the most important would be to fix U.S. copyright’s broken statutory damages regime. In other areas of civil law, the courts have limited jury-awarded punitive damages so that they can’t be far higher than the amount of harm caused. Extremely large jury awards for fraud, for example, have been found to offend the Constitution’s Due Process Clause. But somehow, that’s not the case in copyright—some courts have ruled that Congress can set damages that are potentially hundreds of times greater than actual harm.

Massive, unpredictable damages awards for copyright infringement, such as a $222,000 penalty for sharing 24 music tracks online, are the fuel that drives overzealous or downright abusive takedowns of creative material from online platforms. Capricious and error-prone copyright enforcement bots, like YouTube’s Content ID, were created in part to avoid the threat of massive statutory damages against the platform. Those same damages create an ever-present bias in favor of major rightsholders and against innocent users in the platforms’ enforcement decisions. And they stop platforms from addressing the serious problems of careless and downright abusive copyright takedowns.

By turning litigation into a game of financial Russian roulette, statutory damages also discourage artistic and technological experimentation at the boundaries of fair use. None but the largest corporations can risk ruinous damages if a well-intentioned fair use crosses the fuzzy line into infringement.

“But wait,” you might say, “don’t legal protections like fair use and the safe harbors of the Digital Millennium Copyright Act protect users and platforms?” They do—but the threat of statutory damages makes that protection brittle. Fair use allows for many important re-uses of copyrighted works without permission. But fair use is heavily dependent on circumstances and can sometimes be difficult to predict when copyright is applied to new uses. Even well-intentioned and well-resourced users avoid experimenting at the boundaries of fair use when the cost of a court disagreeing is so high and unpredictable.

Many reforms are possible. Congress could limit statutory damages to a multiple of actual harm. That would bring U.S. copyright in line with other countries, and with other civil laws like patent and antitrust. Congress could also make statutory damages unavailable in cases where the defendant has a good-faith claim of fair use, which would encourage creative experimentation. Fixing statutory damages would make many of the other problems in copyright law more easily solvable, and create a fairer system for creators and users alike.

Mitch Stoltz

💾 The Worst Data Breaches of 2025—And What You Can Do | EFFector 38.1

1 week 1 day ago

So many data breaches happen throughout the year that it can be easy to lose track not just of whether, but how many times, your data was compromised. We're diving into these data breaches and more with our latest EFFector newsletter.

Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks U.S. Immigration and Customs Enforcement's (ICE) surveillance spending spree, explains how hackers are countering ICE's surveillance, and invites you to our free livestream covering online age verification mandates.

Prefer to listen in? In our audio companion, EFF Security and Privacy Activist Thorin Klosowski explains what you can do to protect yourself from data breaches and how companies can better protect their users. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.1 - 💾 THE WORST DATA BREACHES OF 2025—and what you can do

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from these data breaches and unlawful surveillance when you support EFF today!

Christian Romero

EFF Joins Internet Advocates Calling on the Iranian Government to Restore Full Internet Connectivity

1 week 1 day ago

Earlier this month, Iran’s internet connectivity faced one of its most severe disruptions in recent years, with a near-total cutoff from the global internet and major restrictions on mobile access.

EFF joined architects, operators, and stewards of the global internet infrastructure in calling upon authorities in Iran to immediately restore full and unfiltered internet access. We further call upon the international technical community to remain vigilant in monitoring connectivity and to support efforts that ensure the internet remains open, interoperable, and accessible to all.

This is not the first time people in Iran have been forced to experience this; the government has suppressed internet access in the country for many years. In the past three years in particular, the people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention, and in response the Iranian government rushed to control both the public narrative and organizing efforts by banning social media and sometimes cutting off internet access altogether.

EFF has long maintained that governments and occupying powers must not disrupt internet or telecommunication access. Cutting off telecommunications and internet access is a violation of basic human rights and a direct attack on people's ability to access information and communicate with one another. 

Our joint statement continues:

“We assert the following principles:

  1. Connectivity is a Fundamental Enabler of Human Rights: In the 21st century, the right to assemble, the right to speak, and the right to access information are inextricably linked to internet access.
  2. Protecting the Global Internet Commons: National-scale shutdowns fragment the global network, undermining the stability and trust required for the internet to function as a global commons.
  3. Transparency: The technical community condemns the use of BGP manipulation and infrastructure filtering to obscure events on the ground.”

Read the letter in full here

Paige Collings

EFF Condemns FBI Search of Washington Post Reporter’s Home

1 week 5 days ago

Government invasion of a reporter’s home, and seizure of journalistic materials, is exactly the kind of abuse of power the First Amendment is designed to prevent. It represents the most extreme form of press intimidation. 

Yet, that’s what happened on Wednesday morning to Washington Post reporter Hannah Natanson, when the FBI searched her Virginia home and took her phone, two laptops, and a Garmin watch. 

The Electronic Frontier Foundation has joined 30 other press freedom and civil liberties organizations in condemning the FBI’s actions against Natanson. The First Amendment exists precisely to prevent the government from using its powers to punish or deter reporting on matters of public interest—including coverage of leaked or sensitive information. Searches like this threaten not only journalists, but the public’s right to know what its government is doing.

In the statement published yesterday, we call on Congress: 

To exercise oversight of the DOJ by calling Attorney General Pam Bondi before Congress to answer questions about the FBI’s actions; 

To reintroduce and pass the PRESS Act, which would limit government surveillance of journalists, and its ability to compel journalists to reveal sources; 

To reform the 108-year-old Espionage Act so it can no longer be used to intimidate and attack journalists; 

And to pass a resolution confirming that the recording of law enforcement activity is protected by the First Amendment. 

We’re joined on this letter by Free Press Action, the American Civil Liberties Union, PEN America, the NewsGuild-CWA, the Society of Professional Journalists, the Committee to Protect Journalists, and many other press freedom and civil liberties groups.

Joe Mullin

EFF to California Appeals Court: First Amendment Protects Journalist from Tech Executive’s Meritless Lawsuit

1 week 5 days ago

EFF asked a California appeals court to uphold a lower court’s decision to strike a tech CEO’s lawsuit against a journalist, a suit that sought to silence reporting that the CEO, Maury Blackman, didn’t like.

The journalist, Jack Poulson, reported on Maury Blackman’s arrest for felony domestic violence after receiving a copy of the arrest report from a confidential source. Blackman didn’t like that. So, he sued Poulson—along with Substack, Amazon Web Services, and Poulson’s non-profit, Tech Inquiry—to try and force Poulson to take his articles down from the internet.

Fortunately, the trial court saw this case for what it was: a classic SLAPP, or a strategic lawsuit against public participation. The court dismissed the entire complaint under California’s anti-SLAPP statute, which provides a way for defendants to swiftly defeat baseless claims designed to chill their free speech.

The appeals court should affirm the trial court’s correct decision.  

Poulson’s reporting is just the kind of activity that the state’s anti-SLAPP law was designed to protect: truthful speech about a matter of public interest. The felony domestic violence arrest of the CEO of a controversial surveillance company with U.S. military contracts is undoubtedly a matter of public interest. As we explained to the court, “the public has a clear interest in knowing about the people their government is doing business with.”

Blackman’s claims are totally meritless, because they are barred by the First Amendment. The First Amendment protects Poulson’s right to publish and report on the incident report. Blackman argues that a court order sealing the arrest record overrides Poulson’s right to report the news—despite decades of Supreme Court and California Court of Appeal precedent to the contrary. The trial court correctly rejected this argument and found that the First Amendment defeats all of Blackman’s claims. As the trial court explained, “the First Amendment’s protections for the publication of truthful speech concerning matters of public interest vitiate Blackman’s merits showing.”

The court of appeals should reach the same conclusion.

Related Cases: Blackman v. Substack, et al.
Karen Gullo

Baton Rouge Acquires a Straight-Up Military Surveillance Drone

1 week 5 days ago

The Baton Rouge Police Department announced this week that it will begin using a drone designed by military equipment manufacturers Lockheed Martin and Edge Autonomy, making it one of the first local police departments in the United States to deploy an unmanned aerial vehicle (UAV) with a history of primary use in foreign war zones — a dangerous escalation in the militarization of local law enforcement.

This is a troubling development in an already long history of local law enforcement acquiring and utilizing military-grade surveillance equipment. It should be a cautionary tale that prods communities across the country to be proactive in ensuring that drones can only be acquired and used in ways that are well-documented, transparent, and subject to public feedback.

Baton Rouge bought the Stalker VXE30 from Edge Autonomy, which partners with Lockheed Martin and began operating under the brand Redwire this week. According to reporting from WBRZ ABC2 in Louisiana, the drone, training, and batteries cost about $1 million.

Baton Rouge Police Department officers stand with the Stalker VXE30 drone in a photo shared by the BRPD via Facebook.

All of the regular concerns surrounding drones apply to this new one in use by Baton Rouge:

  • Drones can access and view spaces that are otherwise off-limits to law enforcement, including backyards, decks, and other areas of personal property.
  • Footage captured by camera-enabled drones may be stored and shared in ways that go far beyond the initial flight.
  • Additional camera-based surveillance can be installed on the drone, including automated license plate readers and the retroactive application of biometric analysis, such as face recognition.

However, the use of a military-grade drone hypercharges these concerns. The Stalker VXE30’s surveillance capabilities extend for dozens of miles, and it can fly faster and longer than standard police drones already in use.

“It can be miles away, but we can still have a camera looking at your face, so we can use it for surveillance operations," BRPD Police Chief TJ Morse told reporters.

Drone models similar to the Stalker VXE30 have been used in military operations around the world and are currently being used by the U.S. Army and other branches for long-range reconnaissance. Typically, police departments deploy drone models similar to those commercially available from companies like DJI, which until recently was the subject of a proposed Federal Communications Commission (FCC) ban, or devices provided by police technology companies like Skydio, in partnership with Axon and Flock Safety.

Also troubling is the capacity to add equipment to these drones: so-called “payloads” that could include other types of surveillance equipment and even weapons.

The Baton Rouge community must put policies in place that restrict and provide oversight of any possible uses of this drone, as well as any potential additions law enforcement might make. 

EFF has filed a public records request to learn more about the conditions of this acquisition and gaps in oversight policies. We've been tracking the expansion of police drone surveillance for years, and this acquisition represents a dangerous new frontier. We'll continue investigating and supporting communities fighting back against the militarization of local police and mass surveillance. To learn more about the surveillance technologies being used in your city, please check out the Atlas of Surveillance.

Beryl Lipton

Congress Wants To Hand Your Parenting to Big Tech

1 week 5 days ago

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing today on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.”

That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control. 

Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.  

Kids Under 13 Are Already Banned From Social Media

One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the language of the bill’s sponsors, one might think that’s a big change, and that today’s rules let kids wander freely into social media sites. But that’s not the case.

Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA.

Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access.

Most Social Media Use By Younger Kids Is Family-Mediated 

If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help. 

A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.) 

Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.

A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform. 

The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together. 

KOSMA Forces Platforms To Override Families 

This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.

Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability on the theory that they should have known a user was under 13, platforms will require all users to prove their age, to ensure that they block anyone under 13.

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—it will be minors who are following their parents’ guidance, and the parents themselves. 

Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.
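
To illustrate how blunt this kind of automated compliance is likely to be, here is a deliberately crude sketch of a rule keyed to “objective circumstances.” It is entirely our invention, not any platform’s actual system, but it shows the structural problem: the signals a platform can see say nothing about whether a parent approved the use.

```python
from dataclasses import dataclass

# Phrases a platform might scan for; all hypothetical examples.
UNDER_13_HINTS = ("6th grade", "my mom said", "when i turn 13")

@dataclass
class Account:
    account_id: str
    recent_comments: list

def flag_for_age_review(account: Account) -> bool:
    """Crude 'fairly implied' check: flag any account whose comments
    contain phrases hinting a child may be using it. Note what the
    rule cannot see: whether a parent approved and supervised the use."""
    for comment in account.recent_comments:
        text = comment.lower()
        if any(hint in text for hint in UNDER_13_HINTS):
            return True  # next step: lock, verify ID, or terminate
    return False

shared_family_account = Account(
    "parent-account-42",
    ["Cool video, I'll show this to my 6th grade teacher!"],
)
print(flag_for_age_review(shared_family_account))  # True
```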

Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.

Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.

Your Family, Their Algorithms

KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t. 

These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems. 

What Families Lose 

This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.

These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and usually decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust.

What Can Be Done Instead

KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.

If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.

But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.

Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.

Joe Mullin

Report: ICE Using Palantir Tool That Feeds On Medicaid Data

1 week 6 days ago

EFF last summer asked a federal judge to block the federal government from using Medicaid data to identify and deport immigrants.  

We also warned about the danger of the Trump administration consolidating all of the government’s information into a single searchable, AI-driven interface with help from Palantir, a company that has a shaky-at-best record on privacy and human rights.

Now we have the first evidence that our concerns have become reality. 

“Palantir is working on a tool for Immigration and Customs Enforcement (ICE) that populates a map with potential deportation targets, brings up a dossier on each person, and provides a ‘confidence score’ on the person’s current address,” 404 Media reports today. “ICE is using it to find locations where lots of people it might detain could be based.”

The tool – dubbed Enhanced Leads Identification & Targeting for Enforcement (ELITE) – receives people’s addresses from the Department of Health and Human Services (which administers Medicaid) and other sources, 404 Media reports, based on court testimony in Oregon by law enforcement agents, among other sources.

This revelation comes as ICE – which has gone on a surveillance technology shopping spree – floods Minneapolis with agents, violently running roughshod over the civil rights of immigrants and U.S. citizens alike; President Trump has threatened to use the Insurrection Act of 1807 to deploy military troops against protestors there. Other localities are preparing for the possibility of similar surges. 

This kind of consolidation of government records creates enormous government power that can be abused. Different government agencies necessarily collect information to provide essential services or collect taxes, but the danger comes when the government begins pooling that data and using it for reasons unrelated to the purpose for which it was collected.

As EFF Executive Director Cindy Cohn wrote in a Mercury News op-ed last August, “While couched in the benign language of eliminating government ‘data silos,’ this plan runs roughshod over your privacy and security. It’s a throwback to the rightly mocked ‘Total Information Awareness’ plans of the early 2000s that were, at least publicly, stopped after massive outcry from the public and from key members of Congress. It’s time to cry out again.” 

In addition to the amicus brief we co-authored challenging ICE’s grab for Medicaid data, EFF has successfully sued over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, filed an amicus brief in a suit challenging ICE’s grab for taxpayer data, and sued the departments of State and Homeland Security to halt a mass surveillance program to monitor constitutionally protected speech by noncitizens lawfully present in the U.S. 

But litigation isn’t enough. People need to keep raising concerns via public discourse and Congress should act immediately to put brakes on this runaway train that threatens to crush the privacy and security of each and every person in America.  

Josh Richman

So, You’ve Hit an Age Gate. What Now?

2 weeks ago

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

EFF is against age gating and age verification mandates, and we’re working to overturn existing ones and prevent new ones. But mandates are already in effect, and every day many people are asked to verify their age across the web, despite prominent cases of sensitive data being leaked in the process.

At some point, you may have been faced with the decision yourself: should I continue to use this service if I have to verify my age? And if so, how can I do that with the least risk to my personal information? This is our guide to navigating those decisions, with information on what questions to ask about the age verification options you’re presented with, and answers to those questions for some of the most popular social media sites. Even though there’s no way to implement mandated age gates in a way that fully protects speech and privacy rights, our goal here is to help you minimize the infringement of your rights as you navigate this awful situation.

Follow the Data

Since we know that leaks happen despite the best efforts of software engineers, we generally recommend submitting the absolute least amount of data possible. Unfortunately, that’s not going to be possible for everyone. Even facial age estimation solutions where pictures of your face never leave your device, offering some protection against data leakage, are not a good option for all users: facial age estimation works less well for people of color, trans and nonbinary people, and people with disabilities. There are some systems that use fancy cryptography so that a digital ID saved to your device won’t tell the website anything more than whether you meet the age requirement, but access to that digital ID isn’t available to everyone or for all platforms. You may also not want to register for a digital ID and save it to your phone if you don’t want to risk all the information on it being exposed at the request of an over-zealous verifier, or if you simply don’t want to be part of a digital ID system.
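For the technically curious, here’s roughly how that “fancy cryptography” can work. The sketch below is a toy illustration of the selective-disclosure idea behind formats like SD-JWT and mobile driver’s licenses, not any specific deployed system: the ID issuer signs salted hashes of your attributes, and your wallet reveals only the over-18 flag and its salt, so a website can check the issuer’s signature without learning anything else. The attribute names, the use of Ed25519, and the Python `cryptography` package are all our own assumptions for the example.

```python
# Toy sketch of selective disclosure for an age check. NOT a real, deployed
# scheme; the attribute names and key choice are hypothetical.
import hashlib
import json
import secrets

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer (e.g., a DMV) builds and signs the credential ---
issuer_key = Ed25519PrivateKey.generate()
attributes = {"name": "Alex Example", "address": "123 Main St", "age_over_18": "true"}

# Hide each attribute behind a salted hash; sign only the list of hashes.
salts = {k: secrets.token_hex(16) for k in attributes}
digests = sorted(
    hashlib.sha256(f"{salts[k]}:{k}:{v}".encode()).hexdigest()
    for k, v in attributes.items()
)
signed_payload = json.dumps(digests).encode()
signature = issuer_key.sign(signed_payload)

# --- Holder (your wallet) discloses ONLY the over-18 flag ---
disclosure = {"key": "age_over_18", "value": "true", "salt": salts["age_over_18"]}

# --- Verifier (the website) checks the claim without seeing name or address ---
issuer_key.public_key().verify(signature, signed_payload)  # raises if forged
recomputed = hashlib.sha256(
    f"{disclosure['salt']}:{disclosure['key']}:{disclosure['value']}".encode()
).hexdigest()
assert recomputed in json.loads(signed_payload)
print("over-18 verified; name and address were never revealed")
```

Real deployments layer on much more (nonces to prevent replay, device binding, revocation), and none of this stops a verifier from demanding extra attributes, which is why who runs the system matters as much as the math.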

If you’re given the option of selecting a verification method and are deciding which to use, we recommend considering the following questions for each process allowed by each vendor:

    • Data: What info does each method require?
    • Access: Who can see the data during the course of the verification process?
    • Retention: Who will hold onto that data after the verification process, and for how long?
    • Audits: How sure are we that the stated claims will happen in practice? For example, are there external audits confirming that data is not accidentally leaked to another site along the way? Ideally these will be in-depth, security-focused audits by specialized auditors like NCC Group or Trail of Bits, instead of audits that merely certify adherence to standards. 
    • Visibility: Who will be aware that you’re attempting to verify your age, and will they know which platform you’re trying to verify for?

We attempt to provide answers to these questions below. To begin, there are two major factors to consider when answering these questions: the tools each platform uses, and the overall system those tools are part of.

In general, most platforms offer age estimation options like face scans as a first line of age assurance. These vary in intrusiveness, but their main problem is inaccuracy, particularly for marginalized users. Third-party age verification vendors Private ID and k-ID offer on-device facial age estimation, but another common vendor, Yoti, sends the image to their servers during age checks by some of the biggest platforms. This risks leaking the images themselves, and also the fact that you’re using that particular website, to the third party. 

Then there are the document-based verification services, which require you to submit a hard identifier like a government-issued ID. This method thus requires you to prove both your age and your identity. A platform can do this in-house through a designated dataflow, or by sending that data to a third party. We’ve already seen examples of how this can fail. For example, Discord routed users' ID data through its general customer service workflow so that a third-party vendor could perform manual review of verification appeals. No one involved ever deleted users' data, so when the system was breached, Discord had to apologize for the catastrophic disclosure of nearly 70,000 photos of users' ID documents. Overly long retention periods expose documents to the risk of breaches and historical data requests, and some document verifiers hold onto data far longer than necessary. This is the case with Incode, which provides ID verification for TikTok: Incode holds onto images forever by default, though TikTok should automatically start the deletion process on your behalf.

Some platforms offer alternatives, like proving that you own a credit card, or asking for your email to check if it appears in databases associated with adulthood (like home mortgage databases). These tend to involve less risk when it comes to the sensitivity of the data itself, especially since credit cards can be replaced, but in general still undermine anonymity and pseudonymity and pose a risk of tracking your online activity. We’d prefer to see more assurances across the board about how information is handled.

Each site offers users a menu of age assurance options to choose from. We’ve chosen to present these options in the rough order that we expect most people to prefer. Jump directly to a platform to learn more about its age checks:

Meta – Facebook, Instagram, WhatsApp, Messenger, Threads

Inferred Age

If Meta can guess your age, you may never even see an age verification screen. Meta, which runs Facebook, Threads, Instagram, Messenger, and WhatsApp, first tries to use information you’ve posted to guess your age, like looking at “Happy birthday!” messages. It’s a creepy reminder that they already have quite a lot of information about you.

If Meta cannot guess your age, or if Meta infers you're too young, it will next ask you to verify your age using either facial age estimation, or by uploading your photo ID. 

Face Scan

If you choose to use facial age estimation, you’ll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that “as soon as an age has been estimated, the facial image is immediately and permanently deleted.” Though it’s not as good as not having that data in the first place, Yoti’s security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti’s app and website are filled with trackers, so the fact that you’re verifying your age could be not only shared with Yoti, but leaked to third-party data brokers as well.

You may not want to use this option if you’re worried about third parties potentially being able to know you’re trying to verify your age with Meta. You also might not want to use this if you’re worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie against a background free of anything that could identify your location or embarrass you if the image leaks.

Upload ID

If Yoti’s age estimation decides your face looks too young, or if you opt out of facial age estimation, your next recourse is to send Meta a photo of your ID. Meta sends that photo to Yoti to verify the ID. Meta says it will hold onto that ID image for 30 days, then delete it. Meanwhile, Yoti claims it will delete the image immediately after verification. Of course, bugs and process oversights exist, such as accidentally replicating information in logs or support queues, but at least they have stated processes. Your ID contains sensitive information such as your full legal name and home address. Using this option not only runs the (hopefully small, but never nonexistent) risk of that data getting leaked through errors or hacking, but it also lets Meta see the information needed to tie your profile to your identity—which you may not want. If you don’t want Meta to know your name and where you live, or don’t want to rely on both Meta and Yoti to keep their deletion promises, this option may not be right for you.

Google – Gmail, YouTube

Inferred Age

If Google can guess your age, you may never even see an age verification screen. Your Google account is typically connected to your YouTube account, so if (like mine) your YouTube account is old enough to vote, you may not need to verify your Google account at all. Google first uses information it already knows to try to guess your age, like how long you’ve had the account and your YouTube viewing habits. It’s yet another creepy reminder of how much information these corporations have on you, but at least in this case they aren’t likely to ask for even more identifying data.

If Google cannot guess your age, or decides you're too young, Google will next ask you to verify your age. You’ll be given a variety of options for how to do so, with availability that will depend on your location and your age.

Google’s methods to assure your age include ID verification, facial age estimation, verification by proxy, and digital ID. To prove you’re over 18, you may be able to use facial age estimation, give Google your credit card information, or tell a third-party provider your email address.

Face Scan

If you choose to use facial age estimation, you’ll be sent to a website run by Private ID, a third-party verification service. The website will load Private ID’s verifier within the page—this means that your selfie will be checked without any images leaving your device. If the system decides you’re over 18, it will let Google know that, and only that. Of course, no technology is perfect—should Private ID be mandated to target you specifically, there’s nothing to stop it from sending down code that does in fact upload your image, and you probably won’t notice. But unless your threat model includes being specifically targeted by a state actor or Private ID, that’s unlikely to be something you need to worry about. For most people, no one else will see your image during this process. Private ID will, however, be told that your device is trying to verify your age with Google and Google will still find out if Private ID thinks that you’re under 18.

If Private ID’s age estimation decides your face looks too young, you may next be able to decide if you’d rather let Google verify your age by giving it your credit card information, photo ID, or digital ID, or by letting Google send your email address to a third-party verifier.

Email Usage

If you choose to provide your email address, Google sends it on to a company called VerifyMy. VerifyMy will use your email address to see if you’ve done things like get a mortgage or pay for utilities using that email address. If you use Gmail as your email provider, this may be a privacy-protective option with respect to Google, since Google will already know the email address associated with the account. But it does tell VerifyMy and its third-party partners that the person behind this email address is looking to verify their age, which you may not want them to know. VerifyMy uses “proprietary algorithms and external data sources” that involve sending your email address to “trusted third parties, such as data aggregators.” It claims to “ensure that such third parties are contractually bound to meet these requirements,” but you’ll have to trust it on that one—we haven’t seen any mention of who those parties are, so you’ll have no way to check up on their practices and security. On the bright side, VerifyMy and its partners do claim to delete your information as soon as the check is completed.

Credit Card Verification

If you choose to let Google use your credit card information, you’ll be asked to set up a Google Payments account. Note that debit cards won’t be accepted, since it’s much easier for many debit cards to be issued to people under 18. Google will then charge a small amount to the card, and refund it once it goes through. If you choose this method, you’ll have to tell Google your credit card info, but the fact that it’s done through Google Payments (their regular card-processing system) means that at least your credit card information won’t be sitting around in some unsecured system. Even if your credit card information happens to accidentally be leaked, this is a relatively low-risk option, since credit cards come with solid fraud protection. If your credit card info gets leaked, you should easily be able to dispute fraudulent charges and replace the card.

Digital ID

If the option is available in your region, you may be able to use a digital ID to verify your age with Google. In some implementations, a digital ID can reveal only your age information and nothing else; if you’re given that choice, it can be a good privacy-preserving option. Depending on the implementation, though, there’s a chance that the verification step will “phone home” to the ID provider (usually a government) to let them know a service asked for your age. It’s a complicated and varied topic that you can learn more about by visiting EFF’s page on digital identity.

Upload ID

Should none of these options work for you, your final recourse is to send Google a photo of your ID. Here, you’ll be asked to take a photo of an acceptable ID and send it to Google. Though the help page only states that your ID “will be stored securely,” the verification process page says ID “will be deleted after your date of birth is successfully verified.” Acceptable IDs vary by country, but are generally government-issued photo IDs. We like that it’s deleted immediately, though we have questions about what Google means when it says your ID will be used to “improve [its] verification services for Google products and protect against fraud and abuse.” No system is perfect, and we can only hope that Google schedules outside audits regularly.

TikTok

Inferred Age

If TikTok can guess your age, you may never even see an age verification notification. TikTok first tries to use information you’ve posted to estimate your age, looking through your videos and photos to analyze your face and listen to your voice. TikTok treats the act of uploading a video as your consent for it to guess how old you look and sound.

If TikTok cannot guess your age, or decides you're too young, it will automatically revoke your access based on age—including either restricting features or deleting your account. To get your access and account back, you’ll have a limited amount of time to appeal and verify your age. As soon as you see the notification that your account is restricted, you’ll want to act fast, because in some places you’ll have as little as 23 days before the deadline passes.

When you get that notification, you’re given various options to verify your age based on your location.

Face Scan

If you’re given the option to use facial age estimation, you’ll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that “as soon as an age has been estimated, the facial image is immediately and permanently deleted.” Though it’s not as good as not having that data in the first place, Yoti’s security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti’s app and website are filled with trackers, so the fact that you’re verifying your age could be leaked not only to Yoti, but to third-party data brokers as well.

You may not want to use this option if you’re worried about third parties potentially being able to know you’re trying to verify your age with TikTok. You also might not want to use this if you’re worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID or your credit card information, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie against a background free of anything that could identify your location or embarrass you if the image leaks.

Credit Card Verification

If you have a credit card in your name, TikTok will accept that as proof that you’re over 18. Note that debit cards won’t be accepted, since it’s much easier for many debit cards to be issued to people under 18. TikTok will charge a small amount to the credit card, and refund it once it goes through. It’s unclear if this goes through their regular payment process, or if your credit card information will be sent through and stored in a separate, less secure system. Luckily, these days credit cards come with solid fraud protection, so if your credit card gets leaked, you should easily be able to dispute fraudulent charges and replace the card. That said, we’d rather TikTok provide assurances that the information will be processed securely.

Credit Card Verification of a Parent or Guardian

Sometimes, if you’re between 13 and 17, you’ll be given the option to let your parent or guardian confirm your age. You’ll tell TikTok their email address, and TikTok will send your parent or guardian an email asking them (a) to confirm your date of birth, and (b) to verify their own age by proving that they own a valid credit card. This option doesn’t always seem to be offered, and in the one case we could find, it’s possible that TikTok never followed up with the parent. So it’s unclear how or if TikTok verifies that the adult whose email you provide is your parent or guardian. If you want to use credit card verification but you’re not old enough to have a credit card, and you’re ok with letting an adult know you use TikTok, this option may be reasonable to try.

Photo with a Random Adult?

Bizarrely, if you’re between 13 and 17, TikTok claims to offer the option to take a photo with literally any random adult to confirm your age. Its help page says that any trusted adult over 25 can be chosen, as long as they’re holding a piece of paper with the code on it that TikTok provides. It also mentions that a third-party provider is used here, but doesn’t say which one. We haven’t found any evidence of this verification method being offered. Please do let us know if you’ve used this method to verify your age on TikTok!

Photo ID and Face Comparison

If you aren’t offered or have failed the other options, you’ll have to verify your age by submitting a copy of your ID and a matching photo of your face. You’ll be sent to Incode, a third-party verification service. In a disappointing failure to meet the industry standard, Incode itself doesn’t automatically delete the data you give it once the process is complete, but TikTok does claim to “start the process to delete the information you submitted,” which should include telling Incode to delete your data once the process is done. If you want to be sure, you can ask Incode to delete that data yourself. Incode tells TikTok that you met the age threshold without providing your exact date of birth, but TikTok will nonetheless ask for your date of birth even after your age has been verified.

TikTok itself might not see your actual ID depending on its implementation choices, but Incode will. Your ID contains sensitive information such as your full legal name and home address. Using this option runs the (hopefully small, but never nonexistent) risk of that data being accidentally leaked through errors or hacking. If you don’t want TikTok or Incode to know your name, what you look like, and where you live—or if you don't want to rely on both TikTok and Incode to keep their deletion promises—then this option may not be right for you.

Everywhere Else

We’ve covered the major providers here, but age verification is unfortunately being required of many other services that you might use as well. While the providers and processes may vary, the same general principles will apply. If you’re trying to choose what information to provide to continue to use a service, consider the “follow the data” questions mentioned above, and try to find out how the company will store and process the data you give it. The less sensitive the information, the fewer people who have access to it, and the more quickly it’s deleted, the better. You may even come to recognize popular names in the age verification industry: Spotify and OnlyFans use Yoti (just like Meta and TikTok), Quora and Discord use k-ID, and so on.

Unfortunately, it should be clear by now that none of the age verification options are perfect in terms of protecting information, providing access to everyone, and safely handling sensitive data. That’s just one of the reasons that EFF is against age-gating mandates, and is working to stop and overturn them across the United States and around the world.



Erica Portnoy

How Hackers Are Fighting Back Against ICE

2 weeks 6 days ago

Read more about how ICE has spent hundreds of millions of dollars on surveillance technology to spy on anyone—and potentially everyone—in the United States, and how to follow the Homeland Security spending trail.

ICE has been invading U.S. cities, targeting, surveilling, harassing, assaulting, detaining, and torturing people who are undocumented immigrants. They also have targeted people with work permits, asylum seekers, permanent residents (people holding “green cards”), naturalized citizens, and even citizens by birth. ICE has spent hundreds of millions of dollars on surveillance technology to spy on anyone—and potentially everyone—in the United States. It can be hard to imagine how to defend oneself against such an overwhelming force. But a few enterprising hackers have started projects to do counter surveillance against ICE, and hopefully protect their communities through clever use of technology. 

Let’s start with Flock, the company behind a number of automated license plate reader (ALPR) and other camera technologies. You might be surprised at how many Flock cameras there are in your community. Many large and small municipalities around the country have signed deals with Flock for license plate readers that track the movement of all cars in their city. Even though these deals are signed by local police departments, oftentimes ICE also gains access.

Because of their ubiquity, people are interested in finding out where Flock cameras are in their community, and how many there are. One project that can help with this is the OUI-SPY, a small piece of open source hardware. The OUI-SPY runs on a cheap Arduino-compatible chip called an ESP-32. There are multiple programs available for loading on the chip, such as “Flock You,” which detects Flock cameras, and “Sky-Spy,” which detects overhead drones. There’s also “BLE Detect,” which detects various Bluetooth signals, including ones from Axon devices, Meta’s Ray-Bans that secretly record you, and more. It also has a mode commonly known as “fox hunting” for tracking down a specific device. Activists and researchers can use this tool to map out different technologies and quantify the spread of surveillance.

There’s also the open source Wigle app, which is primarily designed for mapping out Wi-Fi networks but also has the ability to sound an audio alert when a specific Wi-Fi or Bluetooth identifier is detected. This means you can set it up to get a notification when it detects products from Flock, Axon, or other nasties in your vicinity.
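To make the identifier-matching idea concrete, here’s a minimal sketch (ours, not Wigle’s or OUI-SPY’s actual code) of how such a detector works: scan for Bluetooth Low Energy advertisements and flag any device whose MAC address starts with a watched manufacturer prefix (an OUI). It assumes Python with the cross-platform `bleak` library, and the prefixes below are placeholders; a real watchlist would come from a community-maintained source like the projects above.

```python
# Minimal BLE watchlist scanner: alert when a watched vendor's device is near.
# A sketch only; requires `pip install bleak`. The OUI prefixes here are
# PLACEHOLDERS, not real vendor assignments.
import asyncio

from bleak import BleakScanner

# The first three octets of a MAC address (the OUI) identify the manufacturer.
WATCHED_OUIS = {
    "AA:BB:CC",  # placeholder: hypothetical camera vendor
    "11:22:33",  # placeholder: hypothetical body-cam vendor
}

def on_advertisement(device, advertisement_data):
    oui = device.address.upper()[:8]  # "AA:BB:CC:DD:EE:FF" -> "AA:BB:CC"
    if oui in WATCHED_OUIS:
        # Signal strength (RSSI) rises as you get closer: basic "fox hunting."
        print(f"ALERT: {device.address} ({device.name}) rssi={advertisement_data.rssi}")

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(60)  # listen for one minute
    await scanner.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

One caveat: many consumer devices randomize their Bluetooth addresses, so simple OUI matching only catches hardware that broadcasts a stable, vendor-assigned address; tools in this space often also match advertised device names and service data.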

One enterprising YouTuber, Benn Jordan, figured out a way to fool Flock cameras into not recording his license plate simply by painting some minor visual noise onto it. The noise is innocuous enough that any human can still read the plate, but at the time it completely prevented Flock devices from recognizing it as a license plate. Some states have outlawed obscuring license plates, so taking such action is not recommended.

Jordan later went on to discover hundreds of misconfigured Flock cameras that were exposing their administrator interface without a password on the public internet. This would allow anyone with an internet connection to view a live surveillance feed, download 30 days of video, view logs, and more. The cameras pointed at parks, public trails, busy intersections, and even a playground. This was a massive breach of public trust and a huge mistake for a company that claims to be working for public safety.

Other hackers have taken on the task of open-source intelligence and community reporting. Interesting examples include deflock.me and alpr.watch, crowdsourced maps of ALPR cameras. Much like the OUI-SPY project, these allow activists to map out and expose Flock surveillance cameras in their community.

There have also been several ICE reporting apps released, including apps to report ICE sightings in your area such as Stop ICE Alerts, ICEOUT.org, and ICEBlock. ICEBlock was delisted by Apple at the request of Attorney General Pam Bondi, a delisting we are suing over. There is also Eyes Up, an app to securely record and archive ICE raids, which was taken down by Apple earlier this year.

Another interesting project documenting ICE and creating a trove of open-source intelligence is the ICE List Wiki, which contains info on companies that have contracts with ICE, incidents and encounters with ICE, and vehicles ICE uses.

People without programming knowledge can also get involved. In Chicago, people used whistles to warn their neighbors that ICE was present or in the area. Many people 3D-printed whistles along with instructional booklets to hand out to their communities, allowing a wider distribution of whistles and consequently earlier warnings for their neighbors. 

Many hackers have started hosting digital security trainings for their communities or building websites with security advice, including how to remove your data from the watchful eyes of the surveillance industry. To reach a broader community, trainers have even started hosting trainings inside video games such as Fortnite, covering how to defend communities and what to do in an ICE raid.

There is also EFF’s own Rayhunter project for detecting cell-site simulators, about which we have written extensively. Rayhunter runs on a cheap mobile hotspot and doesn’t require deep technical knowledge to use.

It’s important to remember that we are not powerless. Even in the face of a domestic law enforcement presence with massive surveillance capabilities and military-esque technologies, there are still ways to engage in surveillance self-defense. We cannot give in to nihilism and fear. We must continue to find small ways to protect ourselves and our communities, and, when we can, fight back.

EFF is not affiliated with any of these projects (other than Rayhunter) and does not endorse them. We don’t make any statements about the legality of using any of these projects. Please consult with an attorney to determine what risks there may be. 


Related Cases: EFF v. DOJ, DHS (ICE tracking apps)
Cooper Quintin

ICE Is Going on a Surveillance Shopping Spree

3 weeks ago

Read more about how enterprising hackers have started projects to do counter surveillance against ICE, and learn how to follow the Homeland Security spending trail.

U.S. Immigration and Customs Enforcement (ICE) has a new budget under the current administration, and they are going on a surveillance tech shopping spree. At $28.7 billion for the year 2025 (nearly triple their 2024 budget), with at least another $56.25 billion over the next three years, ICE's budget would be the envy of many national militaries around the world. Indeed, this budget would make ICE the 14th best-funded military in the world, right between Ukraine and Israel.

There are many different agencies under the U.S. Department of Homeland Security (DHS) that deal with immigration, as well as non-immigration-related agencies such as the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Emergency Management Agency (FEMA). ICE is specifically the enforcement arm of the U.S. immigration apparatus. Their stated mission is to “[p]rotect America through criminal investigations and enforcing immigration laws to preserve national security and public safety.”

Of course, ICE doesn’t just end up targeting, surveilling, harassing, assaulting, detaining, and torturing people who are undocumented immigrants. They have targeted people on work permits, asylum seekers, permanent residents (people holding “green cards”), naturalized citizens, and even citizens by birth. 

While the NSA and FBI might be the first agencies that come to mind when thinking about surveillance in the U.S., ICE should not be discounted. ICE has always engaged in surveillance and intelligence-gathering as part of their mission. A 2022 report by Georgetown Law’s Center for Privacy and Technology found the following:

  • ICE had scanned the driver’s license photos of 1 in 3 adults.
  • ICE had access to the driver’s license data of 3 in 4 adults.
  • ICE was tracking the movements of drivers in cities home to 3 in 4 adults.
  • ICE could locate 3 in 4 adults through their utility records.
  • ICE built its surveillance dragnet by tapping data from private companies and state and local bureaucracies.
  • ICE spent approximately $2.8 billion between 2008 and 2021 on new surveillance, data collection and data-sharing programs. 

With a budget for 2025 that is 10 times the size of the agency’s total surveillance spending over the last 13 years, ICE is going on a shopping spree, creating one of the largest, most comprehensive domestic surveillance machines in history. 

How We Got Here

The entire surveillance industry has been allowed to grow and flourish under both Democratic and Republican regimes. For example, President Obama dramatically expanded ICE from its more limited origins, while at the same time narrowing its focus to undocumented people accused of crimes. Under the first and second Trump administrations, ICE ramped up its operations significantly, increasing raids in major cities far from the southern border and casting a much wider net on potential targets. ICE has most recently expanded its partnerships with sheriffs across the U.S., and deported more than 1.5 million people cumulatively under the Trump administrations (600,000 of those were just during the first year of Trump’s second term according to DHS statistics), not including the 1.6 million people DHS claims have “self-deported.” More horrifying is that in just the last year of the current administration, 4,250 people detained by ICE have gone missing, and 31 have died in custody or while being detained. In contrast, 24 people died in ICE custody during the entirety of the Biden administration.

ICE also has openly stated that they plan to spy on the American public, looking for any signs of left-wing dissent against their domestic military-like presence. Acting ICE Director Todd Lyons said in a recent interview that his agency “was dedicated to the mission of going after” Antifa and left-wing gun clubs. 

On a long enough timeline, any surveillance tool you build will eventually be used by people you don’t like for reasons that you disagree with. A surveillance-industrial complex and a democratic society are fundamentally incompatible, regardless of your political party. 

EFF recently published a guide to using government databases to dig up homeland security spending and compiled our own dataset of companies selling tech to DHS components. In 2025, ICE entered new contracts with several private companies for location surveillance, social media surveillance, face surveillance, spyware, and phone surveillance. Let’s dig into each.

Phone Surveillance Tools 

One common surveillance tactic of immigration officials is to get physical access to a person’s phone, either while the person is detained at a border crossing, or while they are under arrest. ICE renewed an $11 million contract with a company called Cellebrite, which helps ICE unlock phones and then can take a complete image of all the data on the phone, including apps, location history, photos, notes, call records, text messages, and even Signal and WhatsApp messages. ICE also signed a $3 million contract with Cellebrite’s main competitor Magnet Forensics, makers of the Graykey device for unlocking phones. DHS has had contracts with Cellebrite since 2008, but the number of phones they search has risen dramatically each year, reaching a new high of 14,899 devices searched by ICE’s sister agency U.S. Customs and Border Protection (CBP) between April and June of 2025. 

If ICE can’t get physical access to your phone, that won’t stop them from trying to gain access to your data. They have also resumed a $2 million contract with the spyware manufacturer Paragon. Paragon makes the Graphite spyware, which made headlines in 2025 after it was found on the phones of several dozen members of Italian civil society. Graphite is able to harvest messages from multiple encrypted chat apps, such as Signal and WhatsApp, without the user ever knowing.

Our concern with ICE buying this software is the likelihood that it will be used against undocumented people and immigrants who are here legally, as well as U.S. citizens who have spoken up against ICE or who work with immigrant communities. Malware such as Graphite can be used to read encrypted messages as they are sent. Other forms of spyware can also download files, photos, and location history, record phone calls, and even discreetly turn on your microphone to record you.

How to Protect Yourself 

The most effective way to protect yourself from smartphone surveillance would be to not have a phone. But that’s not realistic advice in modern society. Fortunately, for most people there are other ways you can make it harder for ICE to spy on your digital life. 

The first and easiest step is to keep your phone up to date. Installing security updates makes it harder to use malware against you and makes it less likely that a Cellebrite device can break into your phone. Likewise, both iPhone (Lockdown Mode) and Android (Advanced Protection) offer special modes that lock your phone down and can help protect against some malware.

Having your phone’s software up to date and locked with a strong alphanumeric password will offer some protection against Cellebrite, depending on your model of phone. However, the strongest protection is simply to keep your phone turned off, which puts it in “before first unlock” mode, a state that has typically been harder for law enforcement to bypass. This is good to do if you are at a protest and expect to be arrested, if you are crossing a border, or if you are expecting to encounter ICE. Keeping your phone on airplane mode should be enough to protect against cell-site simulators, but turning your phone off will offer extra protection against both cell-site simulators and Cellebrite devices. If you aren’t able to turn your phone off, it’s a good idea to at least turn off face/fingerprint unlock to make it harder for police to force you to unlock your phone. While EFF continues to fight to strengthen our legal protections against compelling people to decrypt their devices, there is currently less protection against compelled face and fingerprint unlocking than there is against compelled password disclosure.

Internet Surveillance 

ICE has also spent $5 million to acquire at least two location and social media surveillance tools, Webloc and Tangles, from a company called Pen Link, an established player in the open source intelligence space. Webloc gathers the locations of millions of phones by pulling data from mobile data brokers and linking it together with other information about users. Tangles is a social media surveillance tool that combines web scraping with access to social media application programming interfaces. These tools are able to build a dossier on anyone who has a public social media account. Tangles is able to link together a person’s posting history, posts and comments containing keywords, location history, tags, social graph, and photos with those of their friends and family. Pen Link then sells this information to law enforcement, allowing law enforcement to avoid the need for a warrant. This means ICE can look up the historic and current locations of many people all across the U.S. without ever having to get a warrant.

ICE also has established contracts with other social media scanning and AI analysis companies, such as a $4.2 million contract with a company called Fivecast for the social media surveillance and AI analysis tool ONYX. According to Fivecast, ONYX can conduct “automated, continuous and targeted collection of multimedia data” from all major “news streams, search engines, social media, marketplaces, the dark web, etc.” ONYX can build what it calls “digital footprints” from biographical data and curated datasets spanning numerous platforms, “track shifts in sentiment and emotion,” and identify the level of risk associated with an individual.

Another contract is with ShadowDragon for their product Social Net, which is able to monitor publicly available data from over 200 websites. In an acquisition document from 2022, ICE confirmed that ShadowDragon allowed the agency to search “100+ social networking sites,” noting that “[p]ersistent access to Facebook and Twitter provided by ShadowDragon SocialNet is of the utmost importance as they are the most prominent social media platforms.”

ICE has also indicated that they intend to spend between $20 million and $50 million on building and staffing a 24/7 social media monitoring office, with at least 30 full-time agents combing every major social media website for leads that could generate enforcement raids.

How to Protect Yourself

For U.S. citizens, making your account private on social media is a good place to start. You might also consider having accounts under a pseudonym, or deleting your social media accounts altogether. For more information, check out our guide to protecting yourself on social media. Unfortunately, people immigrating to the U.S. might be subject to greater scrutiny, including mandatory social media checks, and should consult with an immigration attorney before taking any action. For people traveling to the U.S., new rules will soon likely require them to reveal five years of social media history and 10 years of past email addresses to immigration officials. 

Street-Level Surveillance 

But it’s not just your digital habits ICE wants to surveil; they also want to spy on you in the physical world. ICE has contracts with multiple automated license plate reader (ALPR) companies and is able to follow the driving habits of a large percentage of Americans. ICE uses this data to track down specific people anywhere in the country. ICE has a $6 million contract through a Thomson Reuters subsidiary to access ALPR data from Motorola Solutions. ICE has also persuaded local law enforcement officers to run searches on their behalf through Flock Safety's massive network of ALPR data. CBP, including Border Patrol, also operates a network of covert ALPR systems in many areas. 

ICE has also invested in biometric surveillance tools, such as the face recognition software Mobile Fortify, which is used to scan the faces of people agents stop to determine if they are here legally. Mobile Fortify checks the pictures it takes against a database of 200 million photos for a match (the source of the photos is unknown). Additionally, ICE has a $10 million contract with Clearview AI for face recognition, and has contracted with the iris-scanning company BI2 Technologies for even more invasive biometric surveillance. ICE agents have also been spotted wearing Meta’s Ray-Ban video-recording sunglasses.

ICE has acquired trucks equipped with cell-site simulators (AKA Stingrays) from a company called TechOps Specialty Vehicles (the cell-site simulators themselves were likely manufactured by another company). This is not the first time ICE has bought this technology: ICE deployed cell-site simulators at least 466 times between 2017 and 2019, according to documents obtained by the American Civil Liberties Union, and more than 1,885 times between 2013 and 2017, according to documents obtained by BuzzFeed News. Cell-site simulators can be used to track down a specific person in real time, with more granularity than a phone company or tools like Webloc can provide, though Webloc has the distinct advantage of being usable without a warrant and without requiring agents to be in the vicinity of the person being tracked.

How to Protect Yourself

Taking public transit or bicycling is a great way to keep yourself out of ALPR databases, but an even better way is to go to your local city council meetings and demand that the city cancel its contracts with ALPR companies, as people have done in Flagstaff, Arizona; Eugene, Oregon; and Denver, Colorado, among others.

If you are at a protest, putting your phone on airplane mode could help protect you from cell-site simulators and from apps on your phone disclosing your location, but might leave you vulnerable to advanced targeted attacks. For more advanced protection, turning your phone completely off protects against all radio-based attacks, and also makes it harder for tools like Cellebrite to break into your phone, as discussed above. But each individual will need to weigh their need for security from advanced radio-based attacks against their need to document potential abuses through photo or video. For more information about protecting yourself at a protest, head over to SSD.

There is nothing you can do to change your face, which is why we need more stringent privacy laws such as Illinois’ Biometric Information Privacy Act.

Tying All the Data Together 

Last but not least, ICE uses tools to combine and search all this data along with the data on Americans they have acquired from private companies, the IRS, TSA, and other government databases. 

To search all this data, ICE uses ImmigrationOS, a system that came from a $30 million contract with Palantir. What Palantir does is hard to explain, even for people who work there, but essentially they are plumbers: Palantir puts all the data ICE has acquired in one place so it’s easy to search through. It links records from different databases, like IRS data, immigration records, and private databases, and enables ICE to view all of this data about a specific person at once.

The true civil liberties nightmare of Palantir is that they enable governments to link data that should never have been linked. There are good civil liberties reasons why IRS data was never linked with immigration data or with social media data, but Palantir breaks those firewalls. Palantir has historically branded themselves as a progressive, human-rights-centric company, but their recent actions have given them away as just another tech company enabling surveillance nightmares.

Threat Modeling When ICE Is Your Adversary 

Understanding ICE’s capabilities and limits, and how to threat model, helps you and your community fight back, remain powerful, and protect yourselves.

One of the most important things you can do is to not spread rumors and misinformation. Rumors like “ICE has malware so now everyone's phones are compromised” or “Palantir knows what you are doing all the time” or “Signal is broken” don’t help your community. It’s more useful to spread facts, ways to protect yourself, and ways to fight back. For information about how to create a security plan for yourself or your community, and other tips to protect yourself, read our Surveillance Self-Defense guides.

How EFF Is Fighting Back

One way to fight back against ICE is in the courts. EFF currently has a lawsuit against ICE over their pressure on Apple and Google to take down ICE-spotting apps like ICEBlock. We also represent multiple labor unions suing ICE over their social media surveillance practices.

We have also demanded that the San Francisco Police Department stop illegally sharing data with ICE, and issued a statement condemning the collaboration between ICE and the malware provider Paragon. We also continue to maintain our Rayhunter project for detecting cell-site simulators.

Other civil liberties organizations are also suing ICE. The ACLU has sued ICE over a subpoena to Meta that seeks to identify the owner of an account providing advice to protestors, and another coalition of groups has thus far successfully sued the IRS to stop it from sharing taxpayer data with ICE.

We need to have a hard look at the surveillance industry. It is a key enabler of vast and untold violations of human rights and civil liberties, and it continues to be used by aspiring autocrats to threaten our very democracy. As long as it exists, the surveillance industry, and the data it generates, will be an irresistible tool for anti-democratic forces.


Related Cases: EFF v. DOJ, DHS (ICE tracking apps)
Cooper Quintin

EFFecting Change: The Human Cost of Online Age Verification

3 weeks 1 day ago

Age verification mandates are spreading fast, and they’re ushering in a new age of online surveillance, censorship, and exclusion for everyone—not just young people. Age-gating laws generally require websites and apps to collect sensitive data from every user, often through invasive tools like ID checks, biometric scans, or other dubious “estimation” methods, before granting them access to certain content or services. Lawmakers tout these laws as the silver-bullet solution to “kids’ online safety,” but in reality, age-verification mandates wall off large swaths of the web, build sweeping new surveillance infrastructure, increase the risk of data breaches and real-life privacy harms, and threaten the anonymity that has long allowed people to seek support, explore new ideas, and organize and build community online.

Join EFF's Rindala Alajaji and Alexis Hancock, along with Hana Memon from Gen-Z for Change and Cynthia Conti-Cook from the Collaborative Research Center for Resilience, for a conversation about what we stand to lose as more and more governments push to age-gate the web. We’ll break down how these laws work, who they exclude, and how these mandates threaten privacy and free expression for people of all ages. The conversation will be followed by a live Q&A.

EFFecting Change Livestream Series:
The Human Cost of Online Age Verification
Thursday, January 15th
12:00 PM - 1:00 PM Pacific
This event is LIVE and FREE!




Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague that might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online. 

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

Melissa Srago