Yes, You Have the Right to Film ICE


Across the United States, Immigration and Customs Enforcement (ICE) has already begun increasing enforcement operations, including highly publicized raids. As immigrant communities, families, allies, and activists think about what can be done to shift policy and protect people, one thing is certain: similar to filming the police as they operate, you have the right to film ICE, as long as you are not obstructing official duties.

Filming ICE agents making an arrest or amassing in your town helps promote transparency and accountability for a system that often relies on intimidation and secrecy and obscures abuse and law-breaking.

While it is crucial for people to aid in transparency and accountability, there are considerations and precautions you should take. For an in-depth guide by organizations on the frontlines of informing people who wish to record ICE’s interactions with the public, review these handy resources from the hard-working folks at WITNESS and NYCLU.

At EFF, here are our general guidelines when it comes to filming law enforcement, including ICE: 

What to Know When Recording Law Enforcement

  • You have the right to record law enforcement officers exercising their official duties in public.
  • Stay calm and courteous.
  • Do not interfere with law enforcement. If you are a bystander, stand at a safe distance from the scene that you are recording.
  • You may take photos or record video and/or audio.
  • Law enforcement cannot order you to move because you are recording, but they may order you to move for public safety reasons even if you are recording.
  • Law enforcement may not search your cell phone or other device without a warrant based on probable cause from a judge, even if you are under arrest. Thus, you may refuse a request from an officer to review or delete what you recorded. You also may refuse to unlock your phone or provide your passcode.
  • Even when you are reasonably exercising your First Amendment rights, law enforcement officers may illegally retaliate against you in a number of ways, including arrest, destruction of your device, and bodily harm. They may also try to retaliate by harming the person being arrested. We urge you to remain alert and mindful of this possibility.
  • Consider the sensitive nature of recording in the context of an ICE arrest. The person being arrested or their loved ones may be concerned about exposing their immigration status, so think about obtaining consent or blurring out faces in any version you publish to focus on ICE’s conduct (while still retaining the original video).
Your First Amendment Right to Record Law Enforcement Officers Exercising Their Official Duties in Public

You have a First Amendment right to record law enforcement, which federal courts and the Justice Department have recognized and affirmed. Although the Supreme Court has not squarely ruled on the issue, there is a long line of First Amendment case law from the high court that supports the right to record law enforcement. And federal appellate courts in the First, Third, Fourth, Fifth, Seventh, Eighth, Ninth, Tenth, and Eleventh Circuits have directly upheld this right. EFF has advocated for this right in many amicus briefs.

Federal appellate courts typically frame the right to record law enforcement as the right to record officers exercising their official duties in public. This right extends to private places, too, where the recorder has a legal right to be, such as in their own home. However, if the law enforcement officer is off-duty or is in a private space that you don’t have a right to be in, your right to record the officer may be limited. 

Special Considerations for Recording Audio

The right to record law enforcement unequivocally includes the right to take pictures and record video. There is an added legal wrinkle when recording audio—whether with or without video. Some law enforcement officers have argued that recording audio without their consent violates wiretap laws. Courts have generally rejected this argument. The Seventh Circuit, for example, held that the Illinois wiretap statute violated the First Amendment as applied to audio recording on-duty police.

There are two kinds of wiretap laws: those that require “all parties” to a conversation to consent to audio recording (12 states), and those that only require “one party” to consent (38 states, the District of Columbia, and the federal statute). Thus, if you’re in a one-party consent state, and you’re involved in an incident with law enforcement (that is, you’re a party to the conversation) and you want to record audio of that interaction, you are the one party consenting to the recording and you don’t also need the law enforcement officer’s consent. If you’re in an all-party consent state, and your cell phone or recording device is in plain view, your open audio recording puts the officer on notice and thus their consent might be implied.

Additionally, wiretap laws in both all-party consent states and one-party consent states typically only prohibit audio recording of private conversations—that is, when the parties to the conversation have a reasonable expectation of privacy. Law enforcement officers exercising their official duties, particularly in public, do not have a reasonable expectation of privacy. Neither do civilians in public places who speak to law enforcement in a manner audible to passersby. Thus, if you’re a bystander, you may legally audio record an officer’s interaction with another person, regardless of whether you’re in a state with an all-party or one-party consent wiretap statute. However, you should take into consideration that ICE arrests may expose the immigration status of the person being arrested or their loved ones. As WITNESS puts it: “[I]t’s important to keep in mind the privacy and dignity of the person being targeted by law enforcement. They may not want to be recorded or have the video shared publicly. When possible, make eye contact or communicate with the person being detained to let them know that you are there to observe and document the cops’ behavior. Always respect their wishes if they ask you to stop filming.” You may also want to consider blurring faces to focus on ICE’s conduct if you publish the video online (while still retaining the original version).

Moreover, it is important to understand whether you may secretly record law enforcement (with photos, video, or audio), given that officers may retaliate against individuals who openly record them. At least one federal appellate court, the First Circuit, has affirmed the First Amendment right to secretly audio record law enforcement performing their official duties in public. On the other hand, the Ninth Circuit recently upheld Oregon’s law that generally bans secret recordings of in-person conversations without all participants’ consent, and only allows recordings of conversations where police officers are participants if “[t]he recording is made openly and in plain view of the participants in the conversation.” Unless you are within the jurisdiction of the First Circuit (Maine, Massachusetts, New Hampshire, Puerto Rico and Rhode Island), it’s probably best to have your recording device in plain view of police officers.

Do Not Interfere With Law Enforcement

While the weight of legal authority provides that individuals have a First Amendment right to record law enforcement, courts have also stated one important caveat: you may not interfere with officers doing their jobs.

The Seventh Circuit, for example, said, “Nothing we have said here immunizes behavior that obstructs or interferes with effective law enforcement or the protection of public safety.” The court further stated, “While an officer surely cannot issue a ‘move on’ order to a person because he is recording, the police may order bystanders to disperse for reasons related to public safety and order and other legitimate law enforcement needs.”

Transparency is Vital

While a large number of deportations is a constant in the U.S. regardless of who is president or which party is in power, the current administration appears to be intentionally making ICE visible in cities and carrying out flashy raids to sow fear within immigrant communities. Specifically, there are concerns that this administration is targeting people already under government supervision while awaiting their day in court. Bearing witness and documenting the presence and actions of ICE in your communities and neighborhoods is important. You have rights, and one of them is your First Amendment-protected right to film law enforcement officers, including ICE agents.

Just because you have the right, however, does not mean law enforcement will always acknowledge and uphold your right in that moment. Be safe and be alert. If you have reason to think your devices might be seized or you may run the risk of putting yourself under surveillance, make sure to check out our Surveillance Self-Defense guides and our field guide to identifying and understanding the surveillance tools law enforcement may employ.

Saira Hussain

When Platforms and the Government Unite, Remember What’s Private and What Isn’t


For years now, there has been some concern about the coziness between technology companies and the government. Whether a company complies with casual government requests for data, requires a warrant, or even fights overly broad warrants has been a canary in the digital coal mine during an era when companies may know more about you than your best friends and family do. For example, in 2022, law enforcement served a warrant to Facebook for the messages of a 17-year-old girl—messages that were later used as evidence in a criminal case alleging that the teenager had obtained an abortion. In 2023, after a four-year wait since announcing its plans, Facebook encrypted its messaging system so that the company no longer had access to the content of those communications.

The privacy of messages and the relationship between companies and the government have real-world consequences. That is why a new era of symbiosis between big tech companies and the U.S. government bodes poorly both for our hopes that companies will be critical of requests for data and for any chance of tech regulation and consumer privacy legislation. But this chumminess should also come with a heightened awareness for users: as companies and the government become more entwined through CEO friendships, bureaucratic entanglements, and ideological harmony, we should all be asking what online data is private and what is sitting on a company's servers and accessible to corporate leadership at the drop of a hat.

Over many years, EFF has been pushing for users to switch to platforms that understand the value of encrypting data. We have also been pushing platforms to make end-to-end encryption for online communications and for your stored sensitive data the norm. This type of encryption helps ensure that a conversation is private between you and the recipient, and not accessible to the platform that runs it or any other third-parties. Thanks to the combined efforts of our organization and dozens of other concerned groups, tech users, and public officials, we now have a lot of options for applications and platforms that take our privacy more seriously than in previous generations. But, in light of recent political developments it’s time for a refresher course: which platforms and applications have encrypted DMs, and which have access to your sensitive personal communications.

A platform’s label of “end-to-end encryption” is not foolproof on its own. The encryption may be poorly implemented, lack the widespread adoption needed to attract the attention of security researchers, lack the funding to pay for security audits, or use a less well-established encryption protocol that doesn’t have much public scrutiny. It also can’t protect against other sorts of threats, like someone gaining access to your device or screenshotting a conversation. Being caught using certain apps can itself be dangerous in some cases. And it takes more than just a basic implementation to resist a targeted active attack, as opposed to later collection. But it’s still the best way we currently have to ensure our digital conversations are as private as possible. And more than anything, it needs to be something you and the people you speak with will actually use, so features can be an important consideration.
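
To make the core idea concrete, below is a minimal sketch of the property end-to-end encryption provides, written with the PyNaCl library (our choice for illustration; real messengers such as Signal layer far more on top, including key ratcheting and metadata protections). The point is simply that the service relaying the message only ever handles ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each person generates a key pair; only the public halves are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The platform only ever sees ciphertext; Bob decrypts with his private key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```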

No platform provides a perfect mix of security features for everyone, but understanding the options can help you start figuring out the right choices. When it comes to popular social media platforms, Facebook Messenger uses end-to-end encryption on private chats by default (this feature is optional in group chats on Messenger, and on some of the company’s other offerings, like Instagram). Other companies, like X, offer optional end-to-end encryption, with caveats, such as only being available to users who pay for verification. Then there are platforms like Snapchat, which has given talks about its end-to-end encryption in the past but doesn’t provide further details about its current implementation. Other platforms, like Bluesky, Mastodon, and TikTok, do not offer end-to-end encryption in direct messages, which means those conversations could be accessible to the companies that run the platforms or made available to law enforcement upon request.

As for apps more specifically designed around chat, there are more examples. Signal offers end-to-end encryption for text messages and voice calls by default with no extra setup on your part, and collects less metadata than other options. Metadata can reveal information such as who you are talking with and when, or your location, which in some cases may be all law enforcement needs. WhatsApp is also end-to-end encrypted. Apple’s Messages app is end-to-end encrypted, but only if everyone in the chat has an iPhone (blue bubbles). The same goes for Google Messages, which is end-to-end encrypted as long as everyone has set it up properly, which sometimes happens automatically.

Of course, we have a number of other communication tools at our disposal, like Zoom, Slack, Discord, Telegram, and more. Here, things continue to get complicated, with end-to-end encryption being an optional feature sometimes, like on Zoom or Telegram; available only for specific types of communication, like video and voice calls on Discord but not text conversations; or not being available at all, like with Slack. Many other options exist with varying feature-sets, so it’s always worth doing some research if you find something new. This does not mean you need to avoid these tools entirely, but knowing that your chats may be available to the platform, law enforcement, or an administrator is an important thing to consider when choosing what to say and when to say it. 

And for high-risk users, the story becomes even more complicated. Even on an encrypted platform, users can be subject to targeted machine-in-the-middle attacks (also known as man-in-the-middle attacks) unless everyone verifies each other’s keys. Most encrypted apps will let you do this manually, but some have started to implement automatic key verification, which is a security win. And encryption doesn’t matter if message backups are uploaded to the company’s servers unencrypted, so it’s important to either choose not to back up messages, or carefully set up encrypted backups on platforms that allow it. This is all before getting into the intricacies of how apps handle deleted and disappearing messages, or whether there’s a risk of being found with an encrypted app in the first place.
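
Manual key verification usually amounts to comparing a short fingerprint derived from the other person’s public key over a separate, trusted channel; Signal surfaces this as “safety numbers.” The sketch below illustrates the general idea only, not any app’s actual scheme, and the key bytes are placeholders.

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Break the digest into chunks so two people can read it to each other.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Each person computes the fingerprint their app displays for the other party
# and compares it in person or over another trusted channel. A mismatch can
# indicate a machine-in-the-middle substituting its own keys.
print(fingerprint(b"placeholder public key bytes exported from the app"))
```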

CEOs are not the beginning and the end of a company’s culture and concerns—but we should take their commitments and signaled priorities seriously. At a time when some companies may be cozying up to the parts of government with the power to surveil and marginalize, it might be an important choice to move our data and sensitive communications to different platforms. After all, even if you are not at specific risk of being targeted by the government, removing your participation from a platform sends a clear political message about what you value in a company.

Thorin Klosowski

EFF Sues OPM, DOGE and Musk for Endangering the Privacy of Millions

Lawsuit Argues Defendants Violated the Privacy Act by Disclosing Sensitive Data

NEW YORK—EFF and a coalition of privacy defenders led by Lex Lumina filed a lawsuit today asking a federal court to stop the U.S. Office of Personnel Management (OPM) from disclosing millions of Americans’ private, sensitive information to Elon Musk and his “Department of Government Efficiency” (DOGE).

The complaint on behalf of two labor unions and individual current and former government workers across the country, filed in the U.S. District Court for the Southern District of New York, also asks that any data disclosed by OPM to DOGE so far be deleted.

The complaint by EFF, Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm argues that OPM and OPM Acting Director Charles Ezell illegally disclosed personnel records to Musk’s DOGE in violation of the federal Privacy Act of 1974. Last week, a federal judge temporarily blocked DOGE from accessing a critical Treasury payment system under a similar lawsuit.

This lawsuit’s plaintiffs are the American Federation of Government Employees AFL-CIO; the Association of Administrative Law Judges, International Federation of Professional and Technical Engineers Judicial Council 1 AFL-CIO; Vanessa Barrow, an employee of the Brooklyn Veterans Affairs Medical Center; George Jones, President of AFGE Local 2094 and a former employee of VA New York Harbor Healthcare; Deborah Toussant, a former federal employee; and Does 1-100, representing additional current or former federal workers or contractors.

As the federal government is the nation’s largest employer, the records held by OPM represent one of the largest collections of sensitive personal data in the country. In addition to personally identifiable information such as names, social security numbers, and demographic data, these records include work information like salaries and union activities; personal health records and information regarding life insurance and health benefits; financial information like death benefit designations and savings programs; nondisclosure agreements; and information concerning family members and other third parties referenced in background checks and health records. OPM holds these records for tens of millions of Americans, including current and former federal workers and those who have applied for federal jobs. OPM has a history of privacy violations—an OPM breach in 2015 exposed the personal information of 22.1 million people—and its recent actions make its systems less secure.

With few exceptions, the Privacy Act limits the disclosure of federally maintained sensitive records on individuals without the consent of the individuals whose data is being shared. It protects all Americans from harms caused by government stockpiling of our personal data. This law was enacted in 1974, the last time Congress acted to limit the data collection and surveillance powers of an out-of-control President.

“The Privacy Act makes it unlawful for OPM Defendants to hand over access to OPM’s millions of personnel records to DOGE Defendants, who lack a lawful and legitimate need for such access,” the complaint says. “No exception to the Privacy Act covers DOGE Defendants’ access to records held by OPM. OPM Defendants’ action granting DOGE Defendants full, continuing, and ongoing access to OPM’s systems and files for an unspecified period means that tens of millions of federal-government employees, retirees, contractors, job applicants, and impacted family members and other third parties have no assurance that their information will receive the protection that federal law affords.” 

For more than 30 years, EFF has been a fierce advocate for digital privacy rights. In that time, EFF has been at the forefront of exposing government surveillance and invasions of privacy—such as forcing the release of hundreds of pages of documents about domestic surveillance under the Patriot Act—and enforcing existing privacy laws to protect ordinary Americans—such as in its ongoing lawsuit against Sacramento's public utility company for sharing customer data with police. 

For the complaint: https://www.eff.org/document/afge-v-opm-complaint

For more about the litigation: https://www.eff.org/deeplinks/2025/02/eff-sues-doge-and-office-personnel-management-halt-ransacking-federal-data

Contacts:
Electronic Frontier Foundation: press@eff.org
Lex Lumina LLP: Managing Partner Rhett Millsaps, rhett@lex-lumina.com

Josh Richman

The TAKE IT DOWN Act: A Flawed Attempt to Protect Victims That Will Lead to Censorship


Congress has begun debating the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of a troubling type of online content: non-consensual intimate imagery, or NCII. In recent years, concerns have also grown about the use of digital tools to alter or create such images, sometimes called deepfakes.

While protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. As currently drafted, the Act mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without addressing the problem it claims to solve.

The Bill Will Lead To Overreach and Censorship

S. 146 mandates that websites and other online services remove flagged content within 48 hours and requires “reasonable efforts” to identify and remove known copies. Although this provision is designed to allow NCII victims to remove this harmful content, its broad definitions and lack of safeguards will likely lead to people misusing the notice-and-takedown system to remove lawful speech.


"Take It Down" Has No real Safeguards  

The takedown provision applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the bill. The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The legislation’s tight time frame requires that apps and websites remove content within 48 hours, meaning that online service providers, particularly smaller ones, will have to comply so quickly to avoid legal risk that they won’t be able to verify claims. Instead, automated filters will be used to catch duplicates, but these systems are infamous for flagging legal content, from fair-use commentary to news reporting.

TAKE IT DOWN creates a far broader internet censorship regime than the Digital Millennium Copyright Act (DMCA), which has been widely abused to censor legitimate speech. But at least the DMCA has an anti-abuse provision and protects services from copyright claims should they comply. This bill contains none of those minimal speech protections and essentially greenlights misuse of its takedown regime.

Threats To Encrypted Services

The online services that do the best job of protecting user privacy could also be under threat from Take It Down. While the bill exempts email services, it does not provide clear exemptions for private messaging apps, cloud storage, and other end-to-end encrypted (E2EE) services. Services that use end-to-end encryption, by design, are not able to access or view unencrypted user content.

How could such services comply with the takedown requests mandated in this bill? Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces.

In fact, victims of NCII often rely on encryption for safety—to communicate with advocates they trust, store evidence, or escape abusive situations. The bill’s failure to protect encrypted communications could harm the very people it claims to help.

Victims Of NCII Have Legal Options Under Existing Law

An array of criminal and civil laws already exist to address NCII. In addition to 48 states that have specific laws criminalizing the distribution of non-consensual pornography, there are defamation, harassment, and extortion statutes that can all be wielded against people abusing NCII. Since 2022, NCII victims have also been able to bring federal civil lawsuits against those who spread this harmful content.

As we explained in 2018:

If a deepfake is used for criminal purposes, then criminal laws will apply. If a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. For any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations.


In many cases, civil claims could also be brought against those distributing the images under causes of action like False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes.

A false light plaintiff (such as a person harmed by NCII) must prove that a defendant (such as a person who uploaded NCII) published something that gives a false or misleading impression of the plaintiff in such a way to damage the plaintiff’s reputation or cause them great offense. 

Congress should focus on enforcing and improving these existing protections, rather than opting for a broad takedown regime that is bound to be abused. Private platforms can play a part as well, improving reporting and evidence collection systems. 

Joe Mullin

EFF Sues DOGE and the Office of Personnel Management to Halt Ransacking of Federal Data


EFF and a coalition of privacy defenders have filed a lawsuit today asking a federal court to block Elon Musk’s Department of Government Efficiency (DOGE) from accessing the private information of millions of Americans that is stored by the Office of Personnel Management (OPM), and to delete any data that has been collected or removed from databases thus far. The lawsuit also names OPM, and asks the court to block OPM from sharing further data with DOGE.

The Plaintiffs who have stepped forward to bring this lawsuit include individual federal employees as well as multiple employee unions, including the American Federation of Government Employees and the Association of Administrative Law Judges.

This brazen ransacking of Americans’ sensitive data is unheard of in scale. With our co-counsel Lex Lumina, State Democracy Defenders Fund, and the Chandra Law Firm, we represent current and former federal employees whose privacy has been violated. We are asking the court for a temporary restraining order to immediately cease this dangerous and illegal intrusion. This massive trove of information includes private demographic data and work histories of essentially all current and former federal employees and contractors as well as federal job applicants. Access is restricted by the federal Privacy Act of 1974. Last week, a federal judge temporarily blocked DOGE from accessing a critical Treasury payment system under a similar lawsuit.


What’s in OPM’s Databases?

The data housed by OPM is extraordinarily sensitive for several reasons. The federal government is the nation’s largest employer, and OPM’s records are one of the largest, if not the largest, collection of employee data in the country. In addition to personally identifiable information such as names, social security numbers, and demographics, it includes work experience, union activities, salaries, performance, and demotions; health information like life insurance and health benefits; financial information like death benefit designations and savings programs; and classified information nondisclosure agreements. It holds records for millions of federal workers and millions more Americans who have applied for federal jobs. 

The mishandling of this information could lead to such significant and varied abuses that they are impossible to detail. On its own, DOGE’s unchecked access puts the safety of all federal employees at risk of everything from privacy violations to political pressure to blackmail to targeted attacks. Last year, Elon Musk publicly disclosed the names of specific government employees whose jobs he claimed he would cut before he had access to the system. He has also targeted at least one former employee of Twitter. With unrestricted access to OPM data, and with his ownership of the social media platform X, federal employees are at serious risk.

And that’s just the danger from disclosure of the data on individuals. OPM’s records could give an overview of various functions of entire government agencies and branches. Regardless of intention, the law makes it clear that this data is carefully protected and cannot be shared indiscriminately.

In late January, OPM reportedly sent about two million federal employees its "Fork in the Road" form email introducing a “deferred resignation” program. This is a visible way in which the data could be used; OPM’s databases contain the email addresses of every federal employee. 

How the Privacy Act Protects Americans’ Data

Under the Privacy Act of 1974, disclosure of government records about individuals generally requires the written consent of the individual whose data is being shared, with few exceptions.

Congress passed the Privacy Act in response to a crisis of confidence in the government as a result of scandals including Watergate and the FBI’s Counterintelligence Program (COINTELPRO). The Privacy Act, like the Foreign Intelligence Surveillance Act of 1978, was created at a time when the government was compiling massive databases of records on ordinary citizens and had minimal restrictions on sharing them, often with erroneous information and in some cases for retaliatory purposes.


Congress was also concerned with the potential for abuse presented by the increasing use of electronic records and the use of identifiers such as social security numbers, both of which made it easier to combine individual records housed by various agencies and to share that information. In addition to protecting our private data from disclosure to others, the Privacy Act, along with the Freedom of Information Act, also allows us to find out what information is stored about us by the government. The Privacy Act includes a private right of action, giving ordinary people the right to decide for themselves whether to bring a lawsuit to enforce their statutory privacy rights, rather than relying on government agencies or officials.

It is no coincidence that these protections were created the last time Congress rose to the occasion of limiting the surveillance powers of an out-of-control President. That was fifty years ago; the potential impact of leaking this government information, representing the private lives of millions, is now even more serious. DOGE and OPM are violating Americans’ most fundamental privacy rights at an almost unheard-of scale. 

OPM’s Data Has Been Under Assault Before

Ten years ago, OPM announced that it had been the target of two data breaches. Over twenty million security clearance records—information on anyone who had undergone a federal employment background check, including their relatives and references—were reportedly stolen by state-sponsored attackers working for the Chinese government. At the time, it was considered one of the most potentially damaging breaches in government history. 

DOGE employees likely have access to significantly more data than this. Just as an example, the OPM databases also include personal information for anyone who applied to a federal job through USAJobs.gov—24.5 million people last year. Make no mistake: this is, in many ways, a worse breach than what occurred in 2014. DOGE has access to ten more years of data; it likely includes what was breached before, as well as significantly more sensitive data. (This is not to mention that while DOGE has access to these databases, they reportedly have the ability to not only export records, but to add them, modify them, or delete them.) Every day that DOGE maintains its current level of access, more risks mount. 

EFF Fights for Privacy

EFF has fought to protect privacy for nearly thirty-five years at the local, state, and federal level, as well as around the world. 

We have been at the forefront of exposing government surveillance and invasions of privacy: In 2006, we sued AT&T on behalf of its customers for violating privacy law by collaborating with the NSA in the massive, illegal program to wiretap and data-mine Americans’ communications. We also filed suit against the NSA in 2008; both cases arose from surveillance that the U.S. government initiated in the aftermath of 9/11. In addition to leading or serving as co-counsel in lawsuits, such as in our ongoing case against Sacramento's public utility company for sharing customer data with police, EFF has filed amicus briefs in hundreds of cases to protect privacy, free speech, and creativity.

EFF’s fight for privacy spans advocacy and technology, as well: Our free browser extension, Privacy Badger, protects millions of individuals from invasive spying by third-party advertisers. Another browser extension, HTTPS Everywhere, alongside Certbot, a tool that makes it easy to install free HTTPS certificates for websites, helped secure the web, which has now largely switched from non-secure HTTP to the more secure HTTPS protocol. 


EFF also fights to improve privacy protections by advancing strong laws, such as the California Electronic Communications Privacy Act (CalECPA) in 2015, which requires state law enforcement to get a warrant before they can access electronic information about who we are, where we go, who we know, and what we do. We also have a long, successful history of pushing companies to protect user privacy, from Apple to Amazon.

What’s Next

The question is not “what happens if this data falls into the wrong hands.” The data has already fallen into the wrong hands, according to the law, and it must be safeguarded immediately. Violations of Americans’ privacy have played out across multiple agencies, without oversight or safeguards, and EFF is glad to join the brigade of lawsuits to protect this critical information. Our case is fairly simple: OPM’s data is extraordinarily sensitive, OPM gave it to DOGE, and this violates the Privacy Act. We are asking the court to block any further data sharing and to demand that DOGE immediately destroy any and all copies of downloaded material. 

You can view the press release for this case here.

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Jason Kelley

Building a Community Privacy Plan


Digital security training can feel overwhelming, and not everyone will have access to new apps, new devices, and new tools. There also isn't one single system of digital security training, and we can't know the security plans of everyone we communicate with—some people might have concerns about payment processors preventing them from obtaining fees for their online work, whilst others might be concerned about doxxing or safely communicating sensitive medical information. 

This is why good privacy decisions begin with proper knowledge about your situation and a community-oriented approach. To start, explore the following questions together with your friends and family, organizing groups, and others:

  1. What do we want to protect? This might include sensitive messages, intimate images, or information about where protests are organized.
  2. Who do we want to protect it from? For example, law enforcement or stalkers. 
  3. How much trouble are we willing to go through to try to prevent potential consequences? After all, convincing everyone to pivot to a different app when they like their current service might be tricky! 
  4. Who are our allies? Besides those who are collaborating with you throughout this process, it’s a good idea to identify others who are on your side. Because they’re likely to share the same threats you do, they can be a part of your protection plans. 

This might seem like a big task, so here are a few essentials:

Use Secure Messaging Services for Every Communication 

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption, ensuring that only the sender and recipient of any communication have access to the content. But this protection does not reach its full potential without others joining you in communicating on these platforms. 

Of the most common messaging apps, Signal provides the most extensive privacy protections through its use of end-to-end encryption, and is available for download across the globe. But we know it might not always be possible to encourage everyone in your network to transition away from their current services. There are alternatives, though. WhatsApp, one of the most popular communication platforms in the world, uses end-to-end encryption, but collects more metadata than Signal. Facebook Messenger now also provides end-to-end encryption by default in one-on-one direct messages. 

Specific privacy concerns remain with group chats. Facebook Messenger has not enabled end-to-end encryption for chats that include more than two people, and popular platforms like Slack and Discord similarly do not provide these protections. These services may appear more user-friendly in accommodating large numbers, but in the absence of real privacy protections, make sure you consider what is being communicated on these sites and use alternative messaging services when talking about sensitive topics.

As a service's user base gets larger and more diverse, it's less likely that simply downloading and using it will indicate anything about a particular user's activities. For example, the more people use Signal, the less those seeking reproductive health care or coordinating a protest would stand out by downloading it. So beyond protecting just your communications, you’re building up a user base that can protect others who use encrypted, secure services and give them the shield of a crowd. 

It also protects your messages from being available to law enforcement should they request them from the platforms you use. In choosing a platform that protects our privacy, we create a space of safety and authenticity away from government and corporate surveillance.

For example, prosecutors in Nebraska used messages sent via Facebook Messenger (prior to the platform enabling end-to-end encryption by default) as evidence to charge a mother with three felonies and two misdemeanors for assisting her daughter with an abortion. Given that someone known to the family reported the incident to law enforcement, it’s unlikely using an end-to-end encrypted service would have prevented the arrest entirely, but it would have prevented the contents of personal messages turned over by Meta from being used as evidence in the case. 

Beyond this, it's important to know the privacy limitations of the platforms you communicate on. For example, while a secure messaging app might prevent government and corporate eavesdroppers from snooping on conversations, that doesn't stop someone you're communicating with from taking screenshots, or the government from attempting to compel you (or your contact) to turn over your messages yourselves. Secure messaging apps also don't protect when someone gets physical access to an unlocked phone with all those messages on it, which is why you may want to consider enabling disappearing message features for certain conversations.

Consider The Content You Post On Social Media 

We’re all interconnected in this digital age. Even without everyone having access to their own personal device or the internet, it is pretty difficult to completely opt out of the online world. One person’s decision to upload a picture to a social media platform may impact another person without that person even knowing it, such as by revealing an association with a movement or a topic that they don’t want to be public knowledge. 

Talk with your friends about the potentially sensitive data you reveal about each other online. Even if you don’t have a social media account, or if you untag yourself from posts, friends can still unintentionally identify you, report your location, and make their connections to you public. This works in the offline world too, such as sharing precautions with organizers and fellow protesters when going to a demonstration, and discussing ahead of time how you can safely document and post the event online without exposing those in attendance to harm.

It’s important to carefully consider the tradeoffs between publicity and privacy when it comes to social media. If you’re promoting something important that needs greater reach, it may be worth posting to the more popular platforms, even those that undermine user privacy. To do so, it’s vital that you compartmentalize your personal information (registration credentials, post attribution, friends list, etc.) away from these accounts.

If you are organising online or conversing on potentially sensitive issues, choose platforms that limit the amount of information collected and tracking undertaken. We know this is not always possible—perhaps people cannot access different applications, or might not have interest in downloading or using a different service. In this scenario, think about how you can protect your community on the platform you currently engage on. For example, if you currently use Facebook for organizing, work with others to keep your Facebook groups as private and secure as Facebook allows.

Think About Cloud Servers as Other People’s Computers  

For our online world to function, corporations use online servers (often referred to as the cloud) to store the mass amounts of data collected from our devices. When we back up our content to these cloud services, corporations may run automated tools to check the content being stored, including scanning all our messages, pictures, and videos. The best case scenario if your content is falsely flagged is that your account is temporarily blocked; the worst case could see your entire account deleted and/or legal action initiated over content perceived as illegal. 

For example, in 2021 a father took pictures of his son’s groin area and sent these to a health care provider’s messaging service. Days later, his Google account was disabled because the photos constituted “a severe violation of Google’s policies and might be illegal,” with an attached link flagging “child sexual abuse and exploitation” as one of the possible reasons. Despite the photos being taken for medical purposes, Google refused to reinstate the account, meaning that the father lost access to years of emails, pictures, account login details, and more. In a similar case, a father in Houston took photos of his child’s infected intimate parts to send to his wife via Google’s chat feature. Google refused to reinstate this account, too.

The adage goes, “there are no clouds, just other people’s computers.” It’s true! As countless discoveries over the years have revealed, the information you share on Slack at work is on Slack's computers and made accessible to your employer. So why not take extra care to choose whose computers you’re trusting with sensitive information? 

If it makes sense to back up your data onto encrypted thumb drives or limited cloud services that provide options for end-to-end encryption, then so be it. What’s most important is that you follow through with backing it up. And regularly!
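
One way to follow through is to encrypt a backup locally before it ever touches someone else’s computer, so the service only stores ciphertext. Here is a minimal sketch assuming Python with the cryptography package installed; the file names are placeholders, and keeping the key safe (in a password manager, or printed and stored securely) matters as much as the code.

```python
from cryptography.fernet import Fernet

# Generate a secret key once and keep it somewhere safe: anyone who holds it
# can read the backup, and losing it means losing the backup.
key = Fernet.generate_key()

with open("backup.tar", "rb") as f:
    plaintext = f.read()

# Encrypt locally; only the ciphertext is uploaded to the cloud service.
with open("backup.tar.enc", "wb") as f:
    f.write(Fernet(key).encrypt(plaintext))

# To restore, download backup.tar.enc and decrypt with the same key.
with open("backup.tar.enc", "rb") as f:
    restored = Fernet(key).decrypt(f.read())
assert restored == plaintext
```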

Assign Team Roles

Adopting all of these best practices can be daunting; we get it. Every community is made up of people with different strengths, so with some consideration you can make smart decisions about who does what for the collective privacy and security. Once this work is broken down into smaller, more easily done tasks, it’s easier for a group to accomplish together. As familiarity with these tasks grows, you’ll realize you’re developing a team of experts, and after some time, you can teach each other.

Create Incident Response Plans

Developing a plan for if or when something bad happens is a good practice for anyone, but especially a community of people who face increased risk. Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies around what to do in the event of such things happening. Doing so before an incident occurs is much easier than when you’re presently facing a crisis.

Only you and your allies can decide what belongs on such a plan, but some strategies might be: 

  • Isolating the impacted areas, such as shutting down social media accounts and turning off affected devices
  • Notifying others who may be affected
  • Switching communications to a predetermined more secure alternative
  • Noting behaviors of suspected threats and documenting these 
  • Outsourcing tasks to someone further from the affected circle who is already aware of this potential responsibility.

Everyone's security plans and situations will always be different, which is why we often say that security and privacy are a state of mind, not a purchase. But the first step is always taking a look at your community and figuring out what's needed and how to get everyone else on board.

Paige Collings

Privacy Loves Company


Most of the internet’s blessings—the opportunities for communities to connect despite physical borders and oppressive controls, the avenues to hold the powerful accountable without immediate censorship, the sharing of our hopes and frustrations with loved ones and strangers alike—tend to come at a price. Governments, corporations, and bad actors too often use our content for surveillance, exploitation, discrimination, and harm.

It’s easy to dismiss these issues because you don’t think they concern you. It might also feel like the whole system is too pervasive to actively opt out of. But we can take small steps to better protect our own privacy, as well as to build an online space that feels as free and safe as speaking with those closest to us in the offline world.

This is why a community-oriented approach helps. In speaking with your friends and family, organizing groups, and others to discuss your specific needs and interests, you can build out digital security practices that work for you. This makes it more likely that your privacy practices will become second nature to you and your contacts.  

Good privacy decisions begin with proper knowledge about your situation—and we’ve got you covered. To learn more about building a community privacy plan, read our ‘how to’ guide here, where we talk you through the topics below in more detail: 

Using Secure Messaging Services For Every Communication 

At some point, we all need to send a message that’s safe from prying eyes, so the chances of these apps becoming the default for sensitive communications are much higher if we use these platforms for all communications. On an even simpler level, it also means that messages and images sent to family and friends in group chats will be safe from being viewed by automated and human scans on services like Telegram and Facebook Messenger. 

Consider The Content You Post On Social Media 

Our decision to send messages, take pictures, and interact with online content has a real offline impact, and whilst we cannot control for every circumstance, we can think about how our social media behaviour impacts those closest to us, as well as those in our proximity. 

Think About Cloud Servers as Other People’s Computers  

When we back up our content to online cloud services, corporations may run automated tools to check the content being stored, including scanning all our messages, pictures, and videos. Whilst we might think we don't have anything to hide, these tools scan without context, and what might be an innocent picture to you may be flagged as harmful or illegal by a corporation's service. So why not take extra care to choose whose computers you’re entrusting with sensitive information? 

Assign Team Roles

Once these privacy tasks are broken down into smaller, more easily done projects, it’s much easier for a group to accomplish together. 

Create Incident Response Plans

Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies about what to do in such circumstances. Doing so before an incident occurs is much easier than on the fly when you’re already facing a crisis.

To dig in deeper, continue reading in our blog post Building a Community Privacy Plan here.

Paige Collings

Why the So-Called AI Action Summit Falls Short


Ever since ChatGPT’s debut, artificial intelligence (AI) has been the center of worldwide discussions on the promises and perils of new technologies. This has spawned a flurry of debates on the governance and regulation of large language models and “generative” AI, which have, among other things, resulted in the Biden administration’s executive order on AI and international guiding principles for the development of generative AI and influenced Europe’s AI Act. As part of that global policy discussion, the UK government hosted the AI Safety Summit in 2023, which was followed in 2024 by the AI Seoul Summit, leading up to this year’s AI Action Summit hosted by France.

As heads of state and CEOs head to Paris for the AI Action Summit, the summit’s shortcomings are becoming glaringly obvious. The summit, which is hosted by the French government, has been described as a “pivotal moment in shaping the future of artificial intelligence governance.” However, a closer look at its agenda and the voices it will amplify tells a different story.

Focusing on AI’s potential economic contributions, and not differentiating between, for example, large language models and automated decision-making, the summit fails to take into account the many ways in which AI systems can be abused to undermine fundamental rights and push the planet's already stretched ecological limits over the edge. Instead of centering nuanced perspectives on the capabilities of different AI systems and associated risks, the summit’s agenda paints a one-sided and simplistic image, not reflective of global discussion on AI governance. For example, the summit’s main program does not include a single panel addressing issues related to discrimination or sustainability.


This imbalance is also mirrored in the summit’s speakers, among which industry representatives notably outnumber civil society leaders. While many civil society organizations are putting on side events to counterbalance the summit’s misdirected priorities, an exclusive summit captured by industry interests cannot claim to be a transformative venue for global policy discussions.

The summit’s significant shortcomings are especially problematic in light of the leadership role European countries are claiming when it comes to the governance of AI. The European Union’s AI Act, which recently entered into force, has been celebrated as the world’s first legal framework addressing the risks of AI. However, whether the AI Act will actually “promote the uptake of human centric and trustworthy artificial intelligence” remains to be seen. 

It's unclear if the AI Act will provide a framework that incentivizes the rollout of user-centric AI tools or whether it will lock in specific technologies at the expense of users. We like that the new rules contain a lot of promising language on fundamental rights protection; however, exceptions for law enforcement and national security render some of the safeguards fragile. This is especially true when it comes to the use of AI systems in high-risk contexts such as migration, asylum, border controls, and public safety, where the AI Act does little to protect against mass surveillance and profiling and predictive technologies. We are also concerned by the possibility that other governments will copy-paste the AI Act’s broad exceptions without having the strong constitutional and human rights protections that exist within the EU legal system. We will therefore keep a close eye on how the AI Act is enforced in practice.

The summit also lags in addressing the essential role human rights should play in providing a common baseline for AI deployment, especially in high-impact uses. Although human-rights-related concerns appear in a few sessions, the summit, purportedly a global forum aimed at unleashing the potential of AI for the public good and in the public interest, at a minimum misses the opportunity to clearly articulate how such a goal connects with fulfilling international human rights guarantees and which steps this entails.


Ramping up government use of AI systems is generally a key piece in national strategies for AI development worldwide. While countries must address the AI divide, doing so must not mean replicating AI harms. For example, we’ve elaborated on leveraging Inter-American human rights standards to tackle challenges and violations that emerge from public institutions’ use of algorithmic systems for rights-affecting determinations in Latin America.

In the midst of a global AI arms race, we do not need more AI hype. Rather, there is a crucial need for evidence-based policy debates that address AI power centralization and consider the real-world harms associated with AI systems—while enabling diverse stakeholders to engage on equal footing. The AI Action Summit will not be the place to have this conversation.

Svea Windwehr

The UK's Demand for Apple to Break Encryption Is an Emergency for Us All


The Washington Post reported that the United Kingdom is demanding that Apple create an encryption backdoor to give the government access to end-to-end encrypted data in iCloud. Encryption is one of the best ways we have to reclaim our privacy and security in a digital world filled with cyberattacks and security breaches, and there’s no way to weaken it in order to only provide access to the “good guys.” We call on Apple to resist this attempt to undermine the right to private spaces and communications.

As reported, the British government’s undisclosed order was issued last month, and requires the capability to view all encrypted material in iCloud. The core target is Apple’s Advanced Data Protection, which is an optional feature that turns on end-to-end encryption for backups and other data stored in iCloud, making it so that even Apple cannot access that information. For a long time, iCloud backups were a loophole for law enforcement to gain access to data otherwise not available to them on iPhones with device encryption enabled. That loophole still exists for anyone who doesn’t opt in to using Advanced Data Protection. If Apple does comply, users should consider disabling iCloud backups entirely. Perhaps most concerning, the U.K. is apparently seeking a backdoor into users’ data regardless of where they are or what citizenship they have.

There is no technological compromise between strong encryption that protects the data and a mechanism to allow the government special access to this data. Any “backdoor” built for the government puts everyone at greater risk of hacking, identity theft, and fraud. There is no world where, once built, these backdoors would only be used by open and democratic governments. These systems can be, and quickly will be, used by more repressive governments around the world to read protesters’ and dissenters’ communications. We’ve seen and opposed these sorts of measures for years. Now is no different.


Of course, Apple is not the only company that uses end-to-end encryption. Some of Google’s backup options employ similar protections, as do many chat apps, cloud backup services, and more. If the U.K. government secures access to the encrypted data of Apple users through a backdoor, every other secure file-sharing, communication, and backup tool is at risk.

Meanwhile, in the U.S., just last year we had a top U.S. cybersecurity chief declare that “encryption is your friend,” a welcome break from the anti-encryption messaging EFF has pushed back on over the years. Even the FBI, which has frequently pushed for easier law enforcement access to data, issued the same recommendation.

There is no legal mechanism for the U.S. government to force this same sort of rule on Apple, and we hope to see Apple continue to resist it as it has in the past. But what happens in the U.K. will still affect users around the world, especially as the U.K. order specifically stated that Apple would be prohibited from warning its users that its Advanced Data Protection measures no longer work as initially designed.

Weakening encryption violates fundamental human rights and annihilates our right to private spaces. Apple must continue fighting this order to keep backdoors off users’ devices.

Thorin Klosowski

EFF to Ninth Circuit: Young People Have a First Amendment Right to Use Social Media (and All of Its Features)

2 months 3 weeks ago

Minors, like everyone else, have First Amendment rights. These rights extend to their ability to use social media both to speak and access the speech of others online. But these rights are under attack, as many states seek to limit minors’ use of social media through age verification measures and outright bans. California’s SB 976, or the Protecting Our Kids from Social Media Addiction Act, prohibits minors from using a key feature of social media platforms—personalized recommendation systems, or newsfeeds. This law impermissibly burdens minors’ ability to communicate and find others’ speech on social media. 

On February 6th, 2025, EFF, alongside the Freedom to Read Foundation and Library Futures, filed a brief in the Ninth Circuit Court of Appeals in NetChoice v. Bonta urging the court to overturn the district court decision partially denying a preliminary injunction of SB 976.  

SB 976, passed into law in September 2024, prohibits various online platforms from providing personalized recommendation systems to minors without parental consent. For now, this prohibition only applies where the platforms know a user is a minor. Starting in 2027, however, the platforms will need to estimate the age of all their users based on regulations promulgated by the California attorney general. This means that (1) all users of platforms with these systems will need to pass through an age gate to continue using these features, and (2) children without parental consent will be denied access to the protected speech that is organized and distributed via newsfeeds. This is separate from the fact that feeds are central to most platforms’ user experience, and it’s not clear how social media platforms can or will adapt the experience for young people to comply with this law. Because these effects burden both users’ and platforms’ First Amendment rights, EFF filed this friend-of-the-court brief. This work is part of our broader fight against similar age-verification laws at the state and federal levels.

EFF got involved in this suit both to advocate for the First Amendment rights of adult and minor users and to correct the district court’s dangerous reasoning. The district court, hearing NetChoice’s challenge on behalf of online platforms, ruled that the personalized feeds covered by SB 976 are not expressive, and therefore not covered by the First Amendment. The lower court took an extremely narrow view of what constitutes expressive activity, writing that the algorithms behind personalized newsfeeds don’t reflect the messages or editorial choices of their human creators and therefore do not trigger First Amendment scrutiny. The Ninth Circuit has since stayed the district court’s ruling, preliminarily blocking the law from taking effect until it has a chance to consider the issues.

EFF pushed back on this flawed reasoning, arguing that “the personalized feeds targeted by SB 976 are inherently expressive, because they (1) reflect the choices made by platforms to organize content on their services, (2) incorporate and respond to the expression users create to distribute users’ speech, and (3) provide users with the means to access speech in a digestible and organized way.” Moreover, the presence of these personalized recommendation systems informs the speech that users create on platforms, as users often create content with the intent of it getting “picked up” by the algorithm and delivered to other users.  

SB 976 burdens the First Amendment rights of minor social media users by blocking their use of primary systems created to distribute their own speech and to hear others’ speech via those systems, EFF’s brief argues. The statute also burdens all internet users’ First Amendment rights because the age-verification scheme it requires will block some adults from accessing lawful speech, make it impossible for them to speak anonymously on these services, and increase their risk of privacy invasions. Under the law, adults and minors alike will need to provide identifying documents to prove their age, which chills users of any age who wish to remain anonymous from accessing protected speech, excludes adults lacking proper documentation, and exposes those who do share their documentation to data breaches or sale of their data. 

We hope the Ninth Circuit recognizes that personalized recommendation systems are expressive in nature, subjects SB 976 to strict scrutiny, and rejects the district court ruling.

Related Cases: NetChoice Must-Carry Litigation
Emma Leeds Armstrong

EFF Applauds Little Rock, AR for Cancelling ShotSpotter Contract

2 months 3 weeks ago

Community members coordinated to pack Little Rock City Hall on Tuesday, where board members voted 5-3 to end the city's contract with ShotSpotter.

Initially funded through a federal grant, Little Rock began its experiment with the “gunshot detection” sensors in 2018. ShotSpotter (now SoundThinking) has long been accused of steering federal grants toward local police departments in an effort to secure funding for the technology. Members of Congress are investigating this funding. EFF has long encouraged communities to follow the money that pays for police surveillance technology.

Now, faced with a $188,000 contract renewal using city funds, Little Rock has joined the growing number of cities nationwide that have rejected, ended, or called into question their use of the invasive, error-prone technology.

EFF has been a vocal critic of gunshot detection systems and extensively documented how ShotSpotter sensors risk capturing private conversations and enable discriminatory policing—ultimately calling on cities to stop using the technology.

This call has been echoed by grassroots advocates coordinating through networks like the National Stop ShotSpotter Coalition. Community organizers have dedicated countless hours to popular education, canvassing neighborhoods, and conducting strategic research to debunk the company's spurious marketing claims.

Through that effort, Little Rock has now joined the ranks of cities across the country that have rejected surveillance technologies like gunshot detection, which harm marginalized communities and fail time and time again to deliver meaningful public safety.

If you live in a city that's also considering dropping (or installing) ShotSpotter, share this news with your community and local officials!

Sarah Hamid

Protecting Free Speech in Texas: We Need To Stop SB 336

2 months 3 weeks ago

The Texas legislature will soon be debating a bill that would seriously weaken the free speech protections of people in that state. If you live in Texas, it’s time to contact your state representatives and let them know you oppose this effort. 

Texas Senate Bill 336 (SB 336) is an attack on the Texas Citizens Participation Act (TCPA), the state’s landmark anti-SLAPP law, passed in 2011 with overwhelming bipartisan support. If passed, SB 336 (or its identical companion bill, H.B. 2459) will weaken safeguards against abusive lawsuits that seek to silence peoples’ speech. 

What Are SLAPPs?

SLAPPs, or Strategic Lawsuits Against Public Participation, are lawsuits filed not to win on the merits but to burden individuals with excessive legal costs. SLAPPs are often used by the powerful to intimidate critics and discourage public discussion that they don’t like. By forcing defendants to engage in prolonged and expensive legal battles, SLAPPs create a chilling effect that discourages others from speaking out on important issues.

Under the TCPA, when a defendant files a motion to dismiss a SLAPP lawsuit, the legal proceedings are automatically paused while a court determines whether the case should move forward. They are also paused if the SLAPP victim needs to get a second review from an appellate court. This is crucial to protect individuals from being dragged through an expensive discovery process while their right to speak out is debated in a higher court.

SB 336 Undermines Free Speech Protections

SB 336 strips away safeguards by removing the automatic stay of trial court proceedings in certain TCPA appeals. Even if a person has a strong claim that a lawsuit against them is frivolous, they would still be forced to endure the financial and emotional burden of litigation while waiting for an appellate decision. 

This would expose litigants to legal harassment. With no automatic stay, plaintiffs with deep pockets will be able to financially drain defendants. In the words of former Chief Justice of the Texas Supreme Court, Wallace B. Jefferson, removing the automatic stay in the TCPA would create a “two-tier system in which parties would be forced to litigate their cases simultaneously at the trial and appellate courts.”

If the TCPA is altered, the biggest losers will be everyday Texans who rely on the TCPA to shield them from retaliatory lawsuits. That will include domestic violence survivors who face defamation suits from their abusers after reporting them; journalists and whistleblowers who expose corruption and corporate wrongdoing; grassroots activists who choose to speak out; and small business owners and consumers who leave honest reviews and speak out against unethical business practices.

Often, these individuals already face uphill battles when confronting wealthier and more powerful parties in court. SB 336 would tip the scales further in favor of those with the financial means to weaponize the legal system against speech they dislike.

Fighting To Protect Free Speech For Texans 

In addition to EFF, SB 336 is opposed by a broad coalition of groups including the ACLU, the Reporters Committee for Freedom of the Press, and an array of national and local news organizations. To learn more about the TCPA and current efforts to weaken it, check out the website maintained by the Texas Protect Free Speech Coalition.

Unfortunately, this is the fourth legislative session in a row in which a bill has been pushed to significantly weaken the TCPA. Those efforts started in 2019, and while we stopped the worst changes that year, the 2019 Texas Legislature did vote through some unfortunate exceptions to TCPA rules. We succeeded in blocking a slate of poorly thought-out changes in 2023. We can, and must, protect the TCPA again in 2025, if people speak up.

If you live in Texas, call or email your state representatives or the Senators on the Committee on State Affairs today and urge them to vote NO on SB 336. Let’s ensure Texas continues to be a place where people’s voices are heard, not silenced by unjust lawsuits.

Joe Mullin

Closing the Gap in Encryption on Mobile

2 months 3 weeks ago

It’s time to expand encryption on Android and iPhone. With governments around the world engaging in constant attacks on users’ digital rights and access to the internet, taking glaring and potentially dangerous targets off of people’s backs when they use their mobile phones is more important than ever.

So far we have seen strides in keeping messages private on mobile devices with end-to-end encrypted apps like Signal, WhatsApp, and iMessage. Encryption on the web has been widely adopted; we even declared in 2021 that “HTTPS Is Actually Everywhere.” Most web traffic is encrypted, and for a website to have a reputable presence with browsers, it has to meet certain requirements that major browsers enforce today. Mechanisms like certificate transparency, Cross-Origin Resource Sharing (CORS) rules, and enforced HTTPS help protect users from malicious activity every day.

Yet mobile has always been a different and ever-expanding context. You access the internet on mobile devices through more than just the web browser. Mobile applications have more room to spawn network requests in the app without the user ever knowing where and when a request was sent. There is no “URL bar” where the user can see and check the request URL. In some cases, apps have been known to “roll their own” cryptography, relying on non-standard encryption practices.

While there is much to discuss on the privacy issues of TikTok and other social media apps, for now, let’s just focus on encryption. In 2020, security researcher Baptiste Robert found that TikTok used its own “custom encryption,” dubbed “ttEncrypt.” Later research showed this was a weak encryption scheme compared to simply using HTTPS. Eventually, TikTok replaced ttEncrypt with HTTPS, but this is an example of one of the many practices mobile applications can engage in without much regulation, transparency, or control by the user.

Android has made some strides to protect users’ traffic in apps, like allowing you to set private DNS. Yet Android app developers can still set a flag to allow cleartext (unencrypted) requests. Android owners should be able to block app requests engaging in this practice. While security settings can be difficult for users to configure themselves due to lack of understanding, it would be a valuable setting to provide, especially since users are currently being bombarded on their devices to turn on features they didn’t even ask for or want. Blocking this flag can’t possibly capture all cleartext traffic, given the amount of network access “below” HTTPS in the network stack that apps can control. However, it would be a good first step for a lot of apps that still use HTTP/unencrypted requests.

As for iOS, Apple introduced a feature called iCloud Private Relay. In Apple’s words, “iCloud Private Relay is designed to protect your privacy by ensuring that when you browse the web in Safari, no single party — not even Apple — can see both who you are and what sites you're visiting.” This helps shield your IP address from the websites you visit, and it is a useful alternative for people who use VPNs for IP masking. In several countries engaging in internet censorship and digital surveillance, using a VPN can put a target on you, so it’s more pertinent than ever to be able to browse privately on your devices without setting off alarms. But Private Relay sits behind an iCloud+ subscription and is only available in Safari. It would be better to make this free and expand Private Relay across more of iOS, especially apps.

There are nuances as to why Private Relay isn’t like a traditional VPN. The “first hop” exposes the IP address to Apple and your Internet Service Provider, but neither party can see the names of the websites you request. Apple is vague about the “second relay,” stating, “The second internet relay is operated by third-party partners who are some of the largest content delivery networks (CDNs) in the world.” Cloudflare is confirmed as one of these third-party partners, and its explanation goes further, noting that the standards used for Private Relay are TLS 1.3, QUIC, and MASQUE.

The combination of protocols used in Private Relay could be approximated on Android by using Cloudflare’s 1.1.1.1 app, which would be the closest technical match and would apply globally rather than just in the browser. A more favorable outcome would be deploying this technology on mobile in a way that doesn’t rely on just one company to distribute modern encryption. Android’s Private DNS setting allows for various providers, but that covers just the encrypted DNS part of the request.
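
To make the encrypted DNS piece concrete, here is a minimal sketch, assuming Python with the requests library installed, of resolving a hostname over DNS-over-HTTPS using Cloudflare’s public JSON resolver. The endpoint and response fields reflect Cloudflare’s published DoH API, but treat the helper itself as an illustration rather than an endorsement of any particular provider:

```python
import requests

def resolve_over_https(hostname: str, record_type: str = "A") -> list[str]:
    """Resolve a hostname via DNS-over-HTTPS so the lookup itself is encrypted.

    Only the DNS query is protected here; the later connection to the site
    is a separate request and still needs HTTPS of its own.
    """
    response = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": hostname, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    response.raise_for_status()
    answers = response.json().get("Answer", [])
    return [record["data"] for record in answers]

if __name__ == "__main__":
    # Prints the A records for eff.org without sending a plaintext DNS query.
    print(resolve_over_https("eff.org"))
```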

VPNs are another tool that can be used to mask an IP address and circumvent censorship, especially in cases where someone distrusts their Internet Service Provider (ISP). But using VPNs for this sole purpose should start to become obsolete as modern encryption protocols that protect the user are deployed. Better encryption practices across mobile platforms would lessen the need for people to flock to potentially nefarious VPN apps that put the user in danger. Android just announced a new badge program that attempts to address this issue by getting VPNs to adhere to Play Store guidelines for security and Mobile Application Security Assessment (MASA) Level 2 validation. While this attempt is noted, when mass censorship is applied, users may not always go to the most reputable VPN, or may not even be able to access reputable VPNs, because Google and Apple comply with app store takedowns. So widening encryption outside of VPN usage is essential. Blocking cleartext requests by apps, allowing users to restrict an app’s network access, and expanding Apple’s Private Relay would be steps in the right direction.

There are many other privacy leaks apps can engage in that expose what you are doing. In the case of apps acting badly, by either rolling their own unverified cryptography or using HTTP, users should be able to block those apps’ network access. Just because the problem of mobile privacy is complex doesn’t mean that complexity should stop progress. We can have a more private internet on our phones. “Encrypt all the things!” includes the devices we use the most to access the web and communicate with each other every day.

Alexis Hancock

Paraguay’s Broadband Providers Continue to Struggle to Attain Best Practices in Protecting Users’ Data

2 months 3 weeks ago

Paraguay’s five leading broadband service providers made some strides in making their privacy policies more accessible to the public, but they continue to fall short in their commitments to transparency, due process in sharing metadata with authorities, and promoting human rights—all of which limits their users’ privacy rights, according to the new edition of TEDIC’s ¿Quién Defiende Tus Datos? (“Who Defends Your Data”).

The report shows that, in general, providers operating as subsidiaries of foreign companies are making more progress in committing to user privacy than national internet providers. But the overall performance of the country’s providers continues to lag behind their counterparts in the region. 

As in its four previous reports about Paraguay, TEDIC evaluated Claro, Personal, and Tigo, which are subsidiaries, and national providers Copaco and Vox. 

The companies were evaluated on seven criteria: whether they provide clear and comprehensive information about how they collect, share, and store user data; require judicial authorization to disclose metadata and communication content to authorities; notify users whose data is turned over to the government; publicly take a stance to support rights protections; publish transparency reports; provide guidelines for security forces and other government bodies on how to request user information; and make their websites accessible to people with disabilities.

Tigo performed best, demonstrating 73% overall compliance with the criteria, while Vox came in last, receiving credit for complying with only 5% of the requirements.  

Paraguay’s full study, including a table summarizing the report’s evaluations of each provider, is available in Spanish.

Privacy, Judicial Authorization Policies Lag 

The report shows that Claro, Personal, and Tigo provide relatively detailed information on data collection and processing practices, but none clearly describe data retention periods, a crucial aspect of data protection. Copaco, despite having a privacy policy, limits its scope to data collected on its applications, neglecting to address data processing practices for its services, such as Internet and telephone. Vox has no publicly available privacy policy.

On the plus side, three out of the five providers in the report met all criteria in the privacy policy category; no company disclosed its data collection policies when TEDIC’s reports began in 2017. The progress, though slow, is notable given that Paraguay doesn’t have a comprehensive data protection law—one of the few Latin American countries without one. A bill is pending in Paraguay’s Parliament, but it has not yet been approved.

All five providers require a court order before handing over user information, but the report concludes that their policies don’t cover communications metadata, despite the fact that international human rights standards applicable to surveillance, established in the Inter-American Court of Human Rights rulings in Escher v. Brazil (2009) and CAJAR v. Colombia (2023), state that metadata should be protected under the same privacy guarantees as communications content.

Nonexistent User Notification 

None of the five ISPs has a policy of notifying users when their data is requested by the authorities. This lack of transparency, already identified in all previous editions of QDTD, raises significant concerns about user rights and due process protections in Paraguay. 

While no provider has made a strong commitment to publicly promote human rights, Tigo met three of the four requirements in this category and Claro received half credit, though in both cases the credit stems from the policies of their parent companies rather than from direct commitments by their local units. Tigo and Claro are also the companies with the most security campaigns for their users, as identified throughout the editions of ¿Quién Defiende Tus Datos?

Claro and Tigo also provide some transparency about government requests for user data, but these reports are only accessible on their parent companies’ websites and, even then, the regional transparency reports do not always provide detailed country-level breakdowns, making it difficult to assess the specific practices and compliance rates of their national subsidiaries.

Karen Gullo

Victory! EFF Helps Defeat Meritless Lawsuit Against Journalist

2 months 3 weeks ago

Jack Poulson is a reporter, and when a confidential source sent him the police report of a tech CEO’s arrest for felony domestic violence, he did what journalists do: reported the news.  

The CEO, Maury Blackman, didn’t like that. So he sued Poulson—along with Amazon Web Services, Substack, and Poulson’s non-profit, Tech Inquiry—to try to force Poulson to take down his articles about the arrest. Blackman argued that a court order sealing the arrest allowed him to censor the internet—despite decades of Supreme Court and California Courts of Appeal precedent to the contrary.

This is a classic SLAPP: strategic lawsuit against public participation. Fortunately, California’s anti-SLAPP statute provides a way for defendants to swiftly defeat baseless claims designed to chill their free speech.  

The court granted Poulson’s motion to strike Blackman’s complaint under the anti-SLAPP statute on Tuesday.  

In its order, the court agreed that the First Amendment protects Poulson’s right to publish and report on the incident report.  

This is an important ruling.  

Under Bartnicki v. Vopper, the First Amendment protects journalists who report on truthful matters of public concern, even when the information they are reporting on was obtained illegally by someone else. Without it, reporters would face liability when they report on information provided by whistleblowers that companies or the government want to keep secret.

Those principles were upheld here: Although courts have the power to seal records in appropriate cases, if and when someone provides a copy of a sealed record to a reporter, the reporter shouldn’t be forced to ignore the newsworthy information in that record. Instead, they should be allowed to do what journalists do: report the news.  

And thanks to the First Amendment, a journalist who hasn’t done anything illegal to obtain the information has the right to publish it.

The court agreed that Poulson’s First Amendment defense defeated all of Blackman’s claims. As the court said: 

"This court is persuaded that the First Amendment’s protections for the publication of truthful speech concerning matters of public interest vitiate Blackman’s merits showing…in this case there is no evidence that Poulson and the other defendants knew the arrest was sealed before Poulson reported on it, and all defendants’ actions in not taking down the arrest information after Blackman informed them of the sealing order was not so wrongful or unlawful that they are not protected."

The court also agreed that CEOs like Blackman cannot rewrite history by obtaining court orders that seal unflattering information—like an arrest for felony domestic violence. Blackman argued that, because, under California law, sealed arrests are “deemed” not to have occurred for certain legal purposes, reporting that he had been arrested was somehow false—and actionable. It isn’t.  

The court agreed with Poulson: statutory language that alleviates some of the consequences of an arrest “cannot alter how past events unfolded.”  

Simply put, no one can use the legal system to rewrite history.  

EFF is thrilled that the court agrees.  

Tori Noble

DDoSed by Policy: Website Takedowns and Keeping Information Alive

2 months 3 weeks ago

Who needs a DDoS (Distributed Denial of Service) attack when you have a new president? As of February 2nd, thousands of web pages and datasets have been removed from U.S. government agencies following a series of executive orders. The impacts span the Department of Veterans Affairs and the Centers for Disease Control and Prevention, all the way to programs like Head Start.

Government workers had just two days to carry out sweeping takedowns and rewrites due to a memo from the Office of Personnel Management. The memo cites a recent executive order attacking trans people and further stigmatizing them by forbidding words used to accurately describe sex and gender. The result was government-mandated censorship to erase these identities from a broad swath of websites, resources, and scientific research, regardless of context. This flurry of confusion comes on the heels of another executive order threatening CDC research by denying funding for government programs that promote diversity, equity, and inclusion or climate justice. What we’re left with is an anti-science, anti-speech, and just plain dangerous fit of panic, with untold impacts on the most vulnerable communities.

The good news is technologists, academics, librarians, and open access organizations rushed to action to preserve and archive the information once contained on these sites. While the memo’s deadline has passed, these efforts are ongoing and you can still help.

Fighting Back

New administrations often revise government pages to reflect new policies, though they are usually archived, not erased. These takedowns are alarming because they go beyond the usual changes in power and could deprive the public of vital information, including scientific research in areas ranging from life-saving medicine to the deadly impacts of climate change.

To help mitigate the damage, institutions like the Internet Archive provide essential tools to fight these memory holes, such as their “End of Term” archives, which include public-facing websites (.gov, .mil, etc.) in the Legislative, Executive, and Judicial branches of the government. But anyone can use the Wayback Machine for other sites and pages: if you have something that needs archiving, you can easily do so here. Submitted links will be backed up and can be compared to previous versions of the site. Even if you do not have direct access to a website's full backup or database, saving the content of a page can often be enough to restore it later. While the Wayback archive is surprisingly extensive, some sites or changes still slip through the cracks, so it is always worth submitting them to be sure the archive is complete.
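
For readers who want to script this rather than paste URLs by hand, below is a minimal sketch, assuming Python with the requests library installed, that asks the Wayback Machine’s public “Save Page Now” endpoint (https://web.archive.org/save/<url>) to capture a page. The helper name and error handling are illustrative, not part of any official client, and heavy automated use should respect archive.org’s rate limits:

```python
import requests

def save_to_wayback(page_url: str) -> str:
    """Ask the Wayback Machine's Save Page Now endpoint to capture a page.

    Returns the URL of the final response, which after a successful capture
    generally points at the new snapshot. Raises on HTTP errors.
    """
    response = requests.get(
        f"https://web.archive.org/save/{page_url}",
        timeout=120,  # captures of large or slow pages can take a while
    )
    response.raise_for_status()
    return response.url

if __name__ == "__main__":
    print(save_to_wayback("https://www.cdc.gov/"))
```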

Academics are also in a unique position to protect established science and the historical record contained in this public data. The Library Innovation Lab at Harvard Law School, for example, has been preserving websites for courts and law journals, including hundreds of thousands of valuable datasets from data.gov, government git repositories, and more. The initiative is also building new open-source tools so that others can make verifiable backups.

The impact of these executive orders goes beyond public-facing website content. The CDC, affected by both executive orders, also hosts vital scientific research data. If someone from the CDC were interested in backing up vital scientific research that isn’t public-facing, there are other roadmaps as well. Sci-Hub, a project that provides free and unrestricted access to some 85 million scientific articles, was kept alive by individuals downloading and seeding 850 torrents containing Sci-Hub’s 77 TB library. A community of “data hoarders,” independent archivists who declare a “rescue target” and build a “rescue team” of storage and seeders, is also archiving public datasets, like those formerly available at data.cdc.gov, which were not saved in the Internet Archive’s End of Term archive.

Dedicating time to salvaging and uploading critical data before it goes dark, and to rehosting it later, is not for everyone, but it is an important way to fight back against these kinds of takedowns.

Maintaining Support for Open Information

This widespread deletion of information is one of the reasons EFF is particularly concerned with government-mandated censorship in any context: it can be extremely difficult to know how exactly to comply, and it’s often easier to broadly remove huge swaths of information than to risk punishment. When inconvenient truths and inconvenient identities are rooted out, untold harm is done to the people most removed from power, and everyone’s well-being is diminished.

Proponents of open information have won hard-fought censorship battles in the past, and those victories helped create the tools and infrastructure needed to protect us in this moment. The global collaborative efforts afforded by digital technology mean the internet rarely forgets, thanks to the tireless work of institutions, communities, and individuals in the face of powerful and erratic censors.

We appreciate those who have stepped in. These groups need constant support, especially our allies whose work has been threatened, and so EFF will continue to advocate both for their efforts and for policies that protect progress, research, and open information.

Alexis Hancock

European Commission Gets Dinged for Unlawful Data Transfer, Sending a Big Message About Accountability

2 months 3 weeks ago

The European Commission was caught failing to comply with its own data protection regulations and, in a first, ordered to pay damages to a user for the violation. The €400 ($415) award may be tiny compared to fines levied against Big Tech by European authorities, but it’s still a win for users and considerably more than just a blip for the “talk about embarrassing” file at the commission.

The case, Bindl v. EC, underscores the principle that when people’s data is lost, stolen, or shared without promised safeguards—which can lead to identity theft, cause uncertainty about who has access to the data and for what purpose, or place our names and personal preferences in the hands of data brokers—they’ve been harmed and have the right to hold those responsible accountable and seek damages.

Some corporations, courts, and lawmakers in the U.S. need to learn a thing or two about this principle. Victims of data breaches are subject to anxiety and panic that their social security numbers and other personal information, even their passport numbers, are being bought and sold on the dark web to criminals who will use the information to drain their bank accounts or demand a ransom not to.

But when victims try to go to court, the companies that failed to protect their data in the first place sometimes say tough luck—unless you actually lose money, they say you’re not really harmed and can’t sue. And courts in many cases go along with this.

The EC debacle arose when a German citizen using the commission’s website to register for a conference was offered the option to sign in using Facebook, which he did—a common practice that, surprise, surprise, can and does give U.S.-based Facebook access to registrants’ personal information.

Here’s the problem: In the EU, the General Data Protection Regulation (GDPR), a comprehensive and far-reaching data privacy law that came into effect in 2018, and a related law that applies to EU institutions, Regulation (EU) 2018/1725, require entities that handle personal data to abide by certain rules for collecting and transferring it. They must, for instance, ensure that transfers of someone’s personal information, such as their IP address, to countries outside the EU are adequately protected.

The GDPR also gives users significant control over their data, such as requiring data processors to obtain users’ clear consent to handle their personal data and allowing users to seek compensation if their privacy rights are infringed—although the regulation is silent on how damages should be assessed.

In what it called a “sufficiently serious breach,” a condition for awarding damages, the European General Court, which hears actions against EU institutions, found that the EC violated EU privacy protections by facilitating in 2022 the transfer of German citizen Thomas Bindl’s IP address and other personal data to Meta, owner of Facebook. The transfer was unlawful because there were no agreements at the time that adequately protected EU users’ data from U.S. government surveillance and weak data privacy laws.

“…personal data may be transferred to a third country or to an international organisation only if the controller or processor has provided appropriate safeguards, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available,” the court said. “In the present case, the Commission has neither demonstrated nor claimed that there was an appropriate safeguard, in particular a standard data protection clause or contractual clause…”

(The EC in 2023 adopted the EU-US Data Privacy Framework to facilitate mechanisms for personal data transfers between the U.S. and EU states, Great Britain, and Switzerland, with protections that are supposed to be consistent with EU, UK, and Swiss law and limit U.S. intelligence services’ access to personal data transferred to America.)

Bindl sought compensation for non-material—that is, not involving direct financial loss—damages because the transfer caused him to lose control of his data and deprived him of his rights and freedoms.

Applying standards it had set in a data mishandling case from Austria involving non-material damage claims, the court said he was entitled to such damages because the commission had violated the GDPR-like regulation 2018/1725 and the damages he suffered were caused by the infringement.

Importantly, the court specified that the right to compensation doesn’t hinge on an assessment of whether the harms are serious enough to take to court, a condition that some EU member state courts have used to dismiss non-material damage claims.

Rather, it was enough that the data transfer put Bindl “in a position of some uncertainty as regards the processing of his personal data, in particular of his IP address,” the court said. This is a criterion that could benefit other plaintiffs seeking non-material damages for the mishandling of their data, said Tilman Herbrich, Bindl’s attorney.

Noting the ease with which IP addresses can be used to connect a person to an existing online profile and exploit their data, Bindl, in conversation with The International Association of Privacy Professionals (IAPP), said “it’s totally clear that this was more than just this tiny little piece of IP address, where people even tend to argue whether its PII (personal identifiable information) or not.”  Bindl is the founder of EuGD European Society for Data Protection, a Munich-based litigation funder that supports complainants in data protection lawsuits.

The court’s decision recognizes that losing control of your data causes real non-material harm, and shines a light on why people are entitled to seek compensation for emotional damage, probably without the need to demonstrate a minimum threshold of damage.

EFF has stood up for this principle in U.S. courts against corporate giants who—after data thieves penetrate their inadequate security systems, exposing millions of people’s private information—claim in court that victims haven’t really been injured unless they can prove a specific economic harm on top of the obvious privacy harm.

In fact, negligent data breaches inflict grievous privacy harms in and of themselves, and so the victims have “standing” to sue in federal court—without the need to prove more.

Once data has been disclosed, it is often pooled with other information, some gathered consensually and legally and some gathered from other data breaches or through other illicit means. That pooled information is then used to create inferences about the affected individuals for purposes of targeted advertising, various kinds of risk evaluation, identity theft, and more.

In the EU, the Bindl case could bring more legal certainty to individuals and companies about damages for data protection violations and perhaps open the door to collective-action lawsuits. To the extent that the case was brought to determine whether the EC follows its own rules, the outcome was decisive.

The commission “should set the standard in terms of implementation of how they are doing it,” Bindl said. “If anyone is looking at somebody who is doing it perfectly right, it should be the commission, right?”

 

Karen Gullo

Key Issues Shaping State-Level Tech Policy

2 months 3 weeks ago

We’re taking a moment to reflect on the 2024 state legislative session and what it means for the future of digital rights at the state level. Informed by insights from the State of State Technology Policy 2024 report by NYU’s Center on Technology Policy and EFF’s own advocacy work in state legislatures, this blog breaks down the key issues (Privacy, Children’s Online Safety, Artificial Intelligence, Competition, Broadband and Net Neutrality, and Right to Repair), taking a look back at last year’s developments while also offering a preview of the challenges and trends we can expect in state-level tech policy in the years ahead. 

To jump ahead to a specific issue, you can click on the hyperlinks below: 

Privacy

Children’s Online Safety and Age Verification

Artificial Intelligence

Competition

Broadband and Net Neutrality

Right to Repair

Privacy

State privacy legislation saw notable developments in 2024, with Maryland adopting a stronger privacy law that includes enhanced protections, such as prohibiting targeted advertising to teens, requiring opt-in consent to process health data, and broadening the definition of sensitive data to include location data. This places Maryland’s law ahead of similar measures in other states. In total, seven states—Kentucky, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Rhode Island—joined the ranks of states with comprehensive privacy laws last year, regulating the practices of private companies that collect, store, and process personal data. This expands on the 12 states that had already passed similar legislation in previous years (for a total of 19). Additionally, several of the laws passed in previous years went into effect in 2024.

In 2025, states are expected to continue enacting privacy laws based on the flawed Washington Privacy Act model, though states like Maryland have set a new standard. We still believe these bills must be stronger. States will likely also take the lead in pursuing issue-specific privacy laws covering genetic, biometric, location, and health data, filling gaps where federal action is unlikely (or likely to be weakened by business pressure).

Private Right of Action

A key issue in privacy regulation remains the debate over a private right of action (PRA), which would allow individuals to sue companies for privacy violations and is one of EFF’s main recommendations for comprehensive consumer privacy legislation. Strong enforcement sits at the top of EFF’s recommendations for privacy bills for good reason. A report from EPIC and the U.S. PIRG Education Fund highlighted that many state privacy laws provide minimal consumer protections, largely due to the absence of private rights of action. Without a PRA, companies are often not held accountable for violations unless state or federal regulators take action, which is both slow and inconsistent. This leaves consumers vulnerable and powerless, unable to directly seek recourse for harm done to their privacy. Unless companies face serious consequences for violating our privacy, they’re unlikely to put our privacy ahead of their profits.

While the California Consumer Privacy Act (CCPA) includes a limited PRA in cases of a “personal information security breach” only, it is alarming that no new comprehensive laws passed in 2023 or 2024 included a PRA. This reluctance to support a PRA reveals how businesses resist the kind of accountability that would force them to be more transparent and responsible with consumer data. Vermont’s 2024 comprehensive consumer privacy bill proposed a PRA in its bill language. Unfortunately, that bill was vetoed by Gov. Phil Scott, demonstrating how powerful corporate interests can undermine consumer rights for the sake of their own convenience.

Consumer Privacy and Government Records

The comprehensive consumer privacy legislation outlined above primarily focuses on regulating the practices of private companies that collect, store, and process personal data. However, these laws do not address the handling of personal information by government entities at the state and local levels. Strong legislation is essential for protecting data held by these public agencies, as government records can contain sensitive and comprehensive personal information. For example, local governments may store data on residents’ health records, criminal history, or education. This sensitive data, if mishandled or exposed, can lead to significant privacy breaches. A case in point is when local police departments share facial recognition or automated license plate reader (ALPR) data, raising privacy concerns about unauthorized surveillance and misuse. As tensions rise between federal, state, and local governments, there will be greater focus on data sharing between these entities, increasing the likelihood of the introduction of new laws to protect that data.

A notable example of the need for such legislation is California’s Information Practices Act (IPA) of 1977, which sets privacy guidelines for state agencies. The IPA limits the collection, maintenance, and dissemination of personal information by California state agencies, including sensitive data such as medical records. However, the IPA excludes local governments from these privacy protections, meaning counties and municipalities—which also collect vast amounts of personal data—are not held to the same standards. This gap leaves many individuals without privacy safeguards at the local government level, highlighting the need for stronger and more inclusive privacy legislation that addresses the data practices of both state and local entities, even beyond California.

Right to Delete and DELETE Act

Data brokers are a major issue when it comes to the irresponsible handling of our personal information. These companies gather vast amounts of personal data and sell it with minimal oversight, often including highly sensitive details like purchasing habits, financial records, social media activity, and precise location tracking. The unregulated trade of this information opens the door to scams, identity theft, and financial exploitation, as individuals become vulnerable to misuse of their private data. This is why EFF supported the California “DELETE Act” in 2023, which allows people to easily and efficiently make one request to delete their personal information held by all data brokers. The law went into effect in 2024, and the deletion mechanism is expected by January 2026—marking a significant step in consumer privacy rights. 

Consumers in 19 states have a right to request that companies delete information collected about them, and these states represent the growing trend to expand consumer rights regarding personal data. However, because the “right to delete” in comprehensive privacy laws requires people to file requests with each individual data broker that may have their information, exercising it can be an incredibly time-consuming and tedious process. Because of this, the California DELETE Act’s “one-stop shop” is particularly notable in setting a precedent for other states. In fact, Nebraska has already introduced LB602 for the 2025 legislative session, modeled after California's law, further demonstrating the momentum for such legislation. We hope to see more states adopt similar laws, making it easier for consumers to protect their data and enforce their privacy rights.

Issue-specific Privacy Legislation

In 2024, several states passed issue-specific privacy laws addressing concerns around biometric data, genetic privacy, and health information. 

Regarding biometric privacy, Maryland, New York, Utah, and Virginia imposed restrictions on the use of biometric identifying technologies by law enforcement, with Maryland specifically limiting facial recognition technology in criminal proceedings to certain high-crime investigations and Utah requiring a court order for any police use of biometrics, unless a public safety threat is present. 

Conversely, states like Oklahoma and Florida expanded law enforcement use of biometric data, with Oklahoma mandating biometric data collection from undocumented immigrants, and Florida allocating nearly $12 million to enhance its biometric identification technology for police. 

In the realm of genetic information privacy, Alabama and Nebraska joined 11 other states by passing laws that require direct-to-consumer genetic testing companies to disclose their data policies and implement robust security measures. These companies must also obtain consumer consent if they intend to use genetic data for research or sell it to third parties.

Lastly, in response to concerns about the sharing of reproductive health data due to state abortion bans, several states introduced and passed location data privacy and health data privacy legislation, with more anticipated in 2025 due to heightened scrutiny of location data trackers and the evolving federal landscape surrounding reproductive rights and gender-affirming care. Among these, nineteen states have enacted shield laws to prohibit sensitive data from being disclosed for out-of-state legal proceedings involving reproductive health activities.

State shield laws vary, but most prevent state officials, including law enforcement and courts, from assisting out-of-state investigations or prosecutions of protected healthcare activities. For example, a state judge may be prohibited from enforcing an out-of-state subpoena for abortion clinic location data, or local police could be barred from aiding the extradition of a doctor facing criminal charges for performing an abortion. In 2023, EFF supported A.B. 352, which extended the protections of California's health care data privacy law to apps such as period trackers. Washington also passed the “My Health, My Data Act” that year (H.B. 1155), which, among other protections, prohibits the collection of health data without consent.

Children’s Online Safety and Age Verification

Children’s online safety emerged as a key priority for state legislatures in the last few years, with significant variations in approach between states. In 2024, some states adopted age verification laws for both social media platforms and “adult content” sites, while others concentrated on imposing design restrictions on platforms and data privacy protections. For example, California and New York both enacted laws restricting "addictive feeds,” while Florida, Mississippi, and Tennessee enacted new age verification laws to regulate young people’s access to social media and access to “sexual” content online.

None of the three states has implemented its social media age verification law, however. Courts blocked Mississippi and Tennessee from enforcing their laws, while Florida Attorney General Ashley Moody, known for aggressive enforcement of controversial laws, has chosen not to enforce the social media age verification part of the bill. She has also asked the court to pause the lawsuit against the Florida law until the U.S. Supreme Court rules on Texas's age verification law; that case concerns only the “sexual content” provisions and does not address social media age checks.

In 2025, we hope to see a continued trend toward strengthening privacy protections for young people (and adults alike). Unfortunately, we also expect state legislatures to continue refining and expanding age verification and “addictive platform” regulation for social media platforms, as well as laws covering so-called “materials harmful to minors,” with ongoing legal challenges shaping the landscape.

Targeted Advertising and Children 

In response to the growing concerns over data privacy and advertising, Louisiana banned the targeting of ads to minors. Seven other states also enacted comprehensive privacy laws requiring platforms to obtain explicit consent from minors before collecting or processing their data. Colorado, Maryland, New York, and Virginia went further, extending existing privacy protections with stricter rules on data minimization and requiring impact assessments for heightened risks to children's data.

Artificial Intelligence

2024 marked a major milestone in AI regulation, with Colorado becoming the first state to pass what many regard as comprehensive AI legislation. The law requires both developers and deployers of high-risk AI systems to implement impact assessments and risk management frameworks to protect consumers from algorithmic discrimination. Other states, such as Texas, Connecticut, and Virginia, have already begun to follow suit in the 2025 legislative session, and lawmakers in many states are discussing similar bills.

However, not all AI-related legislation has been met with consensus. One of the most controversial bills has been California’s S.B. 1047, which aimed to regulate AI models that might have "catastrophic" effects. While EFF supported some aspects of the bill—like the creation of a public cloud-computing cluster (CalCompute)—we were concerned that it focused too heavily on speculative, long-term catastrophic outcomes, such as machines going rogue, instead of addressing the immediate, real-world harms posed by AI systems. We believe lawmakers should focus on creating regulations that address actual, present-day risks posed by AI, rather than speculative fears of future catastrophe. After a national debate over the bill, Gov. Newsom vetoed it. Sen. Wiener has already refiled the bill.

States also continued to pass narrower AI laws targeting non-consensual intimate imagery (NCII), child sexual abuse material (CSAM), and political deepfakes during the 2024 legislative session. Given that it was an election year, the debate over the use of AI to manipulate political campaigns also escalated. Fifteen states now require political advertisers to disclose the use of generative AI in ads, with some, like California and Mississippi, going further by banning deceptive uses of AI in political ads. Legal challenges, including one in California, will likely continue to shape the future of AI regulations in political discourse.

More states are expected to introduce and debate comprehensive AI legislation based on Colorado’s model this year, as well as narrower AI bills, especially on issues like NCII deepfakes, and AI-generated CSAM. The legal and regulatory landscape for AI in political ads will continue to evolve, with further lawsuits and potential new legislation expected in 2025.

Lastly, it’s also important to recognize that states and local governments themselves are major technology users. Their procurement and use of emerging technologies, such as AI and facial recognition, is itself a form of tech policy. As such, we can expect states to introduce legislation around the adoption of these technologies by government agencies, likely focusing on setting clear standards and ensuring transparency in how these technologies are deployed. 

Competition

On the competition front, several states, including New York and California, made efforts to strengthen antitrust laws and tackle monopolistic practices in Big Tech. While progress was slow, New York's Twenty-First Century Antitrust Act aimed to create a stricter antitrust framework, and the California Law Revision Commission’s ongoing review of the Cartwright Act could lead to modernized recommendations in 2025. Delaware also passed SB 296, which amends the state’s antitrust law to allow a private right of action. 

Despite the shifts in federal enforcement, bipartisan concerns about the influence of tech companies will likely ensure that state-level antitrust efforts continue to play a critical role in regulating corporate power.

Broadband and Net Neutrality

As federal efforts to regulate broadband and net neutrality have stalled, many states have taken matters into their own hands. California, Washington, Oregon, and Vermont have already passed state-level net neutrality laws aimed at preventing internet service providers (ISPs) from blocking, throttling, or prioritizing certain content or services for financial gain. With the growing frustration over the federal government’s inaction on net neutrality, more states are likely to carry the baton in 2025. 

States will continue to play an increasingly critical role in protecting consumers' online freedoms and ensuring that broadband access remains affordable and equitable. This is especially true as more communities push for expanded broadband access and better infrastructure.

Right to Repair

Another key tech issue gaining traction in state legislatures is the Right to Repair. In 2024, California’s and Minnesota’s right-to-repair laws went into effect, granting consumers the right to repair their electronics and devices independently or through third-party repair services. These laws require manufacturers of devices like smartphones, laptops, and other electronics to provide repair parts, tools, and manuals to consumers and repair shops. Oregon and Colorado also passed similar legislation in 2024.

States will likely continue to pass right-to-repair legislation in 2025, with advocates expecting between 25 to 30 bills to be introduced across the country. These bills will likely expand on existing laws to include more products, from wheelchairs to home appliances and agricultural equipment. As public awareness of the benefits of the Right to Repair grows, legislators will be under increasing pressure to support consumer rights, promote environmental sustainability, and combat planned obsolescence.

Looking Ahead to the Future of State-Level Digital Rights

As we reflect on the 2024 state legislative session and look forward to the challenges and opportunities of 2025, it’s clear that state lawmakers will continue to play a pivotal role in shaping the future of digital rights. From privacy protections to AI regulation, broadband access, and the right to repair, state-level policies are crucial to safeguarding consumer rights, promoting fairness, and fostering innovation.

As we enter the 2025 legislative session, it’s vital that we continue to push for stronger policies that empower consumers and protect their digital rights. The future of digital rights depends on the actions we take today. Whether it’s expanding privacy protections, ensuring fair competition, or passing comprehensive right-to-repair laws, now is the time to push for change.

Join us in holding your state lawmakers accountable and pushing for policies that ensure digital rights for all.

Rindala Alajaji

How State Tech Policies in 2024 Set the Stage for 2025

2 months 3 weeks ago

EFF has been at the forefront of defending civil liberties in the digital age, with our activism team working across state, federal, and local levels to safeguard everyone's rights in the rapidly evolving tech landscape. As federal action on technology policy often lags, many are looking to state governments to lead the way in addressing tech-related issues. 

Drawing insights from the State of State Technology Policy 2024 report by NYU’s Center on Technology Policy and EFF's own experiences advocating in state legislatures, this blog offers a breakdown on why you should care about state policy, the number of bills passed around the country, and a look forward to the coming challenges and trends in state-level tech policy.

Why Should You Care?

State governments are increasingly becoming key players in tech policy, moving much faster than the federal government. This became especially apparent in 2024, when states enacted significantly more legislation regulating technology than in previous years.

“Why?” you may ask. In 2024, state legislatures were the most partisan they have been in decades, with a notable increase in the number of "trifecta" governments—states where one political party controls both chambers of the legislature and the governorship. With this unified control, states can pass laws more easily and quickly.

Forty states operated under such single-party rule in 2024, the most in at least three decades. Among the 40 trifecta states, 29 also had veto-proof supermajorities, meaning legislation can pass regardless of gubernatorial opposition. This overwhelming single-party control helped push through new tech regulations, with the Center on Technology Policy reporting that 89 percent of all tech-related bills passed in trifecta states. Even with shifts in the 2024 elections, in which at least two states—Michigan and Minnesota—lost their trifectas, the trend of state governments driving technology policy is unlikely to slow down anytime soon.

2024 in Numbers: A Historic Year for State Tech Policy

According to the State of State Technology Policy 2024 report by NYU’s Center on Technology Policy:

  • 238 technology-related bills passed across 46 states, marking a 163% increase from the previous year.
  • 20 states passed 28 privacy-related bills, including 7 states enacting laws similar to the industry-supported Washington Privacy Act.
  • 18 states passed laws regulating biometric data, with 2 states introducing genetic privacy protections.
  • 23 states passed 48 laws focused on “online child safety,” primarily targeting age verification for adult content and regulating social media.
  • 41 states passed 107 bills regulating AI.
  • 22 states passed laws addressing Non-Consensual Intimate Images (NCII) and child sexual abuse material (CSAM) generated or altered by AI or digital means.
  • 17 states enacted 22 laws regulating the use of generative AI in political campaigns.
  • 6 states created 19 new commissions, task forces, and legislative committees to assess the impact of AI and explore its regulation or beneficial use. For example, California created a working group to guide the safe use of AI in education.
  • 15 states passed 18 bills related to funding AI research or initiatives. For example, Nebraska allocated funds to explore how AI can assist individuals with dyslexia.
  • 3 states made incremental changes to antitrust laws, while 6 states joined federal regulators in pursuing 6 significant cases against tech companies for anticompetitive practices.
  • California passed the most tech-related legislation in 2024, with 26 bills, followed by Utah, which passed 13 bills.

Looking Ahead: What to Expect in 2025

2025 will be a critical year for state tech policy, and we expect several trends to persist: state governments will continue to prioritize technology policy, leveraging their political compositions to enact new laws faster than the federal government. We expect state legislatures to continue ongoing efforts to regulate AI, online child safety, and other pressing issues, taking a proactive role in shaping the future of tech regulation. We should also recognize that states and local governments are technology users themselves, and that their procurement and use of technology is itself a form of tech policy. States are likely to introduce legislation around the procurement and use of emerging technologies like AI and facial recognition by government agencies, aiming to set clear standards and ensure transparency in their adoption—an issue EFF plans to monitor and address in more detail in future blog posts and resources. Finally, legislative priorities will be influenced by federal inaction or shifts in policy, as states step in to fill gaps and drive national discussions on digital rights.

Much depends on the direction of federal leadership. Some states may push forward with their own tech regulations. Others may hold off, waiting for federal action. We might also see some states act as a counterbalance to federal efforts, particularly in areas like platform content moderation and data privacy, where the federal government could potentially impose restrictive policies. 

For a deep dive on how the major tech issues fared in 2024 and our expectations for 2025, check out our blog post: Key Issues Shaping State-Level Tech Policy.

EFF will continue to be at the forefront, working alongside lawmakers and advocacy partners to ensure that digital rights remain a priority in state legislatures. As state lawmakers take on critical issues like privacy protections and facial recognition technology, we’ll be there to help guide these conversations and promote policies that address real-world harms. 

We encourage our supporters to join us in these efforts—your voice and activism are crucial in shaping a future where tech serves the public good, not just corporate interests. To stay informed about ongoing state-level tech policy and to learn how you can get involved, follow EFF’s updates and continue championing digital rights with us. 

Rindala Alajaji

Open Licensing Promotes Culture and Learning. That's Why EFF Is Upgrading its Creative Commons Licenses.

2 months 3 weeks ago

At EFF, we’re big fans of the Creative Commons project, which makes copyright work in empowering ways for people who want to share their work widely. EFF uses Creative Commons licenses on nearly all of our public communications. To highlight the importance of open licensing as a tool for building a shared culture, we are upgrading the license on our website to the latest version, Creative Commons Attribution 4.0 International.

Open licenses like Creative Commons are an important tool for sharing culture and learning. They allow artists and creators a simple way to encourage widespread, free distribution of their work while keeping just the rights they want for themselves—such as the right to be credited as the work’s author, the right to modify the work, or the right to control commercial uses.

Without tools like Creative Commons, copyright is frequently a roadblock to sharing and preserving culture. Copyright is ubiquitous, applying automatically to most kinds of creative work from the moment they are “fixed in a tangible medium.” Copyright carries draconian penalties unknown in most areas of U.S. law, like “statutory damages” with no proof of harm and the possibility of having to pay the rightsholder’s attorney fees. And it can be hard to learn who owns a copyright in any given work, given that copyrights can last a century or more. All of these make it risky and expensive to share and re-use creative works, or sometimes even to preserve them and make them accessible to future generations.

Open licensing helps culture and learning flourish. With many millions of works now available under Creative Commons licenses, creators and knowledge-seekers have reassurance that these works of culture and learning can be freely shared and built upon without risk.

The current suite of Creative Commons licenses has thoughtful, powerful features. It’s written to work effectively in many countries, using language that can be understood in the context of different copyright laws around the world. It addresses legal regimes other than copyright that can interfere with free re-use of creative materials, like database rights, anti-circumvention laws, and rights of publicity or personality.

And importantly, the 4.0 licenses also make clear that giving credit to the author (something all of the Creative Commons licenses require) can be done in various ways, and that technical failures don't expose users to lawsuits by copyright trolls.

At EFF, we want our work to be seen and shared widely. That’s why we’ve made our content available under Creative Commons licenses for many years. Today, in that spirit, we are updating the license for most materials on our website, www.eff.org, to Creative Commons Attribution 4.0 International.

Mitch Stoltz