The Legal Case Against Ring’s Face Recognition Feature

21 hours 7 minutes ago

Amazon Ring’s upcoming face recognition tool has the potential to violate the privacy rights of millions of people and could result in Amazon breaking state biometric privacy laws.

Ring plans to introduce a feature to its home surveillance cameras called “Familiar Faces,” to identify specific people who come into view of the camera. When turned on, the feature will scan the faces of all people who approach the camera to try and find a match with a list of pre-saved faces. This will include many people who have not consented to a face scan, including friends and family, political canvassers, postal workers, delivery drivers, children selling cookies, or maybe even some people passing on the sidewalk.

Many biometric privacy laws across the country are clear: Companies need your affirmative consent before running face recognition on you. In at least one state, ordinary people, with the help of attorneys, can challenge Amazon’s data collection. Where that is not possible, state privacy regulators should step in.

Sen. Ed Markey (D-Mass.) has already called on Amazon to abandon its plans and sent the company a list of questions. Ring spokesperson Emma Daniels answered written questions posed by EFF, which can be viewed here.

What is Ring’s “Familiar Faces”?

Amazon describes “Familiar Faces” as a tool that “intelligently recognizes familiar people.” It says this tool will provide camera owners with “personalized context of who is detected, eliminating guesswork and making it effortless to find and review important moments involving specific familiar people.” Amazon plans to release the feature in December.

The feature will allow camera owners to tag particular people so Ring cameras can automatically recognize them in the future. In order for Amazon to recognize particular people, it will need to perform face recognition on every person who steps in front of the camera. Even if a camera owner does not tag a particular face, Amazon says it may retain that biometric information for up to six months. Amazon says it does not currently use the biometric data for “model training or algorithmic purposes.”

In order to biometrically identify you, a company typically will take your image and extract a faceprint by taking tiny measurements of your face and converting that into a series of numbers that is saved for later. When you step in front of a camera again, the company takes a new faceprint and compares it to a list of previous prints to find a match. Other forms of biometric tracking can be done with a scan of your fingertip, eyeball, or even your particular gait.
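
As a rough illustration of that matching step (a hypothetical sketch, not Ring's or any vendor's actual system), assume an embedding model has already reduced each face to a fixed-length list of numbers; recognition then comes down to finding the closest saved faceprint and checking whether it is close enough. The function names and the 0.6 threshold here are invented for the example.

```python
# Illustrative sketch only. It assumes a separate model has already converted
# each face image into a fixed-length list of numbers (a "faceprint").
import math

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two faceprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_match(new_print: list[float],
               saved_prints: dict[str, list[float]],
               threshold: float = 0.6) -> str | None:
    """Compare a freshly captured faceprint against every saved one."""
    best_name, best_dist = None, float("inf")
    for name, saved in saved_prints.items():
        d = distance(new_print, saved)
        if d < best_dist:
            best_name, best_dist = name, d
    # A match is declared only if the closest saved print is close enough.
    return best_name if best_dist < threshold else None
```

The key point for privacy purposes is that the comparison step requires extracting a faceprint from everyone who appears, whether or not they end up matching a saved face.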

Amazon has told reporters that the feature will be off by default and that it would be unavailable in certain jurisdictions with the most active biometric privacy enforcement—including the states of Illinois and Texas, and the city of Portland, Oregon. The company would not promise that this feature will remain off by default in the future.

Why is This a Privacy Problem?

Your biometric data, such as your faceprint, is among the most sensitive information a company can collect. Associated risks include mass surveillance, data breaches, and discrimination.

Today’s feature to recognize your friend at your front door can easily be repurposed tomorrow for mass surveillance. Ring’s close partnership with police amplifies that threat. For example, in a city dense with face recognition cameras, the entirety of a person’s movements could be tracked with the click of a button, or all people could be identified at a particular location. A recent and unrelated public-private partnership in New Orleans unfortunately shows that mass surveillance through face recognition is not some far-flung concern.

Amazon has already announced a related tool called “search party” that can identify and track lost dogs using neighbors’ cameras. A tool like this could be repurposed for law enforcement to track people. At least for now, Amazon says it does not have the technical capability to comply with law enforcement demands for a list of all cameras in which a person has been identified. It does, however, comply with other law enforcement demands.

In addition, data breaches are a perpetual concern with any data collection. Biometrics magnify that risk because your face cannot be reset, unlike a password or credit card number. Amazon says it processes and stores biometrics collected by Ring cameras on its own servers, and that it uses comprehensive security measures to protect the data.

Face recognition has also been shown to have higher error rates with certain groups—most prominently with dark-skinned women. Similar technology has also been used to make questionable guesses about a person’s emotions, age, and gender.

Will Ring’s “Familiar Faces” Violate State Biometric Laws?

Any Ring collection of biometric information in states that require opt-in consent poses huge legal risk for the company. Amazon already told reporters that the feature will not be available in Illinois and Texas—strongly suggesting its feature could not survive legal scrutiny there. The company said it is also avoiding Portland, Oregon, which has a biometric privacy law that similar companies have avoided.

Its “Familiar Faces” feature will necessarily require its cameras to collect a faceprint from every person who comes into view of an enabled camera, to try to find a match. It is impossible for Amazon to obtain consent from everyone—especially people who do not own Ring cameras. It appears that Amazon will try to unload some consent requirements onto individual camera owners themselves. Amazon says it will provide in-app messages to customers, reminding them to comply with applicable laws. But Amazon—as a company itself collecting, processing, and storing this biometric data—could have its own consent obligations under numerous laws.

Lawsuits against similar features highlight Amazon’s legal risks. In Texas, Google paid $1.375 billion to settle a lawsuit that alleged, among other things, that Google’s Nest cameras "indiscriminately capture the face geometry of any Texan who happens to come into view, including non-users." In Illinois, Facebook paid $650 million and shut down its face recognition tools that automatically scanned Facebook photos—even the faces of non-Facebook users—in order to identify people to recommend tagging. Later, Meta paid another $1.4 billion to settle a similar suit in Texas.

Many states aside from Illinois and Texas now protect biometric data. Washington passed a biometric privacy law in 2017, though the state has never enforced it. In 2023, the state passed an even stronger law protecting biometric privacy, one that allows individuals to sue on their own behalf. And at least 16 states have recently passed comprehensive privacy laws that often require companies to obtain opt-in consent for the collection of sensitive data, which typically includes biometric data. For example, in Colorado, a company that jointly with others determines the purpose and means of processing biometric data must obtain consent. Maryland goes further: such companies are essentially prohibited from collecting or processing biometric data from bystanders.

Many of these comprehensive laws have numerous loopholes and can only be enforced by state regulators—a glaring weakness facilitated in part by Amazon lobbyists.

Nonetheless, Ring’s new feature provides regulators a clear opportunity to step up to investigate, protect people’s privacy, and test the strength of their laws.

Mario Trujillo

License Plate Surveillance Logs Reveal Racist Policing Against Romani People

23 hours 29 minutes ago

More than 80 law enforcement agencies across the United States have used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety automated license plate reader (ALPR) network, according to audit logs obtained and analyzed by the Electronic Frontier Foundation. 

When police run a search through the Flock Safety network, which links thousands of ALPR systems, they are prompted to leave a reason and/or case number for the search. Between June 2024 and October 2025, cops performed hundreds of searches for license plates using terms such as "roma" and "g*psy," and in many instances, without any mention of a suspected crime. Other uses include "g*psy vehicle," "g*psy group," "possible g*psy," "roma traveler" and "g*psy ruse," perpetuating systemic harm by demeaning individuals based on their race or ethnicity. 

These queries were run through thousands of police departments' systems—and it appears that none of these agencies flagged the searches as inappropriate. 

These searches are, by definition, racist. 

Word Choices and Flock Searches 

We are using the terms "Roma" and “Romani people” as umbrella terms, recognizing that they represent different but related groups. Since 2020, the U.S. federal government has officially recognized "Anti-Roma Racism" as including behaviors such as "stereotyping Roma as persons who engage in criminal behavior" and using the slur "g*psy." According to the U.S. Department of State, this language “leads to the treatment of Roma as an alleged alien group and associates them with a series of pejorative stereotypes and distorted images that represent a specific form of racism.” 

Nevertheless, police officers have run hundreds of searches for license plates using the terms "roma" and "g*psy." (Unlike the police ALPR queries we’ve uncovered, we substitute an asterisk for the Y to avoid repeating this racist slur). In many cases, these terms have been used on their own, with no mention of crime. In other cases, the terms have been used in contexts like "g*psy scam" and "roma burglary," when ethnicity should have no relevance to how a crime is investigated or prosecuted. 

A “g*psy scam” and “roma burglary” do not exist in criminal law separate from any other type of fraud or burglary. Several agencies contacted by EFF have since acknowledged the inappropriate use and expressed efforts to address the issue internally. 

"The use of the term does not reflect the values or expected practices of our department," a representative of the Palos Heights (IL) Police Department wrote to EFF after being confronted with two dozen searches involving the term "g*psy." "We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language."

Of course, the broader issue is that allowing "g*psy" or "Roma" as a reason for a search isn't just offensive; it implies the criminalization of an ethnic group. In fact, the Grand Prairie Police Department in Texas searched for "g*psy" six times while using Flock's "Convoy" feature, which allows an agency to identify vehicles traveling together—in essence targeting an entire traveling community of Roma without specifying a crime. 

At the bottom of this post is a list of agencies and the terms they used when searching the Flock system. 

Anti-Roma Racism in an Age of Surveillance 

Racism against Romani people has been a problem for centuries, with one of its most horrific manifestations coming during the Holocaust, when the Third Reich and its allies perpetrated genocide by murdering hundreds of thousands of Romani people and sterilizing thousands more. Despite efforts by the UN and EU to combat anti-Roma discrimination, this form of racism persists. As scholars Margareta Matache and Mary T. Bassett explain, it is perpetuated by modern American policing practices: 

In recent years, police departments have set up task forces specialised in “G*psy crimes”, appointed “G*psy crime” detectives, and organised police training courses on “G*psy criminality”. The National Association of Bunco Investigators (NABI), an organisation of law enforcement professionals focusing on “non-traditional organised crime”, has even created a database of individuals arrested or suspected of criminal activity, which clearly marked those who were Roma.

Thus, it is no surprise that a 2020 Harvard University survey of Romani Americans found that 4 out of 10 respondents reported being subjected to racial profiling by police. This demonstrates the ongoing challenges they face due to systemic racism and biased policing. 

Notably, many police agencies using surveillance technologies like ALPRs have adopted some sort of basic policy against biased policing or the use of these systems to target people based on race or ethnicity. But even when such policies are in place, an agency’s failure to enforce them allows these discriminatory practices to persist. These searches were also run through the systems of thousands of other police departments that may have their own policies and state laws that prohibit bias-based policing—yet none of those agencies appeared to have flagged the searches as inappropriate. 

The Flock search data in question here shows that surveillance technology exacerbates racism, and even well-meaning policies to address bias can quickly fall apart without proper oversight and accountability. 

Cops In Their Own Words

EFF reached out to a sample of the police departments that ran these searches. Here are five representative responses we received from police departments in Illinois, California, and Virginia. They do not inspire confidence.

1. Lake County Sheriff's Office, IL 

In June 2025, the Lake County Sheriff's Office ran three searches for a dark-colored pick-up truck, using the reason: "G*PSY Scam." The searches covered 1,233 networks, representing 14,467 different ALPR devices. 

In response to EFF, a sheriff's representative wrote via email:

“Thank you for reaching out and for bringing this to our attention.  We certainly understand your concern regarding the use of that terminology, which we do not condone or support, and we want to assure you that we are looking into the matter.

Any sort of discriminatory practice is strictly prohibited at our organization. If you have the time to take a look at our commitment to the community and our strong relationship with the community, I firmly believe you will see discrimination is not tolerated and is quite frankly repudiated by those serving in our organization. 

We appreciate you bringing this to our attention so we can look further into this and address it.”

2. Sacramento Police Department, CA

In May 2025, the Sacramento Police Department ran six searches using the term "g*psy." The searches covered 468 networks, representing 12,885 different ALPR devices. 

In response to EFF, a police representative wrote:

“Thank you again for reaching out. We looked into the searches you mentioned and were able to confirm the entries. We’ve since reminded the team to be mindful about how they document investigative reasons. The entry reflected an investigative lead, not a disparaging reference. 

We appreciate the chance to clarify.”

3. Palos Heights Police Department, IL

In September 2024, the Palos Heights Police Department ran more than two dozen searches using terms such as "g*psy vehicle," "g*psy scam" and "g*psy concrete vehicle." Most searches hit roughly 1,000 networks. 

In response to EFF, a police representative said the searches were related to a singular criminal investigation into a vehicle involved in a "suspicious circumstance/fraudulent contracting incident" and is "not indicative of a general search based on racial or ethnic profiling." However, the agency acknowledged the language was inappropriate: 

“The use of the term does not reflect the values or expected practices of our department. We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language.

We appreciate your outreach on this matter and the opportunity to provide clarification.”

4. Irvine Police Department, CA

In February and May 2025, the Irvine Police Department ran eight searches using the term "roma" in the reason field. The searches covered 1,420 networks, representing 29,364 different ALPR devices. 

In a call with EFF, an IPD representative explained that the cases were related to a series of organized thefts. However, they acknowledged the issue, saying, "I think it's an opportunity for our agency to look at those entries and to use a case number or use a different term." 

5. Fairfax County Police Department, VA

Between December 2024 and April 2025, the Fairfax County Police Department ran more than 150 searches involving terms such as "g*psy case" and "roma crew burglaries." Fairfax County PD continued to defend its use of this language.

In response to EFF, a police representative wrote:

“Thank you for your inquiry. When conducting searches in investigative databases, our detectives must use the exact case identifiers, terms, or names connected to a criminal investigation in order to properly retrieve information. These entries reflect terminology already tied to specific cases and investigative files from other agencies, not a bias or judgment about any group of people. The use of such identifiers does not reflect bias or discrimination and is not inconsistent with our Bias-Based Policing policy within our Human Relations General Order.”

A National Trend

Roma individuals and families are not the only ones being systematically and discriminatorily targeted by ALPR surveillance technologies. For example, Flock audit logs show agencies ran 400 more searches using terms targeting Traveller communities more generally, with a specific focus on Irish Travellers, often without any mention of a crime. 

Across the country, these tools are enabling and amplifying racial profiling by embedding longstanding policing biases into surveillance technologies. For example, data from Oak Park, IL, show that 84% of drivers stopped in Flock-related traffic incidents were Black—despite Black people making up only 19% of the local population. ALPR systems are far from being neutral tools for public safety and are increasingly being used to fuel discriminatory policing practices against historically marginalized people. 

The racially coded language in Flock's logs mirrors long-standing patterns of discriminatory policing. Terms like "furtive movements," "suspicious behavior," and "high crime area" have always been cited by police to try to justify stops and searches of Black, Latine, and Native communities. These phrases might not appear in official logs because they're embedded earlier in enforcement—in the traffic stop without clear cause, the undocumented stop-and-frisk, the intelligence bulletin flagging entire neighborhoods as suspect. They function invisibly until a body-worn camera, court filing, or audit brings them to light. Flock's network didn’t create racial profiling; it industrialized it, turning deeply encoded and vague language into scalable surveillance that can search thousands of cameras across state lines. 

The Path Forward

U.S. Sen. Ron Wyden, D-OR, recently recommended that local governments reevaluate their decisions to install Flock Safety in their communities. We agree, but we also understand that sometimes elected officials need to see the abuse with their own eyes first. 

We know which agencies ran these racist searches, and they should be held accountable. But we also know that the vast majority of Flock Safety's clients—thousands of police and sheriffs—also allowed those racist searches to run through their Flock Safety systems unchallenged. 

Elected officials must act decisively to address the racist policing enabled by Flock's infrastructure. First, they should demand a complete audit of all ALPR searches conducted in their jurisdiction and a review of search logs to determine (a) whether their police agencies participated in discriminatory policing and (b) what safeguards, if any, exist to prevent such abuse. Second, officials should institute immediate restrictions on data-sharing through Flock's nationwide network. As demonstrated by California law, for example, police agencies should not be able to share their ALPR data with federal authorities or out-of-state agencies, thus eliminating a vehicle for discriminatory searches spreading across state lines.

Ultimately, elected officials must terminate Flock Safety contracts entirely. The evidence is now clear: audit logs and internal policies alone cannot prevent a surveillance system from becoming a tool for racist policing. The fundamental architecture of Flock—thousands of cameras feeding into a nationwide searchable network—makes discrimination inevitable when enforcement mechanisms fail.

As Sen. Wyden astutely explained, “local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities.”

Table Overview and Notes

The following table compiles terms used by agencies to describe the reasons for searching the Flock Safety ALPR database. In a small number of cases, we removed additional information such as case numbers, specific incident details, and officers' names that were present in the reason field. 

We removed one agency from the list due to the agency indicating that the word was a person's name and not a reference to Romani people. 

In general, we did not include searches that used the term "Romanian," although many of those may also be indicative of anti-Roma bias. We also did not include uses of "traveler" or “Traveller” when it did not include a clear ethnic modifier; however, we believe many of those searches are likely relevant.  

A text-based version of the spreadsheet is available here.

Rindala Alajaji

Application Gatekeeping: An Ever-Expanding Pathway to Internet Censorship

23 hours 36 minutes ago

It’s not news that Apple and Google use their app stores to shape what apps you can and cannot have on many of your devices. What is new is more governments—including the U.S. government—using legal and extralegal tools to lean on these gatekeepers in order to assert that same control. And rather than resisting, the gatekeepers are making it easier than ever. 

Apple’s decision to take down the ICEBlock app at least partially in response to threats from the U.S. government—with Google rapidly and voluntarily following suit—was bad enough. But it pales in comparison with Google’s new program, set to launch worldwide next year, requiring developers to register with the company in order to have their apps installable on Android certified devices—including paying a fee and providing personal information backed by government-issued identification. Google claims the new program “is an extra layer of security that deters bad actors and makes it harder for them to spread harm,” but the registration requirements are barely tied to app effectiveness or security. Why, one wonders, does Google need to see your driver’s license to evaluate whether your app is safe? Why, one also wonders, does Google want to create a database of virtually every Android app developer in the world? 

F-Droid, a free and open-source repository for Android apps, has been sounding the alarm. As they’ve explained in an open letter, Google’s central registration system will be devastating for the Android developer community. Many mobile apps are created, improved, and distributed by volunteers, researchers, and/or small teams with limited financial resources. Others are created by developers who do not use the name attached to any government-issued identification. Others may have good reason to fear handing over their personal information to Google, or any other third party. Those communities are likely to drop out of developing for Android altogether, depriving all Android users of valuable tools. 

Google’s promise that it’s “working on” a program for “students and hobbyists” that may have different requirements falls far short of what is necessary to alleviate these concerns. 

The point here is not that all the apps are necessarily perfect or even safe. The point is that when you set up a gate, you invite authorities to use it to block things they don’t like. And when you build a database, you invite governments (and private parties) to try to get access to that database. If you build it, they will come.  

Imagine you have developed a virtual private network (VPN) and corresponding Android mobile app that helps dissidents, journalists, and ordinary humans avoid corporate and government surveillance. In some countries, distributing that app could invite legal threats and even prosecution. Developers in those areas should not have to trust that Google would not hand over their personal information in response to a government demand just because they want their app to be installable by all Android users. By the same token, technologists that work on Android apps for reporting ICE misdeeds should not have to worry that Google will hand over their personal information to, say, the U.S. Department of Homeland Security. 

Our tech infrastructure’s substantial dependence on just a few platforms is already creating new opportunities for those platforms to be weaponized to serve all kinds of disturbing purposes, from policing to censorship. In this context, it’s more important than ever to support technologies which decentralize and democratize our shared digital commons. A centralized global registration system for Android will inevitably chill this work. 

Not coincidentally, the registration system Google announced would also help cement Google’s outsized competitive power, giving the company an additional window—if it needed one, given the company’s already massive surveillance capabilities—into what apps are being developed, by whom, and how they are being distributed. It’s more than ironic that Google’s announcement came at the same time the company is fighting a court order (in the Epic Games v. Google lawsuit) that will require it to stop punishing developers who distribute their apps through app stores that compete with Google’s own. It’s easy to see how a new registration requirement for developers, potentially enforced by technical measures on billions of Android certified mobile devices, could give Google a new lever for maintaining its app store monopoly.  

EFF has signed on to F-Droid’s open letter. If you care about taking back control of tech, you should too. 

Corynne McSherry

EFF Stands With Tunisian Media Collective Nawaat

1 day 1 hour ago

When the independent Tunisian online media collective Nawaat announced that the government had suspended its activities for one month, the news landed like a punch in the gut for anyone who remembers what the Arab uprisings promised: dignity, democracy, and a free press.

But Tunisia’s October 31 suspension of Nawaat—delivered quietly, without formal notice, and justified under Decree-Law 2011-88—is not just a bureaucratic decision. It’s a warning shot aimed at the very idea of independent civic life.

The silencing of a revolutionary media outlet

Nawaat’s statement, published last week, recounts how the group discovered the suspension: not through any official communication, but by finding the order slipped under its office door. The move came despite Nawaat’s documented compliance with all the legal requirements under Decree 88, the 2011 law that once symbolized post-revolutionary openness for associations.

Instead, the Decree, once seen as a safeguard for civic freedom, is now being weaponized as a tool of control. Nawaat’s team describes the action as part of a broader campaign of harassment: tax audits, financial investigations, and administrative interrogations that together amount to an attempt to “stifle all media resistance to the dictatorship.”

For those who have followed Tunisia’s post-2019 trajectory, the move feels chillingly familiar. Since President Kais Saied consolidated power in 2021, civil society organizations, journalists, and independent voices have faced escalating repression. Amnesty International has documented arrests of reporters, the use of counter-terrorism laws against critics, and the closure of NGOs. And now, the government has found in Decree 88 a convenient veneer of legality to achieve what old regimes did by force.

Adopted in the hopeful aftermath of the revolution, Decree-Law 2011-88 was designed to protect the right to association. It allowed citizens to form organizations without prior approval and receive funding freely—a radical departure from the Ben Ali era’s suffocating controls.

But laws are only as democratic as the institutions that enforce them. Over the years, Tunisian authorities have chipped away at these protections. Administrative notifications, once procedural, have become tools for sanction. Financial transparency requirements have turned into pretexts for selective punishment.

When a government can suspend an association that has complied with every rule, the rule of law itself becomes a performance.

Bureaucratic authoritarianism

What’s happening in Tunisia is not an isolated episode. Across the region, governments have refined the art of silencing dissent without firing a shot. Whether through Egypt’s NGO Law, Morocco’s press code, or Algeria’s foreign-funding restrictions, the outcome is the same: fewer independent outlets and fewer critical voices.

These are the tools of bureaucratic authoritarianism: the punishment is quiet, plausible, and difficult to contest. A one-month suspension might sound minor, but for a small newsroom like Nawaat—which operates with limited funding and constant political pressure—it can mean disrupted investigations, delayed publications, and lost trust from readers and sources alike.

A decade of resistance

To understand why Nawaat matters, remember where it began. Founded in 2004 under Zine El Abidine Ben Ali’s dictatorship, Nawaat became a rare space for citizen journalism and digital dissent. During the 2011 uprising, its reporting and documentation helped the world witness Tunisia’s revolution.

Over the past two decades, Nawaat has earned international recognition, including an EFF Pioneer Award in 2011, for its commitment to free expression and technological empowerment. It’s not just a media outlet; it’s a living archive of Tunisia’s struggle for dignity and rights.

That legacy is precisely what makes it threatening to the current regime. Nawaat represents a continuity of civic resistance that authoritarianism cannot easily erase.

The cost of silence

Administrative suspensions like this one are designed to send a message: You can be shut down at any time. They impose psychological costs that are harder to quantify than arrests or raids. Journalists start to self-censor. Donors hesitate to renew grants. The public, fatigued by uncertainty, tunes out.

But the real tragedy lies in what this means for Tunisians’ right to know. Nawaat’s reporting on corruption, surveillance, and state violence fills the gaps left by state-aligned media. Silencing it deprives citizens of access to truth and accountability.

As Nawaat’s statement puts it:

“This arbitrary decision aims to silence free voices and stifle all media resistance to the dictatorship.”

The government’s ability to pause a media outlet, even temporarily, sets a precedent that could be replicated across Tunisia’s civic sphere. If Nawaat can be silenced today, so can any association tomorrow.

So what can be done? Nawaat has pledged to challenge the suspension in court, but litigation alone won’t fix a system where independence is eroding from within. What’s needed is sustained, visible, and international solidarity.

Tunisia’s government may succeed in pausing Nawaat’s operations for a month. But it cannot erase the two decades of documentation, dissent, and hope the outlet represents. Nor can it silence the networks of journalists, technologists, and readers who know what is at stake.

EFF has long argued that the right to free expression is inseparable from the right to digital freedom. Nawaat’s suspension shows how easily administrative and legal tools can become weapons against both. When states combine surveillance, regulatory control, and economic pressure, they don’t need to block websites or jail reporters outright—they simply tighten the screws until free expression becomes impossible.

That’s why what happens in Tunisia matters far beyond its borders. It’s a test of whether the ideals of 2011 still mean anything in 2025.

And Nawaat, for its part, has made its position clear:

“We will continue to defend our independence and our principles. We will not be silenced.”

Jillian C. York

What EFF Needs in a New Executive Director

1 day 2 hours ago

By Gigi Sohn, Chair, EFF Board of Directors 

With the impending departure of longtime, renowned, and beloved Executive Director Cindy Cohn, EFF and leadership advisory firm Russell Reynolds Associates have developed a profile for her successor.  While Cindy is irreplaceable, we hope that everyone who knows and loves EFF will help us find our next leader.  

First and foremost, we are looking for someone who’ll meet this pivotal moment in EFF’s history. As authoritarian surveillance creeps around the globe and society grapples with debates over AI and other tech, EFF needs a forward-looking, strategic, and collaborative executive director to bring fresh eyes and new ideas while building on our past successes.  

The San Francisco-based executive director, who reports to our board of directors, will have responsibility over all administrative, financial, development and programmatic activities at EFF.  They will lead a dedicated team of legal, technical, and advocacy professionals, steward EFF’s strong organizational culture, and ensure long-term organizational sustainability and impact. That means being: 

  • Our visionary — partnering with the board and staff to define and advance a courageous, forward-looking strategic vision for EFF; leading development, prioritization, and execution of a comprehensive strategic plan that balances proactive agenda-setting with responsive action; and ensuring clarity of mission and purpose, aligning organizational priorities and resources for maximum impact. 
  • Our face and voice — serving as a compelling, credible public voice and thought leader for EFF’s mission and work, amplifying the expertise of staff and engaging diverse audiences including media, policymakers, and the broader public, while also building and nurturing partnerships and coalitions across the technology, legal, advocacy, and philanthropic sectors. 
  • Our chief money manager — stewarding relationships with individual donors, foundations, and key supporters; developing and implementing strategies to diversify and grow EFF’s revenue streams, including membership, grassroots, institutional, and major gifts; and ensuring financial discipline, transparency, and sustainability in partnership with the board and executive team. 
  • Our fearless leader — fostering a positive, inclusive, high-performing, and accountable culture that honors EFF’s activist DNA while supporting professional growth, partnering with unionized staff, and maintaining a collaborative, constructive relationship with the staff union. 

It’ll take a special someone to lead us with courage, vision, personal integrity, and deep understanding of EFF’s unique role at the intersection of law and technology. For more details — including the compensation range and how to apply — click here for the full position specification. And if you know someone who you believe fits the bill, all nominations (strictly confidential, of course) are welcome at eff@russellreynolds.com.  

Guest Author

Once Again, Chat Control Flails After Strong Public Pressure

3 days 23 hours ago

The European Union Council pushed for a dangerous plan to scan encrypted messages, and once again, people around the world loudly called out the risks, leading the current Danish presidency to withdraw the plan.

EFF has strongly opposed Chat Control since it was first introduced in 2022. The zombie proposal comes back time and time again, and time and time again, it’s been shot down because there’s no public support. The fight is delayed, but not over.

It’s time for lawmakers to stop attempting to compromise encryption under the guise of public safety. Instead of making minor tweaks and resubmitting this proposal over and over, the EU Council should accept that any sort of client-side scanning of devices undermines encryption, and move on to developing real solutions that don’t violate the human rights of people around the world. 

As long as lawmakers continue to misunderstand the way encryption technology works, there is no way forward with message-scanning proposals, not in the EU or anywhere else. This sort of surveillance is not just an overreach; it’s an attack on fundamental human rights. 

The coming EU presidencies should abandon these attempts and work on finding a solution that protects people’s privacy and security.

Thorin Klosowski

Opt Out October: Daily Tips to Protect Your Privacy and Security

4 days 4 hours ago

Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time, the process of protecting your privacy becomes much easier. This month we’re going to do just that. For the month of October, we’ll update this post with new tips every weekday showing various ways you can opt yourself out of the ways tech giants surveil you.

Online privacy isn’t dead. But the tech giants make it a pain in the butt to achieve. With these incremental tweaks to the services we use, we can throw sand in the gears of the surveillance machine and opt out of the ways tech companies attempt to optimize us into advertisement and content viewing machines. We’re also pushing companies to make more privacy-protective defaults the norm, but until that happens, the onus is on all of us to dig into the settings.

All month long we’ll share tips, including some with help from our friends at Consumer Reports’ Security Planner tool.

Tip 1: Establish Good Digital Hygiene

Before we can get into the privacy weeds, we first need to establish strong basics, namely two security fundamentals: using strong passwords (a password manager helps simplify this) and turning on two-factor authentication for your online accounts. Together, they can significantly improve your online privacy by making it much harder for your data to fall into the hands of a stranger.

Using unique passwords for every web login means that if your account information ends up in a data breach, it won’t give bad actors an easy way to unlock your other accounts. Since it’s impossible for all of us to remember a unique password for every login we have, most people will want to use a password manager, which generates and stores those passwords for you.

Two-factor authentication is the second lock on those same accounts. In order to log in to, say, Facebook for the first time on a particular computer, you’ll need to provide a password and a “second factor,” usually an always-changing numeric code generated in an app or sent to you on another device. This makes it much harder for someone else to get into your account because it’s less likely they’ll have both your password and the temporary code.
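
If you’re curious what that always-changing code is under the hood, it’s usually a time-based one-time password (TOTP) derived from a secret your authenticator app and the service shared once at setup. Here is a minimal sketch using the pyotp library; the library choice is just an assumption for illustration, since any TOTP implementation works the same way.

```python
# Minimal sketch of the "second factor" described above: a time-based
# one-time password (TOTP). Requires the pyotp library (pip install pyotp).
import pyotp

# The service and your authenticator app share this secret once, at setup
# (usually via a QR code). The value here is freshly generated for the demo.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()          # the six-digit code, valid for roughly 30 seconds
print("Current code:", code)

# The service verifies the code you type against the same shared secret.
print("Accepted:", totp.verify(code))
```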

This can be a little overwhelming to get started if you’re new to online privacy! Aside from our guides on Surveillance Self-Defense, we recommend taking a look at Consumer Reports’ Security Planner for ways to help you get started setting up your first password manager and turning on two-factor authentication.

Tip 2: Learn What a Data Broker Knows About You

Hundreds of data brokers you’ve never heard of are harvesting and selling your personal information. This can include your address, online activity, financial transactions, relationships, and even your location history. Once sold, your data can be abused by scammers, advertisers, predatory companies, and even law enforcement agencies.

Data brokers build detailed profiles of our lives but try to keep their own practices hidden. Fortunately, several state privacy laws give you the right to see what information these companies have collected about you. You can exercise this right by submitting a data access request to a data broker. Even if you live in a state without privacy legislation, some data brokers will still respond to your request.

There are hundreds of known data brokers, but here are a few major ones to start with:

Data brokers have been caught ignoring privacy laws, so there’s a chance you won’t get a response. If you do, you’ll learn what information the data broker has collected about you and the categories of third parties they’ve sold it to. If the results motivate you to take more privacy action, encourage your friends and family to do the same. Don’t let data brokers keep their spying a secret.

You can also ask data brokers to delete your data, with or without an access request. We’ll get to that later this month and explain how to do this with people-search sites, a category of data brokers.

Tip 3: Disable Ad Tracking on iPhone and Android

Picture this: you’re doomscrolling and spot a t-shirt you love. Later, you mention it to a friend and suddenly see an ad for that exact shirt in another app. The natural question pops into your head: “Is my phone listening to me?” Breathe a sigh of relief because, no, your phone is not listening to you. But advertisers are using shady tactics to profile your interests. Here’s an easy way to fight back: disable the ad identifier on your phone to make it harder for advertisers and data brokers to track you.

Disable Ad Tracking on iOS and iPadOS:

  • Open Settings > Privacy & Security > Tracking, and turn off “Allow Apps to Request to Track.”
  • Open Settings > Privacy & Security > Apple Advertising, and disable “Personalized Ads” to also stop some of Apple’s internal tracking for apps like the App Store. 
  • If you use Safari, go to Settings > Apps > Safari > Advanced and disable “Privacy Preserving Ad Measurement.”

Disable Ad Tracking on Android:

  • Open Settings > Security & privacy > Privacy controls > Ads, and tap “Delete advertising ID.”
  • While you’re at it, run through Google’s “Privacy Checkup” to review what info other Google services—like YouTube or your location—may be sharing with advertisers and data brokers.

These quick settings changes can help keep bad actors from spying on you. For a deeper dive on securing your iPhone or Android device, be sure to check out our full Surveillance Self-Defense guides.

Tip 4: Declutter Your Apps

Decluttering is all the rage for optimizers and organizers alike, but did you know a cleansing sweep through your apps can also help your privacy? Apps collect a lot of data, often in the background when you are not using them. This can be a prime way companies harvest your information, then repackage and sell it to other companies you've never heard of. Having a lot of apps increases the number of peepholes companies have into your personal life. 

Do you need three airline apps when you're not even traveling? Or the app for that hotel chain you stayed in once? It's best to delete that app and cut off their access to your information. In an ideal world, app makers would not process any of your data unless strictly necessary to give you what you asked for. Until then, to do an app audit:

  • Look through the apps you have and identify ones you rarely open or barely use. 
  • Long-press on apps that you don't use anymore and delete or uninstall them when a menu pops up. 
  • Even on apps you keep, take a swing through the location, microphone, or camera permissions for each of them. For iOS devices you can follow these instructions to find that menu. For Android, check out this instructions page.

If you delete an app and later find you need it, you can always redownload it. Try giving some apps the boot today to gain some memory space and some peace of mind.

Tip 5: Disable Behavioral Ads on Amazon

Happy Amazon Prime Day! Let’s celebrate by taking back a piece of our privacy.

Amazon collects an astounding amount of information about your shopping habits. While the only way to truly free yourself from the company’s all-seeing eye is to never shop there, there is something you can do to disrupt some of that data use: tell Amazon to stop using your data to market more things to you (these settings are for US users and may not be available in all countries).

  • Log into your Amazon account, then click “Account & Lists” under your name. 
  • Scroll down to the “Communication and Content” section and click “Advertising preferences” (or just click this link to head directly there).
  • Click the option next to “Do not show me interest-based ads provided by Amazon.”
  • You may want to also delete the data Amazon already collected, so click the “Delete ad data” button.

This setting will turn off the personalized ads based on what Amazon infers about you, though you will likely still see recommendations based on your past purchases at Amazon.

Of course, Amazon sells a lot of other products. If you own an Alexa, now’s a good time to review the few remaining privacy options available to you after the company took away the ability to disable voice recordings. Kindle users might want to turn off some of the data usage tracking. And if you own a Ring camera, consider enabling end-to-end encryption to ensure you’re in control of the recording, not the company. 

Tip 6: Install Privacy Badger to Block Online Trackers

Every time you browse the web, you’re being tracked. Most websites contain invisible tracking code that lets companies collect and profit from your data. That data can end up in the hands of advertisers, data brokers, scammers, and even government agencies. Privacy Badger, EFF’s free browser extension, can help you fight back.

Privacy Badger automatically blocks hidden trackers to stop companies from spying on you online. It also tells websites not to share or sell your data by sending the “Global Privacy Control” signal, which is legally binding under some state privacy laws. Privacy Badger has evolved over the past decade to fight various methods of online tracking. Whether you want to protect your sensitive information from data brokers or just don’t want Big Tech monetizing your data, Privacy Badger has your back.
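
For the technically curious, the Global Privacy Control signal itself is simple: browsers that enable it add a Sec-GPC: 1 header to each request (and expose it to scripts as navigator.globalPrivacyControl). The sketch below shows roughly how a site could check for it; the function and the example headers dictionary are hypothetical, and this is an illustration of the signal, not Privacy Badger's code.

```python
# Rough sketch of honoring the Global Privacy Control signal server-side.
# Per the GPC spec, participating browsers send the "Sec-GPC: 1" request header.
def visitor_opted_out(request_headers: dict[str, str]) -> bool:
    """Return True if the browser sent the Global Privacy Control signal."""
    return request_headers.get("Sec-GPC", "").strip() == "1"

example_headers = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
if visitor_opted_out(example_headers):
    # A site honoring the signal would skip selling or sharing this visitor's data.
    print("GPC received: do not sell or share this visitor's data.")
```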

Visit privacybadger.org to install Privacy Badger.

It’s available on Chrome, Firefox, Edge, and Opera for desktop devices and Firefox and Edge for Android devices. Once installed, all of Privacy Badger’s features work automatically. There’s no setup required! If blocking harmful trackers ends up breaking something on a website, you can easily turn off Privacy Badger for that site while maintaining privacy protections everywhere else.

When you install Privacy Badger, you’re not just protecting yourself—you’re joining EFF and millions of other users in the fight against online surveillance.

Tip 7: Review Location Tracking Settings

Data brokers don’t just collect information on your purchases and browsing history. Mobile apps that have the location permission turned on will deliver your coordinates to third parties in exchange for insights or monetary kickbacks. Even when they don’t deliver that data directly to data brokers, if the app serves ad space, your location will be delivered in real-time bid requests not only to those wishing to place an ad, but to all participants in the ad auction—even if they lose the bid. Location data brokers take part in these auctions just to harvest location data en masse, without any intention of buying ad space.
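
To give a sense of what that looks like in practice, below is a simplified, hypothetical bid request loosely modeled on the OpenRTB format used in ad auctions; all of the values are invented. Every auction participant that receives a payload like this sees the device's advertising identifier and coordinates, whether or not it wins the bid.

```python
import json

# Simplified, hypothetical example of the kind of data an app's ad slot can
# broadcast to auction participants (loosely modeled on OpenRTB; values invented).
bid_request = {
    "id": "auction-12345",
    "app": {"bundle": "com.example.weather"},            # which app you were using
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",   # advertising identifier
        "geo": {"lat": 37.7793, "lon": -122.4193},       # your location
    },
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],
}

# Every bidder in the auction receives this payload, even bidders that never
# buy the ad slot, which is how location data brokers harvest coordinates at scale.
print(json.dumps(bid_request, indent=2))
```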

Luckily, you can change a few settings to protect yourself against this hoovering of your whereabouts. You can use iOS or Android tools to audit an app’s permissions, providing clarity on who is providing what info to whom. You can then go to the apps that don’t need your location data and disable their access to that data (you can always change your mind later if it turns out location access was useful). You can also disable real-time location tracking by putting your phone into airplane mode, while still being able to navigate using offline maps. And by disabling mobile advertising identifiers (see tip three), you break the chain that links your location from one moment to the next.

Finally, for particularly sensitive situations you may want to bring an entirely separate, single-purpose device that you’ve kept free of unneeded apps and whose settings you’ve locked down. Similar in concept to a burner phone, even if this single-purpose device does manage to gather data on you, it can only tell a partial story about you—all the other data linking you to your normal activities will be kept separate.

For details on how you can follow these tips and more on your own devices, check out our more extensive post on the topic.

Tip 8: Limit the Data Your Gaming Console Collects About You

Oh, the beauty of gaming consoles—just plug in and play! Well... after you speed-run through a bunch of terms and conditions, internet setup, and privacy settings. If you rushed through those startup screens, don’t worry! It’s not too late to limit the data your console is collecting about you. Because yes, modern consoles do collect a lot about your gaming habits.

Start with the basics: make sure you have two-factor authentication turned on for your accounts. PlayStation, Xbox, and Nintendo all have guides on their sites. Between payment details and other personal info tied to these accounts, 2FA is an easy first line of defense for your data.

Then, it’s time to check the privacy controls on your console:

  • PlayStation 5: Go to Settings > Users and Accounts > Privacy to adjust what you share with both strangers and friends. To limit the data your PS5 collects about you, go to Settings > Users and Accounts > Privacy, where you can adjust settings under Data You Provide and Personalization.
  • Xbox Series X|S: Press the Xbox button > Profile & System > Settings > Account > Privacy & online safety > Xbox Privacy to fine-tune your sharing. To manage data collection, head to Profile & System > Settings > Account > Privacy & online safety > Data collection.
  • Nintendo Switch: The Switch doesn’t share as much data by default, but you still have options. To control who sees your play activity, go to System Settings > Users > [your profile] > Play Activity Settings. To opt out of sharing eShop data, open the eShop, select your profile (top right), then go to Google Analytics Preferences > Do Not Share.

Plug and play, right? Almost. These quick checks can help keep your gaming sessions fun—and more private.

Tip 9: Hide Your Start and End Points on Strava

Sharing your personal fitness goals, whether it be extended distances, accurate calorie counts, or GPS paths, sounds like a fun, competitive feature offered by today's digital fitness trackers. If you enjoy tracking those activities, you've probably heard of Strava. While it's excellent for motivation and connecting with fellow athletes, Strava's default settings can reveal sensitive information about where you live, work, or exercise, creating serious security and privacy risks. Fortunately, Strava gives you control over how much of your activity map is visible to others, allowing you to stay active in your community while protecting your personal safety.

We've covered how Strava data exposed classified military bases in 2018 when service members used fitness trackers. If fitness data can compromise national security, what's it revealing about you?

Here's how to hide your start and end points:

  • On the website: Hover over your profile picture > Settings > Privacy Controls > Map Visibility.
  • On mobile: Open Settings > Privacy Controls > Map Visibility.
  • You can then choose from three options: hide portions near a specific address, hide the start/end of all activities, or hide entire maps.

You can also adjust individual activities:

  • Open the activity you want to edit.
  • Select the three-dot menu icon.
  • Choose "Edit Map Visibility."
  • Use sliders to customize what's hidden or enable "Hide the Entire Map."

Great job taking control of your location privacy! Remember that these settings only apply to Strava, so if you share activities to other platforms, you'll need to adjust those privacy settings separately. While you're at it, consider reviewing your overall activity visibility settings to ensure you're only sharing what you want with the people you choose.

Tip 10: Find and Delete An Account You No Longer Use

Millions of online accounts are compromised each year. The more accounts you have, the more at risk you are of having your personal data illegally accessed and published online. Even if you don’t suffer a data breach, there’s also the possibility that someone could find one of your abandoned social media accounts containing information you shared publicly on purpose in the past, but don’t necessarily want floating around anymore. And companies may still be profiting off details of your personal life, even though you’re not getting any benefit from their service.

So, now’s a good time to find an old account to delete. There may be one you can already think of, but if you’re stuck, you can look through your password manager, look through logins saved on your web browser, or search your email inbox for phrases like “new account,” “password,” “welcome to,” or “confirm your email.” Or, enter your email address on the website HaveIBeenPwned to get a list of sites where your personal information has been compromised to see if any of them are accounts you no longer use.

Once you’ve decided on an account, you’ll need to find the steps to delete it. Simply deleting an app off of your phone or computer does not delete your account. Often you can log in and look in the account settings, or find instructions in the help menu, the FAQ page, or the pop-up customer service chat. If that fails, use a search engine to see if anybody else has written up the steps to deleting your specific type of account.

For more information, check out the Delete Unused Accounts tip on Security Planner.

Tip 11: Search for Yourself

Today's tip may sound a little existential, but we're not suggesting a deep spiritual journey. Just a trip to your nearest search engine. Pop your name into search engines such as Google or DuckDuckGo, or even AI tools such as ChatGPT, to see what you find. This is one of the simplest things you can do to raise your own awareness of your digital reputation. It can be the first thing prospective employers (or future first dates) do when trying to figure out who you are. From a privacy perspective, doing it yourself can also shed light on how your information is presented to the general public. If there's a defunct social media account you'd rather keep hidden, but it's on the first page of your search results, that might be a good signal for you to finally delete that account. If you shared your cellphone number with an organization you volunteer for and it's on their home page, you can ask them to take it down.

Knowledge is power. It's important to know what search results are out there about you, so you understand what people see when they look for you. Once you have this overview, you can make better choices about your online privacy. 

Tip 12: Tell “People Search” Sites to Delete Your Information

When you search online for someone’s name, you’ll likely see results from people-search sites selling their home address, phone number, relatives’ names, and more. People-search sites are a type of data broker with an especially dangerous impact. They can expose people to scams, stalking, and identity theft. Submit opt out requests to these sites to reduce the amount of personal information that is easily available about you online.

Check out this list of opt-out links and instructions for more than 50 people-search sites, organized by priority, and start with the sites flagged as high priority. Before submitting a request, check that the site actually has your information.

Data brokers continuously collect new information, so your data could reappear after being deleted. You’ll have to re-submit opt-outs periodically to keep your information off of people-search sites. Subscription-based services can automate this process and save you time, but a Consumer Reports study found that manual opt-outs are more effective.

Tip 13: Remove Your Personal Addresses from Search Engines 

Your home address may often be found with just a few clicks online. Whether you're concerned about your digital footprint or looking to safeguard your physical privacy, understanding where your address appears and how to remove or obscure it is a crucial step. Here's what you need to know.

Your personal addresses can be available through public records like property purchases, medical licensing information, or data brokers. Opting out from data brokers will do a lot to remove what's available commercially, but sometimes you can't erase the information entirely from things like property sales records.

You can ask some search engines to remove your personal information from search indexes, which is the most efficient way to make information like your personal addresses, phone number, and email address a lot harder to find. Google has a form that makes this request quite easy, and we’d suggest starting there.

Tip 14: Check Out Signal

Here's the problem: many of your texts aren't actually private. Phone companies, government agencies, and app developers can all too often peek at your conversations.

So on Global Encryption Day, our tip is to check out Signal—a messaging app that actually keeps your conversations private.

Signal uses end-to-end encryption, meaning only you and your recipient can read your messages—not even Signal can see them. Security experts love Signal because it's run by a privacy-focused nonprofit, funded by donations instead of data collection, and its code is publicly auditable. 

Beyond privacy, Signal offers free messaging and calls over Wi-Fi, helping you avoid SMS charges and international calling fees. The only catch? Your contacts need Signal too, so start recruiting your friends and family!

How to get started: Download Signal from your app store, verify your phone number, set a secure PIN, and start messaging your contacts who join you. Consider also setting up a username so people can reach you without sharing your phone number. For more detailed instructions, check out our guide.

Global Encryption Day is the perfect time to protect your communications. Take your time to explore the app, and check out other privacy-protecting features like disappearing messages, safety number verification, and lock screen notification privacy.

Tip 15: Switch to a Privacy-Protective Browser

Your browser stores tons of personal information: browsing history, tracking cookies, and data that companies use to build detailed profiles for targeted advertising. The browser you choose makes a huge difference in how much of this tracking you can prevent.

Most people use Chrome or Safari, which are automatically installed on Google and Apple products, but these browsers have significant privacy drawbacks. For example: Chrome's Incognito mode only hides history on your device—it doesn't stop tracking. Safari has been caught storing deleted browser history and collecting data even in private browsing mode.

Firefox is one alternative that puts privacy first. Unlike Chrome, Firefox automatically blocks trackers and ads in Private Browsing mode and prevents websites from sharing your data between sites. It also warns you when websites try to extract your personal information. But Firefox isn't your only option—other privacy-focused browsers like DuckDuckGo, Brave, and Tor also offer strong protections with different features. The key is switching away from browsers that prioritize data collection over your privacy.

Switching is easy: download your chosen browser from the links above and install it. Most browsers let you import bookmarks and passwords during setup.

You now have a new browser! Take some time to explore your new browser's privacy settings to maximize your protection.

Tip 16: Give Yourself Another Online Identity

We all take on different identities at times. Just as it's important to set boundaries in your daily life, the same can be true for your digital identity. For many reasons, people may want to keep aspects of their lives separate—and giving people control over how their information is used is one of the fundamental reasons we fight for privacy. Consider chopping up pieces of your life over separate email accounts, phone numbers, or social media accounts. 

This can help you manage your life and keep a full picture of your private information out of the hands of nosy data-mining companies. Maybe you volunteer for an organization in your spare time that you'd rather keep private, want to keep emails from your kids' school separate from a mountain of spam, or simply would rather keep your professional and private social media accounts separate. 

Whatever the reason, consider whether there's a piece of your life that could benefit from its own identity. When you set up these boundaries, you can also protect your privacy.

Tip 17: Check Out Virtual Card Services

Ever encounter an online vendor selling something that’s just what you need—if you could only be sure they aren’t skimming your credit card number? Or maybe you trust the vendor, but aren’t sure the web site (seemingly written in some arcane e-commerce platform from 1998) won’t be hacked within the hour after your purchase? Buying those bits and bobs shouldn’t cost you your peace of mind on top of the dollar amount. For these types of purchases, we recommend checking out a virtual card service.

These services generate a seemingly random credit card number for your use, locked down in whatever way you specify. That may mean a card locked to a single vendor, so no one else can make charges on it, or a card that only validates charges for a specific category of purchase, such as clothing. Beyond vendor limits, you can set spending caps a card can’t exceed, or make it a one-time-use card that closes itself after the first charge. You can even pause a card if you are sure you won’t be using it for some time, and then unpause it later. The configuration is up to you.
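
To make the constraints concrete, here’s a purely hypothetical sketch in Python of the kinds of rules such a service might let you attach to a generated card number. It does not reflect any particular provider’s API, just the options described above.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class VirtualCardRules:
        """Hypothetical constraint set for one generated card number."""
        locked_to_merchant: Optional[str] = None  # e.g. "example-shop.com"; other merchants get declined
        category: Optional[str] = None            # e.g. "clothing"; charges outside it get declined
        spend_limit_cents: Optional[int] = None   # total the card may ever charge
        single_use: bool = False                  # card closes itself after its first successful charge
        paused: bool = False                      # temporarily decline everything until unpaused

    # Example: a one-time card for a single vendor, capped at $40.
    rules = VirtualCardRules(
        locked_to_merchant="example-shop.com",
        spend_limit_cents=4000,
        single_use=True,
    )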

There are a number of virtual card services available, like Privacy.com or IronVest, just to name a few. Just like any vendor, though, these services need some way to charge you. So for any virtual card service, pop them into your favored search engine to verify they’re legit, and aren’t going to burden you with additional fees. Some options may also only be available in specific countries or regions, due to financial regulation laws.

Tip 18: Minimize Risk While Using Digital Payment Apps

Digital payment apps like Cash App, Venmo, and Zelle generally offer fewer fraud protections than credit cards offered by traditional financial institutions. It’s safer to stick to credit cards when making online purchases. That said, there are ways to minimize your risk.

Turn on transaction alerts:

  • On Cash App, tap on your picture or initials on the right side of the app. Tap Notifications, and then Transactions. From there, you can toggle the settings to receive a push notification, a text, and/or an email with receipts or to track activity on the app.
  • On PayPal, tap on the top right icon to access your account. Tap Notification Preferences, click on “Open Settings” and toggle to “Allow Notifications” if you’d like to see those on your phone.
  • On Venmo, tap on your picture or initials to go to the Me tab. Then, tap the Settings gear in the top right of the app, and tap Notifications. From there, you can adjust your text and email notifications, and even turn on push notifications. 

Report suspected fraud quickly

If you receive a notification for a purchase you didn’t make, even if it’s a small amount, make sure to immediately report it. Scammers sometimes test the waters with small amounts to see whether or not their targets are paying attention. Additionally, you may be on the hook for part of the payment if you don’t act fast. PayPal and Venmo say they cover lost funds if they’re reported within 60 days, but Cash App has more complicated restrictions, which can leave you liable for up to $500 if you lose your device or password and don’t report it within 48 hours.

And don’t forget to turn on multifactor authentication for each app. For more information, check out these tips from Consumer Reports.

Tip 19: Turn Off Ad Personalization to Limit How the Tech Giants Monetize Your Data

Tech companies make billions by harvesting your personal data and using it to sell hyper-targeted ads. This business model drives them to track us far beyond their own platforms, gathering data about our online and offline activity. Surveillance-based advertising isn’t just creepy—it’s harmful. The systems that power hyper-targeted ads can also funnel your personal information to data brokers, advertisers, scammers, and law enforcement agencies. 

To limit how companies monetize your data through surveillance-based advertising, turn off ad personalization on your accounts. This setting looks different depending on the platform, but here are some key places to start:

  • Meta (Facebook & Instagram): Follow this guide to find the setting for disabling ad targeting based on data Meta collects about you from other websites and apps.
  • Google: Visit https://myadcenter.google.com/home and switch the “Personalized ads” option from “On” to “Off.”
  • X (formerly known as Twitter): Visit https://x.com/settings/privacy_and_safety and turn off all settings under “Data sharing and personalization”

Banning online behavioral ads would be a better solution, but turning off ad personalization is a quick and easy step to limit how tech companies profit from your data. And don’t forget to change this same setting on Amazon, too.

Tip 20: Tighten Account Privacy Settings

Just because you want to share information with select friends and family on social media doesn’t necessarily mean you want to broadcast everything to the entire world. Whether you want to make sure you’re not sharing your real-time location with someone you’d rather not bump into or only want your close friends to know about your favorite pop star, you can typically restrict how companies display your status updates and other information.

In addition to whether data is displayed publicly or just to a select group of contacts, you may have some control over how data is collected, used, and shared with advertisers, or how long it is stored for.

To get started, choose an account and review the privacy settings, making changes as needed. Here are links to a few of the major companies to get you started:

Unfortunately, you may need to tweak your privacy settings multiple times to get them the way you want, as companies often introduce new features that are less private by default. And while companies sometimes offer choices on how data is collected, you can’t control most of the data collection that takes place. For more information, see Security Planner.

Tip 21: Protect Your Data When Dating Online

Dating apps like Grindr and Tinder collect vast amounts of intimate details—everything from sexual preferences to location history and behavioral patterns—all from people who are just looking for love and connection. If this data falls into the wrong hands, the consequences can be unacceptable, especially for members of the LGBTQ+ community and other vulnerable users who particularly need privacy protections.

To ensure that finding love does not involve such a privacy-impinging tradeoff, follow these tips to protect yourself when dating online:

  1. Review your login information and make sure to use a strong, unique password for your accounts; and enable two-factor authentication when offered. 
  2. Disable behavioral ads so personal details about you cannot be used to create a comprehensive portrait of your life—including your sexual orientation.
  3. Review the app's access to your location and camera roll, and consider changing these settings in line with what information you would like to keep private. 
  4. Consider what photos you choose, upload, and share; and assume that everything can and will be made public.
  5. Disable the integration of third-party apps like Spotify if you want more privacy. 
  6. Be mindful of what you share with others when you first chat, such as not disclosing financial details, and trust your gut if something feels off. 

There isn't one singular way to use dating apps, but taking these small steps can have a big impact in staying safe when dating online.

Tip 22: Turn Off Automatic Content Recognition (ACR) On Your TV

You might think TVs are just meant to be watched, but it turns out TV manufacturers do their fair share of watching what you watch, too. This is done through technology called “automatic content recognition” (ACR), which snoops on and identifies what you’re watching by snapping screenshots and comparing them to a big database. How many screenshots? The Markup found some TVs captured up to 7,200 images per hour (roughly two every second). The main reason? Ad targeting, of course. 
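
Conceptually, the matching step works something like the toy sketch below: reduce each frame to a small fingerprint, then look for the closest fingerprint in a database of known content. This is an illustrative average-hash in Python, not any manufacturer’s actual pipeline, and the “database” here is just two made-up frames.

    import numpy as np

    def average_hash(frame: np.ndarray, size: int = 8) -> np.ndarray:
        """Toy perceptual hash: average the frame down to a size x size grid, threshold at the grid mean."""
        h, w = frame.shape
        cropped = frame[: h - h % size, : w - w % size]
        blocks = cropped.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
        return (blocks > blocks.mean()).flatten()

    def hamming(a: np.ndarray, b: np.ndarray) -> int:
        return int((a != b).sum())

    rng = np.random.default_rng(0)
    known = {"Some Movie": rng.random((240, 320)), "Some Game": rng.random((240, 320))}
    fingerprints = {title: average_hash(frame) for title, frame in known.items()}

    # A "screenshot" of what's on screen: a slightly noisy copy of one known frame.
    screenshot = known["Some Movie"] + rng.normal(0, 0.01, (240, 320))
    match = min(fingerprints, key=lambda title: hamming(average_hash(screenshot), fingerprints[title]))
    print(match)  # -> "Some Movie"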

Any TV that’s connected to the internet likely does this alongside now-standard snooping practices, like tracking what apps you open and where you’re located. ACR is particularly nefarious, though, as it can identify not just streaming services, but also offline content, like video games, over-the-air broadcasts, and physical media. What we watch can and should be private, but that’s especially true when we’re using media that’s otherwise not connected to the internet, like Blu-Rays or DVDs.

Opting out of ACR can be a bit of a chore, but it is possible on most smart TVs. Consumer Reports has guides for most of the major TV manufacturers. 

And that’s it for Opt Out October. Hopefully you’ve come across a tip or two that you didn’t know about, and found ways to protect your privacy and disrupt the astounding amount of data collection tech companies do.

Thorin Klosowski

The Department of Defense Wants Less Proof its Software Works

4 days 5 hours ago

When Congress eventually reopens, the 2026 National Defense Authorization Act (NDAA) will be moving toward a vote. This gives us a chance to see the priorities of the Secretary of Defense and his Congressional allies when it comes to the military—and one of those priorities is buying technology, especially AI, with less of an obligation to prove it’s effective and worth the money the government will be paying for it. 

As reported by Lawfare, “This year’s defense policy bill—the National Defense Authorization Act (NDAA)—would roll back data disclosures that help the department understand the real costs of what they are buying, and testing requirements that establish whether what contractors promise is technically feasible or even suited to its needs.” This change comes amid a push from the Secretary of Defense to “Maximize Lethality” by acquiring modern software “at a speed and scale for our Warfighter.” The Senate Armed Services Committee has also expressed interest in making “significant reforms to modernize the Pentagon's budgeting and acquisition operations...to improve efficiency, unleash innovation, and modernize the budget process.”

The 2026 NDAA itself says that the “Secretary of Defense shall prioritize alternative acquisition mechanisms to accelerate development and production” of technology, including an expedited “software acquisition pathway”—a special part of the U.S. code that, if this version of the NDAA passes, will transfer powers to the Secretary of Defense to streamline the buying process, field new technology or updates to existing technology, and get it operational “in a period of not more than one year from the time the process is initiated…” It also makes sure the new technology “shall not be subjected to” some of the traditional levers of oversight.

All of this signals one thing: speed over due diligence. In a commercial technology landscape where companies are repeatedly found to be overselling or even deceiving people about their product’s technical capabilities—or where police departments are constantly grappling with the reality that expensive technology may not be effective at providing the solutions they’re after—it’s important that the government agency with the most expansive budget has time to test the efficacy and cost-efficiency of new technology. It’s easy for the military or police departments to listen to a tech company’s marketing department and believe their well-rehearsed sales pitch, but Congress should make sure that public money is being used wisely and in a way that is consistent with both civil liberties and human rights. 

The military and those who support its preferred budget should think twice about cutting corners before buying and deploying new technology. The Department of Defense’s posturing does not elicit confidence that the technologically-focused military of tomorrow will be equipped in a way that is effective, efficient, or transparent. 

Matthew Guariglia

Age Verification, Estimation, Assurance, Oh My! A Guide to the Terminology

4 days 21 hours ago

If you've been following the wave of age-gating laws sweeping across the country and the globe, you've probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we're talking about your rights, your privacy, your data, and who gets to access information online.

So let's clear up the confusion. Here's your guide to the terminology that's shaping these laws, and why you should care about the distinctions.

Age Gating: “No Kids Allowed”

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. It simply refers to the fact that a restriction exists. Think of it as the concept of “you must be this old to enter” without getting into the details of how they’re checking. 

Age Assurance: The Umbrella Term

Think of age assurance as the catch-all category. It covers any method an online service uses to figure out how old you are with some level of confidence. That's intentionally vague, because age assurance includes everything from the most basic check-the-box systems to full-blown government ID scanning.

Age assurance is the big tent that contains all the other terms we're about to discuss below. When a company or lawmaker talks about "age assurance," they're not being specific about how they're determining your age—just that they're trying to. For decades, the internet operated on a “self-attestation” system where you checked a box saying you were 18, and that was it. These new age-verification laws are specifically designed to replace that system. When lawmakers say they want "robust age assurance," what they really mean is "we don't trust self-attestation anymore, so now you need to prove your age beyond just swearing to it."

Age Estimation: Letting the Algorithm Decide

Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you.

This might include:

  • Analyzing your face through a video selfie or photo
  • Examining your voice
  • Looking at your online behavior—what you watch, what you like, what you post
  • Checking your existing profile data

Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, and an algorithm analyzes your face and spits out an estimated age range. Sounds convenient, right?

Here's the problem: “estimation” is exactly that, a guess, and it is inherently imprecise. Age estimation is notoriously unreliable, especially for teenagers—the exact group these laws claim to protect. An algorithm might tell a website you're somewhere between 15 and 19 years old. That's not helpful when the cutoff is 18, and what's at stake is a young person's constitutional rights.
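
To see why that imprecision matters in practice, here’s a small illustrative sketch in Python (hypothetical numbers, not any vendor’s real model output) of what an age gate has to do with an estimated range that straddles the cutoff:

    CUTOFF = 18  # legal threshold the site must enforce

    def decide(estimated_low: int, estimated_high: int) -> str:
        """Decide what to do with an estimated age range from a face-scanning model."""
        if estimated_low >= CUTOFF:
            return "allow"   # the whole estimated range is above the cutoff
        if estimated_high < CUTOFF:
            return "block"   # the whole estimated range is below the cutoff
        # The range straddles the cutoff: the estimate alone can't answer the question,
        # so the user gets bounced to more invasive identity verification instead.
        return "escalate to ID verification"

    print(decide(15, 19))  # -> "escalate to ID verification"
    print(decide(21, 27))  # -> "allow"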

And it gets worse. These systems consistently fail for certain groups, including people of color, trans individuals, and people with disabilities.

When estimation fails (and it often does), users get kicked to the next level: actual verification. Which brings us to…

Age Verification: “Show Me Your Papers”

Age verification is the most invasive option. This is where you have to prove your exact age, down to your date of birth, rather than, for example, simply show that you have crossed some age threshold (like 18 or 21 or 65). EFF generally refers to most age gates and mandates on young people’s access to online information as “age verification,” as most of them typically require you to submit hard identifiers like:

  • Government-issued ID (driver's license, passport, state ID)
  • Credit card information
  • Utility bills or other documents
  • Biometric data

This is what a lot of new state laws are actually requiring, even when they use softer language like "age assurance." Age verification doesn't just confirm you're over 18, it reveals your full identity. Your name, address, date of birth, photo—everything.

Here's the critical thing to understand: age verification is really identity verification. You're not just proving you're old enough—you're proving exactly who you are. And that data has to be stored, transmitted, and protected by every website that collects it.

We already know how that story ends. Data breaches are inevitable. And when a database containing your government ID tied to your adult content browsing history gets hacked—and it will—the consequences can be devastating.

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they're actually proposing. A law that requires "age assurance" sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it's not moderate at all—it's mass surveillance. Similarly, when Instagram says it's using "age estimation" to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver's license instead, the privacy promise evaporates.

Language matters because it shapes how we think about these systems. "Assurance" sounds gentle. "Verification" sounds official. "Estimation" sounds technical and impersonal, and also admits its inherent imprecision. 

Here's the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don't know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don't know that verification systems have error rates. They don't even seem to understand that the terms they're using mean different things. The fact that their terminology is all over the place—using "age assurance," "age verification," and "age estimation" interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Whatever the label, all of these approaches involve collecting your data and erecting an age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it's your privacy, your data, and your ability to access the internet without constant identity checks. Don't let fuzzy language disguise what these systems really do.

Rindala Alajaji

❤️ Let's Sue the Government! | EFFector 37.15

6 days 3 hours ago

There are no tricks in EFF's EFFector newsletter, just treats to keep you up-to-date on the latest in the fight for digital privacy and free expression. 

In our latest issue, we're explaining a new lawsuit to stop the U.S. government's viewpoint-based surveillance of online speech; sharing even more tips to protect your privacy; and celebrating a victory for transparency around AI police reports.

Prefer to listen in? Check out our audio companion, where EFF Staff Attorney Lisa Femia explains why EFF is suing to stop the Trump administration's ideological social media surveillance program. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.15 - ❤️ LET'S SUE THE GOVERNMENT!

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Joint Statement on the UN Cybercrime Convention: EFF and Global Partners Urge Governments Not to Sign

1 week 1 day ago

Today, EFF joined a coalition of civil society organizations in urging UN Member States not to sign the UN Convention Against Cybercrime. For those that move forward despite these warnings, we urge them to take immediate and concrete steps to limit the human rights harms this Convention will unleash. These harms are likely to be severe and will be extremely difficult to prevent in practice.

The Convention obligates states to establish broad electronic surveillance powers to investigate and cooperate on a wide range of crimes—including those unrelated to information and communication systems—without adequate human rights safeguards. It requires governments to collect, obtain, preserve, and share electronic evidence with foreign authorities for any “serious crime”—defined as an offense punishable under domestic law by at least four years’ imprisonment (or a higher penalty).

In many countries, merely speaking freely; expressing a nonconforming sexual orientation or gender identity; or protesting peacefully can constitute a serious criminal offense per the definition of the convention. People have faced lengthy prison terms, or even more severe acts like torture, for criticizing their governments on social media, raising a rainbow flag, or criticizing a monarch. 

In today’s digital era, nearly every message or call generates granular metadata—revealing who communicates with whom, when, and from where—that routinely traverses national borders through global networks. The UN cybercrime convention, as currently written, risks enabling states to leverage its expansive cross-border data-access and cooperation mechanisms to obtain such information for political surveillance—abusing the Convention’s mechanisms to monitor critics, pressure their families, and target marginalized communities abroad.

As abusive governments increasingly rely on questionable tactics to extend their reach beyond their borders—targeting dissidents, activists, and journalists worldwide—the UN Cybercrime Convention risks becoming a vehicle for globalizing repression, enabling an unprecedented multilateral infrastructure for digital surveillance that allows states to access and exchange data across borders in ways that make political monitoring and targeting difficult to detect or challenge.

EFF has long sounded the alarm over the UN Cybercrime Treaty’s sweeping powers of cross-border cooperation and its alarming lack of human-rights safeguards. As the Convention opens for signature on October 25–26, 2025 in Hanoi, Vietnam—a country repeatedly condemned by international rights groups for jailing critics and suppressing online speech—the stakes for global digital freedom have never been higher.

The Convention’s many flaws cannot easily be mitigated because it fundamentally lacks a mechanism for suspending states that systematically fail to respect human rights or the rule of law. States must refuse to sign or ratify the Convention. 

Read our full letter here.

Paige Collings

Science Must Decentralize

1 week 3 days ago

Knowledge production doesn’t happen in a vacuum. Every great scientific breakthrough is built on prior work, and an ongoing exchange with peers in the field. That’s why we need to address the threat of major publishers and platforms having an improper influence on how scientific knowledge is accessed—or outright suppressed.

In the digital age, the collaborative and often community-governed effort of scholarly research has gone global and unlocked unprecedented potential to improve our understanding and quality of life. That is, if we let it. Publishers continue to monopolize access to life-saving research and increase the burden on researchers through article processing charges and a pyramid of volunteer labor. This exploitation makes a mockery of open inquiry, and the denial of access it produces is a serious human rights issue.

While alternatives like Diamond Open Access are promising, crashing through publishing gatekeepers isn’t enough. Large intermediary platforms are capturing other aspects of the research process—inserting themselves between researchers, and between researchers and the published works—through platformization.

Funneling scholars into a few major platforms isn’t just annoying; it’s corrosive to privacy and intellectual freedom. Enshittification has come for research infrastructure, turning everyday tools into avenues for surveillance. Most professors are now worried their research is being scrutinized by academic bossware, forcing them to chase arbitrary metrics which don’t always reflect research quality. And while scholars play this numbers game, a growing threat of surveillance in scholarly publishing gives these measures a menacing tilt, chilling the publication of, and access to, research in targeted areas. These risks spike in the midst of governmental campaigns to muzzle scientific knowledge, buttressed by a scourge of platform censorship on corporate social media.

The only antidote to this ‘platformization’ is Open Science and decentralization. Infrastructure we rely on must be built in the open and on interoperable standards, and hostile to corporate (or governmental) takeovers. Universities and the science community are well situated to lead this fight. As we’ve seen in EFF’s TOR University Challenge, promoting access to knowledge and public interest infrastructure is aligned with the core values of higher education. 

Using social media as an example, universities have a strong interest in promoting the work being done at their campuses far and wide. This is where traditional platforms fall short: algorithms typically prioritize paid content, downrank off-site links, and boost sensational claims to drive engagement. When users are free from enshittification and can themselves control the platform’s algorithms, as they can on platforms like Bluesky, scientists get more engagement and find interactions more useful.

Institutions play a pivotal role in encouraging the adoption of these alternatives, ranging from leveraging existing IT support to assist with account use and verification, all the way to shouldering some of the hosting with Mastodon instances and/or Bluesky PDS for official accounts. This support is good for the research, good for the university, and makes our systems of science more resilient to attacks on science and the instability of digital monocultures.

This subtle influence of intermediaries can also appear in other tools relied on by researchers, but there are a number of open alternatives and interoperable tools developed for everything from citation management and data hosting to online chat among collaborators. Individual scholars and research teams can implement these tools today, but real change depends on institutions investing in tech that puts community before shareholders.

When infrastructure is too centralized, gatekeepers gain new powers to capture, enshittify, and censor. The result is a system that is less useful, less stable, and more costly to access. Science thrives on sharing and access equity, and its future depends on a global and democratic revolt against predatory centralized platforms.

EFF is proud to celebrate Open Access Week.

Rory Mir

When AI and Secure Chat Meet, Users Deserve Strong Controls Over How They Interact

1 week 5 days ago

Both Google and Apple are cramming new AI features into their phones and other devices, and neither company has offered clear ways to control which apps those AI systems can access. Recent issues around WhatsApp on both Android and iPhone demonstrate how these interactions can go sideways, risking revealing chat conversations beyond what you intend. Users deserve better controls and clearer documentation around what these AI features can access.

After diving into how Google Gemini and Apple Intelligence (and in some cases Siri) currently work, we didn’t always find clear answers to questions about how data is stored, who has access, and what it can be used for.

At a high level, when you compose a message with these tools, the companies can usually see the contents of those messages and receive at least a temporary copy of the text on their servers.

When receiving messages, things get trickier. When you use an AI like Gemini or a feature like Apple Intelligence to summarize or read notifications, we believe companies should be doing that content processing on-device. But poor documentation and weak guardrails create issues that have led us deep into documentation rabbit holes and still leave the privacy practices less clear than we’d like.

We’ll dig into the specifics below as well as potential solutions we’d like to see Apple, Google, and other device-makers implement, but first things first, here’s what you can do right now to control access:

Control AI Access to Secure Chat on Android and iOS

Here are some steps you can take to control access if you want nothing to do with the device-level AI features' integration and don’t want to risk accidentally sharing the text of a message outside of the app you’re using.

How to Check and Limit What Gemini Can Access

If you’re using Gemini on your Android phone, it’s a good time to review your settings to ensure things are set up how you want. Here’s how to check each of the relevant settings:

  • Disable Gemini App Activity: Gemini App Activity is a history Google stores of all your interactions with Gemini. It’s enabled by default. To disable it, open Gemini (depending on your phone model, you may or may not even have the Google Gemini app installed. If you don’t have it installed, you don’t really need to worry about any of this). Tap your profile picture > Gemini Apps Activity, then change the toggle to either “Turn off,” or “Turn off and delete activity” if you want to delete previous conversations. If the option reads “Turn on,” then Gemini Apps Activity is already turned off. 
  • Control app and notification access: You can control which apps Gemini can access by tapping your profile picture > Apps, then scrolling down and disabling the toggle next to any apps you do not want Gemini to access. If you do not want Gemini to potentially access the content that appears in notifications, open the Settings app and revoke notification access from the Google app.
  • Delete the Gemini app: Depending on your phone model, you might be able to delete the Gemini app and revert to using Google Assistant instead. You can do so by long-pressing the Gemini app and selecting the option to delete. 

How to Check and Limit What Apple Intelligence and Siri Can Access

Similarly, there are a few things you can do to clamp down on what Apple Intelligence and Siri can do: 

  • Disable the “Use with Siri Requests” option: If you want to continue using Siri, but don’t want to accidentally use it to send messages through secure messaging apps, like WhatsApp, then you can disable that feature by opening Settings > Apps > [app name], and disabling “Use with Siri Requests,” which turns off the ability to compose messages with Siri and send them through that app.
  • Disable Apple Intelligence entirely: Apple Intelligence is an all-or-nothing setting on iPhones, so if you want to avoid any potential issues your only option is to turn it off completely. To do so, open Settings > Apple Intelligence & Siri, and disable “Apple Intelligence” (you will only see this option if your device supports Apple Intelligence, if it doesn’t, the menu will only be for “Siri”). You can also disable certain features, like “writing tools,” using Screen Time restrictions. Siri can’t be universally turned off in the same way, though you can turn off the options under “Talk to Siri” to make it so you can’t speak to it. 

For more information about cutting off AI access at different levels in other apps, this Consumer Reports article covers other platforms and services.

Why It Matters: Sending Messages Has Different Privacy Concerns Than Receiving Them

Let’s start with a look at how Google and Apple integrate their AI systems into message composition, using WhatsApp as an example.

Google Gemini and WhatsApp

On Android, you can optionally link WhatsApp and Gemini together so you can then initiate various actions for sending messages from the Gemini app, like “Call Mom on WhatsApp” or “Text Jason on WhatsApp that we need to cancel our secret meeting, but make it a haiku.” This feature raised red flags for users concerned about privacy.

By default, everything you do in Gemini is stored in the “Gemini Apps Activity,” where messages are stored forever, subject to human review, and are used to train Google’s products. So, unless you change it, when you use Gemini to compose and send a message in WhatsApp then the message you composed is visible to Google.

If you turn the activity off, interactions are still stored for 72 hours. Google’s documentation claims that even though messages are stored, those conversations aren't reviewed or used to improve Google machine learning technologies, though that appears to be an internal policy choice with no technical limits preventing Google from accessing those messages.

By default, everything you do in Gemini is stored in the “Gemini Apps Activity,” where messages are stored forever, subject to human review, and are used to train Google’s products.

The simplicity of invoking Gemini to compose and send a message may lead to a false sense of privacy. Notably, other secure messaging apps, like Signal, do not offer this Gemini integration.

For comparison’s sake, let’s see how this works with Apple devices.

Siri and WhatsApp

The closest comparison to this process on iOS is to use Siri, which, it is claimed, will eventually be a part of Apple Intelligence. Currently, Apple’s AI message composition tools are not available for third-party apps like Signal and WhatsApp.

According to its privacy policy, when you dictate a message through Siri to send to WhatsApp (or anywhere else), the message, including metadata like the recipient phone number and other identifiers, is sent to Apple’s servers. This was confirmed by researchers to include the text of messages sent to WhatsApp. When you use Siri to compose a WhatsApp message, the message gets routed to both Apple and WhatsApp. Apple claims it does not store this transcript unless you’ve opted into “Improve Siri and Dictation.” WhatsApp defers to Apple’s support for data handling concerns. This is similar to how Google handles speech-to-text prompts.

In response to that research, Apple said this was expected behavior with an app that uses SiriKit—the extension that allows third-party apps to integrate with Siri—like WhatsApp does.

Both Siri and Apple Intelligence can sometimes run locally on-device, and other times need to rely on Apple-managed cloud servers to complete requests. Apple Intelligence can use the company’s Private Cloud Compute, but Siri doesn’t have a similar feature.

The ambiguity around where data goes makes it difficult to decide whether you are comfortable with the sort of privacy trade-off that using features like Siri or Apple Intelligence might entail.

How Receiving Messages Works

Sending encrypted messages is just one half of the privacy puzzle. What happens on the receiving end matters too. 

Google Gemini

By default, the Gemini app doesn’t have access to the text inside secure messaging apps or to notifications. But you can grant access to notifications using the Utilities app. Utilities can read, summarize, and reply to notifications, including in WhatsApp and Signal (it can also read notifications in headphones).

This could open up any notifications routed through the Utilities app to the Gemini app, to be accessed internally or by third parties.

We could not find anything in Google’s Utilities documentation that clarifies what information is collected, stored, or sent to Google from these notifications. When we reached out to Google, the company responded that it “builds technical data protections that safeguard user data, uses data responsibly, and provides users with tools to control their Gemini experience.” That means Google has no technical limitation around accessing the text from notifications if you’ve enabled the feature in the Utilities app, which could open up any notifications routed through Utilities to the Gemini app to be accessed internally or by third parties. Google needs to make its data handling explicit in its public documentation.

If you use encrypted communications apps and have granted access to notifications, then it is worth considering disabling that feature or controlling what’s visible in your notifications on an app-level.

Apple Intelligence

Apple is more clear about how it handles this sort of notification access.

Siri can read and reply to messages with the “Announce Notifications” feature. With this enabled, Siri can read notifications out loud on select headphones or via CarPlay. In a press release, Apple states, “When a user talks or types to Siri, their request is processed on device whenever possible. For example, when a user asks Siri to read unread messages… the processing is done on the user’s device. The contents of the messages aren’t transmitted to Apple servers, because that isn’t necessary to fulfill the request.”

Apple Intelligence can summarize notifications from any app that you’ve enabled notifications on. Apple is clear that these summaries are generated on your device, “when Apple Intelligence provides you with preview summaries of your emails, messages, and notifications, these summaries are generated by on-device models.” This means there should be no risk that the text of notifications from apps like WhatsApp or Signal get sent to Apple’s servers just to summarize them.

New AI Features Must Come With Strong User Controls

The more device-makers cram AI features into their devices, the more necessary it is for us to have clear and simple controls over what personal data these features can access. If users do not have control over when a text leaves a device for any sort of AI processing—whether that’s to a “private” cloud or not—it erodes our privacy and potentially threatens the foundations of end-to-end encrypted communications.

Per-app AI Permissions

Google, Apple, and other device makers should add a device-level AI permission to their phones, just like they do for other potentially invasive privacy features, like location sharing. You should be able to tell the operating system’s AI not to access an app, even if that comes at the “cost” of missing out on some features. The setting should be straightforward and easy to understand in ways the Gemini and Apple Intelligence controls currently are not.

Offer On-Device-Only Modes

Device-makers should offer an “on-device only” mode for those interested in using some features without having to try to figure out what happens on device or on the cloud. Samsung offers this, and both Google and Apple would benefit from a similar option.

Improve Documentation

Both Google and Apple should improve their documentation about how these features interact with various apps. Apple doesn’t seem to clarify notification processing privacy anywhere outside of a press release, and we couldn’t find anything about Google’s Utilities privacy at all. We appreciate tools like Gemini Apps Activity as a way to audit what the company collects, but vague information like “Prompted a Communications query” is only useful if there’s an explanation somewhere about what that means.

The current user options are not enough. It’s clear that the AI features device-makers add come with significant confusion about their privacy implications, and it’s time to push back and demand better controls. The privacy problems introduced alongside new AI features should be taken seriously, and remedies should be offered to both users and developers who want real, transparent safeguards over how a company accesses their private data and communications.

Thorin Klosowski

Civil Disobedience of Copyright Keeps Science Going

1 week 5 days ago

Creating and sharing knowledge are defining traits of humankind, yet copyright law has grown so restrictive that it can require acts of civil disobedience to ensure that students and scholars have the books they need and to preserve swaths of culture from being lost forever.

Reputable research generally follows a familiar pattern: Scientific articles are written by scholars based on their research—often with public funding. Those articles are then peer-reviewed by other scholars in their fields and revisions are made according to those comments. Afterwards, most large publishers expect to be given the copyright on the article as a condition of packaging it up and selling it back to the institutions that employ the academics who did the research and to the public at large. Because research is valuable and because copyright is a monopoly on disseminating the articles in question, these publishers can charge exorbitant fees that place a strain even on wealthy universities and are simply out of reach for the general public or universities with limited budgets, such as those in the global south. The result is a global human rights problem.

This model is broken, yet science goes on thanks to widespread civil disobedience of the copyright regime that locks up the knowledge created by researchers. Some turn to social media to ask that a colleague with access share articles they need (despite copyright’s prohibitions on sharing). Certainly, at least some such sharing is protected fair use, but scholars should not have to seek a legal opinion or risk legal threats from publishers to share the collective knowledge they generate.

Even more useful, though on shakier legal ground, are so-called “shadow archives” and aggregators such as SciHub, Library Genesis (LibGen), Z-Library, or Anna’s Archive. These are the culmination of efforts from volunteers dedicated to defending science.

SciHub alone handles tens of millions of requests for scientific articles each year and remains operational despite adverse court rulings thanks both to being based in Russia, and to the community of academics who see it as an ethical response to the high access barriers that publishers impose and provide it their log-on credentials so it can retrieve requested articles. SciHub and LibGen are continuations of samizdat, the Soviet-era practice of disobeying state censorship in the interests of learning and free speech.

Unless publishing gatekeepers adopt drastically more equitable practices and become partners in disseminating knowledge, they will continue to lose ground to open access alternatives, legal or otherwise.

EFF is proud to celebrate Open Access Week.

Kit Walsh

EFF Backs Constitutional Challenge to Ecuador’s Intelligence Law That Undermines Human Rights

1 week 5 days ago

In early September, EFF submitted an amicus brief to Ecuador’s Constitutional Court supporting a constitutional challenge filed by Ecuadorian NGOs, including INREDH and LaLibre. The case challenges the constitutionality of the Ley Orgánica de Inteligencia (LOI) and its implementing regulation, the General Regulation of the LOI.

EFF’s amicus brief argues that the LOI enables disproportionate surveillance and secrecy that undermine constitutional and Inter-American human rights standards. EFF urges the Constitutional Court to declare the LOI and its regulation unconstitutional in their entirety.

More specifically, our submission notes that:

“The LOI presents a structural flaw that undermines compliance with the principles of legality, legitimate purpose, suitability, necessity, and proportionality; it inverts the rule and the exception, with serious harm to rights enshrined constitutionally and under the Convention; and it prioritizes indeterminate state interests, in contravention of the ultimate aim of intelligence activities and state action, namely the protection of individuals, their rights, and freedoms.”

Core Legal Problems Identified

Vague and Overbroad Definitions

The LOI contains key terms like “national security,” “integral security of the State,” “threats,” and “risks” that are left either undefined or so broadly framed that they could mean almost anything. This vagueness grants intelligence agencies wide, unchecked discretion, and falls short of the standard of legal certainty required under the American Convention on Human Rights (CADH).

Secrecy and Lack of Transparency

The LOI makes secrecy the rule rather than the exception, reversing the Inter-American principle of maximum disclosure, which holds that access to information should be the norm and secrecy a narrowly justified exception. The law establishes a classification system—“restricted,” “secret,” and “top secret”—for intelligence and counterintelligence information, but without clear, verifiable parameters to guide its application on a case-by-case basis. As a result, all information produced by the governing body (ente rector) of the National Intelligence System is classified as secret by default. Moreover, intelligence budgets and spending are insulated from meaningful public oversight, concentrated under a single authority, and ultimately destroyed, leaving no mechanism for accountability.

Weak or Nonexistent Oversight Mechanisms

The LOI leaves intelligence agencies to regulate themselves, with almost no external scrutiny. Civilian oversight is minimal, limited to occasional, closed-door briefings before a parliamentary commission that lacks real access to information or decision making power. This structure offers no guarantee of independent or judicial supervision and instead fosters an environment where intelligence operations can proceed without transparency or accountability.

Intrusive Powers Without Judicial Authorization

The LOI allows access to communications, databases, and personal data without prior judicial order, which enables the mass surveillance of electronic communications, metadata, and databases across public and private entities—including telecommunication operators. This directly contradicts rulings of the Inter-American Court of Human Rights, which establish that any restriction of the right to privacy must be necessary, proportionate, and subject to independent oversight. It also runs counter to CAJAR vs. Colombia, which affirms that intrusive surveillance requires prior judicial authorization.

International Human Rights Standards Applied

Our amicus curiae draws on the CAJAR vs. Colombia judgment, which set strict standards for intelligence activities. Crucially, Ecuador’s LOI falls short of all these tests: it doesn’t constitute an adequate legal basis for limiting rights; contravenes necessary and proportionate principles; fails to ensure robust controls and safeguards, like prior judicial authorization and solid civilian oversight; and completely disregards related data protection guarantees and data subjects’ rights.

At its core, the LOI structurally prioritizes vague notions of “state interest” over the protection of human rights and fundamental freedoms. It legalizes secrecy, unchecked surveillance, and the impunity of intelligence agencies. For these reasons, we urge Ecuador’s Constitutional Court to declare the LOI and its regulations unconstitutional, as they violate both the Ecuadorian Constitution and the American Convention on Human Rights (CADH).

Read our full amicus brief here to learn more about how Ecuador’s intelligence framework undermines privacy, transparency, and the human rights protected under Inter-American human rights law.

Paige Collings

It’s Time to Take Back CTRL

2 weeks ago

Technology is supercharging the attack on democracy by making it easier to spy on people, block free speech, and control what we do. The Electronic Frontier Foundation’s activists, lawyers, and technologists are fighting back. Join the movement to Take Back CTRL.

DONATE TODAY

Join EFF and Fight Back

Take Back CTRL is EFF's new website to give you insight into the ways that technology has become the veins and arteries of rising global authoritarianism. It’s not just because of technology’s massive power to surveil, categorize, censor, and make decisions for governments—but also because the money made by selling your data props up companies and CEOs with clear authoritarian agendas. As the preeminent digital rights organization, EFF has a clear role to play.

If You Use Technology, This Fight Is Yours.

EFF was created for scary moments like the one we’re facing now. For 35 years, EFF has fought to ensure your rights follow you online and wherever you use technology. We’ve sued, we’ve analyzed, we’ve hacked, we’ve argued, and we’ve helped people be heard in halls of power.

But we're still missing something. You.

Because it's your rights we're fighting for:

  • Your right to speak and learn freely online, free of government censorship
  • Your right to move through the world without being surveilled everywhere you go
  • Your right to use your device without it tracking your every click, purchase, and IRL movement
  • Your right to control your data, including data about your body, and to know that data given to one government agency won’t be weaponized against you by another
  • Your right to do what you please with the products and content you pay for

Consider Take Back CTRL our "help wanted" notice, because we need your help to win this fight today.

Join EFF

The future is being decided today. Join the movement to Take Back CTRL.

The Take Back CTRL campaign highlights the work that EFF is doing to fight for our democracy, defend vulnerable members of our community, and stand up against the use of tech in this authoritarian takeover. It also features actions everyone can take to support EFF’s work, use our tools in their everyday lives, and fight back.

Help us spread the word:

Stop tech from dismantling democracy. Join the movement to Take Back CTRL of our rights. https://eff.org/tbc

Allison Morris

No Tricks, Just Treats 🎃 EFF’s Halloween Signal Stickers Are Here!

2 weeks ago

EFF usually warns of new horrors threatening your rights online, but this Halloween we’ve summoned a few of our own we’d like to share.  Our new Signal Sticker Pack highlights some creatures—both mythical and terrifying—conjured up by our designers for you to share this spooky season.

If you’re new to Signal, it's a free and secure messaging app built by the nonprofit Signal Foundation at the forefront of defending user privacy. While chatting privately, you can add some seasonal flair with Signal Stickers, and rest assured: friends receiving them get the full sticker pack fully encrypted, safe from prying eyes and lurking spirits.

How To Get and Share Signal Stickers

On any mobile device or desktop with the Signal app installed, you can simply click the button below.

Download EFF's Signal Stickers

To share Frights and Rights  

You can also paste the sticker link directly into a Signal chat, then tap it to download the pack to the app.

Once they’re installed, the stickers are even easier to share: simply open a chat, tap the sticker menu on your keyboard, and send one of EFF’s spooky stickers. The recipient will then be asked if they’d like to add the sticker pack too.

All of this works without any third parties knowing what sticker packs you have or whom you shared them with. Our little ghosts and ghouls are just between us.

Meet The Encryptids

These familiar champions of digital rights—The Encryptids—are back! Don’t let their monstrous looks fool you; each one advocates for privacy, security, and a dash of weirdness in their own way. Whether they’re shouting about online anonymity or the importance of interoperability, they’re ready to help you share your love for digital rights. Learn more about their stories here, and you can even grab a Bigfoot pin to let everyone know that privacy is a “human” right.

Street-Level Surveillance Monsters

On a cool autumn night, you might be on the lookout for ghosts and ghouls from your favorite horror flicks—but in the real world, there are far scarier monsters lurking in the dark: police surveillance technologies. Often hidden in plain sight, these tools quietly watch from the shadows and are hard to spot. That’s why we’ve given these tools the hideous faces they deserve in our Street-Level Surveillance Monsters series, ready to scare (and inform) your loved ones.

Copyright Creatures

Ask any online creator and they’ll tell you: few things are scarier than a copyright takedown. From unfair DMCA claims and demonetization to frivolous lawsuits designed to intimidate people into a hefty payment, the creeping expansion of copyright can inspire as much dread as any monster on the big screen. That’s why this pack includes a few trolls and creeps straight from a broken copyright system—where profit haunts innovation. 

To that end, all of EFF’s work (including these stickers) is released under an open CC BY license, free for you to use and remix as you see fit.

Happy Haunting Everybody!

These frights may disappear with your message, but the fights persist. That’s why we’re so grateful to EFF supporters for helping us make the digital world a little more weird and a little less scary. You can become a member today and grab some gear to show your support. Happy Halloween!

DONATE TODAY

Rory Mir

No One Should Be Forced to Conform to the Views of the State

2 weeks 5 days ago

Should you have to think twice before posting a protest flyer to your Instagram story? Or feel pressure to delete that bald JD Vance meme that you shared? Now imagine that you could get kicked out of the country—potentially losing your job or education—based on the Trump administration’s dislike of your views on social media. 

That threat to free expression and dissent is happening now, but we won’t let it stand. 

"...they're not just targeting individuals—they're targeting the very idea of freedom itself."

The Electronic Frontier Foundation and co-counsel are representing the United Automobile Workers (UAW), Communications Workers of America (CWA), and American Federation of Teachers (AFT) in a lawsuit against the U.S. State Department and Department of Homeland Security for their viewpoint-based surveillance and suppression of noncitizens’ First Amendment-protected speech online.  The lawsuit asks a federal court to stop the government’s unconstitutional surveillance program, which has silenced citizens and noncitizens alike. It has even hindered unions’ ability to associate with their members. 

"When they spy on, silence, and fire union members for speaking out, they're not just targeting individuals—they're targeting the very idea of freedom itself,” said UAW President Shawn Fain. 

The Trump administration has built this mass surveillance program to monitor the constitutionally protected online speech of noncitizens who are lawfully present in the U.S. The program uses AI and automated technologies to scour social media and other online platforms to identify and punish individuals who express viewpoints the government considers "hostile" to "our culture" and "our civilization".  But make no mistake: no one should be forced to conform to the views of the state. 

The Foundation of Democracy 

Your free expression and privacy are fundamental human rights, and democracy crumbles without them. We have an opportunity to fight back, but we need you. EFF’s team of lawyers, activists, researchers, and technologists has been on a mission to protect your freedom online since 1990, and we’re just getting started.

Donate and become a member of EFF today. Your support helps protect crucial rights, online and off, for everyone.

Give Today

Related Cases: United Auto Workers v. U.S. Department of State

Lisa Femia

Labor Unions, EFF Sue Trump Administration to Stop Ideological Surveillance of Free Speech Online

2 weeks 5 days ago
Viewpoint-based Online Surveillance of Permanent Residents and Visa Holders Violates First Amendment, Lawsuit Argues

NEW YORK—The United Automobile Workers (UAW), Communications Workers of America (CWA), and American Federation of Teachers (AFT) filed a lawsuit today against the Departments of State and Homeland Security for their viewpoint-based surveillance and suppression of protected expression online. The complaint asks a federal court to stop this unconstitutional surveillance program, which has silenced and frightened both citizens and noncitizens, and hampered the ability of the unions to associate with their members and potential members. The case is titled UAW v. State Department.

Since taking power, the Trump administration has created a mass surveillance program to monitor constitutionally protected speech by noncitizens lawfully present in the U.S. Using AI and other automated technologies, the program surveils the social media accounts of visa holders with the goal of identifying and punishing those who express viewpoints the government doesn't like. This has been paired with a public intimidation campaign, silencing not just noncitizens with immigration status, but also the families, coworkers, and friends with whom their lives are integrated.

As detailed in the complaint, when asked in a survey if they had changed their social media activity as a result of the Trump administration's ideological online surveillance program, over 60 percent of responding UAW members and over 30 percent of responding CWA members who were aware of the program said they had. Among noncitizens, these numbers were even higher. Of respondents aware of the program, over 80 percent of UAW members who were not U.S. citizens and over 40 percent of CWA members who were not U.S. citizens said they had changed their activity online.

Individual union members reported refraining from posting, refraining from sharing union content, deleting posts, and deleting entire accounts in response to the ideological online surveillance program. Criticism of the Trump administration or its policies was the most common type of content respondents reported changing their social media activity around. Many members also reported altering their offline union activity in response to the program, including avoiding being publicly identified as part of the unions and reducing their participation in rallies and protests. One member even said they declined to report a wage theft claim due to fears arising from the surveillance program.

Represented by the Electronic Frontier Foundation (EFF), Muslim Advocates (MA), and the Media Freedom & Information Access Clinic (MFIA), the UAW, CWA, and AFT seek to halt the program that affects thousands of their members individually and has harmed the ability of the unions to organize, represent, and recruit members. The lawsuit argues that the viewpoint-based online surveillance program violates the First Amendment and the Administrative Procedure Act.

"The Trump administration's use of surveillance to track and intimidate UAW members is a direct assault on the First Amendment—and an attack on every working person in this country," said UAW President Shawn Fain. "When they spy on, silence, and fire union members for speaking out, they're not just targeting individuals—they're targeting the very idea of freedom itself. The right to protest, to organize, to speak without fear—that's the foundation of American democracy. If they can come for UAW members at our worksites, they can come for any one of us tomorrow. And we will not stand by and let that happen."

"Every worker should be alarmed by the Trump administration’s online surveillance program," said CWA President Claude Cummings Jr. "The labor movement is built on our freedoms under the First Amendment to speak and assemble without fear retaliation by the government. The unconstitutional Challenged Surveillance Program threatens those freedoms and explicitly targets those who are critical of the administration and its policies. This policy interferes with CWA members’ ability to express their points of view online and organize to improve their working conditions."

"Free speech is the foundation of democracy in America," said AFT President Randi Weingarten. "The Trump administration has rejected that core constitutional right and now says only speech it agrees with is permitted—and that it will silence those who disagree. This suit exposes the online surveillance tools and other cyber tactics never envisioned by the founders to enforce compliance with the administration’s views. It details the direct harms on both the target of these attacks and the chilling effect on all those we represent and teach."

"Using a variety of AI and automated tools, the government can now conduct viewpoint-based surveillance and analysis on a scale that was never possible with human review alone," said EFF Staff Attorney Lisa Femia. "The scale of this spying is matched by an equally massive chilling effect on free speech."

"The administration is hunting online for an ever-growing list of disfavored viewpoints," said Golnaz Fakhimi, Legal Director of Muslim Advocates. "Its goal is clear: consolidate authoritarian power by crushing dissent, starting with noncitizens, but certainly not ending there. This urgent lawsuit aims to put a stop to this power grab and defend First Amendment freedoms crucial to a pluralistic and democratic society."

"This case goes to the heart of the First Amendment," said Anthony Cosentino, a student in the Media Freedom & Information Access Clinic. "The government can’t go after people for saying things it doesn’t like. The current administration has ignored that principle, developing a vast surveillance apparatus to find and punish people for their constitutionally protected speech. It is an extraordinary abuse of power, creating a climate of fear not seen in this country since the McCarthy era, especially on college campuses. Our laws and Constitution will not allow it."

For the complaint: https://www.eff.org/document/uaw-v-dos-complaint

For more about the litigation: https://eff.org/cases/united-auto-workers-v-us-department-state

Contacts:
Electronic Frontier Foundation: press@eff.org
Muslim Advocates: golnaz@muslimadvocates.org

Hudson Hongo

🎃 A Full Month of Privacy Tips from EFF | EFFector 37.14

2 weeks 6 days ago

Instead of catching you off-guard with a jump scare this Halloween season, EFF is here to catch you up on the latest digital rights news with our EFFector newsletter!

In this issue, we’re helping you take control of your online privacy with Opt Out October; explaining the UK’s attack on encryption and why it’s bad for all users; and covering shocking new details about an abortion surveillance case in Texas.

Prefer to listen in? Check out our audio companion, where EFF Security and Privacy Activist Thorin Klosowski explains how small steps to protect your privacy can add up to big changes.  Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.14 - 🎃 A FULL MONTH OF PRIVACY TIPS FROM EFF

Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero