Copyright Bullying vs. Religious Freedom

The government should not help a religious institution punish or deter members from inquiring about their faith. Yet, once again, the Watch Tower Bible and Tract Society is trying to use flimsy copyright claims to exploit the special legal tools available to copyright owners in order to unmask anonymous online speakers. And, once again, EFF has stepped in, with the help of local counsel Jonathan Phillips of Phillips & Bathke, P.C., to urge the courts not to give Watch Tower’s attempts the force of law.

EFF’s client, J. Doe, is a member of the Jehovah’s Witnesses who became interested in the history of the organization’s public statements, and how they’ve changed over time. They created research tools to analyze those documents and ultimately created a website, JWS Library, allowing others to use those tools and verify their findings through an archive that included documents suppressed by the church. Doe and others discovered prophecies that failed to come true, erasure of a leader’s disgrace, increased calls for obedience and donations, and other insights about the Jehovah’s Witnesses’ practices. Doe also used machine translation on a foreign-language document to help the community understand what the church was saying to different audiences and to better understand potential changes in the organization’s attitudes toward dissent.

Within the church, dissent or even asking questions has often been punished by labeling members as apostates and ostracizing, or “disfellowshipping,” them. As a result, Doe and others choose to speak anonymously to avoid retaliation that could cost them relationships with family, friends, and colleagues.

There is no law against questioning the Jehovah’s Witnesses. Instead, Watch Tower argues that Doe’s activities constitute copyright infringement and seeks to use the special process provided in the Digital Millennium Copyright Act (DMCA) to unmask them. It sent DMCA subpoenas to Google and Cloudflare, seeking information that would help it uncover Doe’s identity.

The problem for Watch Tower is that Doe’s research and commentary are clear fair uses allowed under copyright law. The First Amendment does not permit the unmasking of anonymous speakers based on such weak claims. Indeed, the First Amendment protects anonymous speakers precisely because some would be deterred from speaking if they faced retribution for doing so.

EFF stands with those who question the claims of those in power and who share the tools and knowledge needed to do so. We urge the judges in the Southern District of New York to quash these improper subpoenas and not allow copyright to be used to suppress important, legitimate speech.

Kit Walsh

Think Twice Before Buying or Using Meta’s Ray-Bans

Over the last decade or so, the tech industry has tried, and mostly failed, to make “smart glasses”—tech-infused glasses with cameras, AI, maps, displays, and more—a thing. But over the past year, products like Meta’s Ray-Ban Display Glasses and Oakley’s Meta Glasses have gone from a curious niche to the mainstream.

Before you strap a dashcam to your face and sprint out into the world filming everything and everyone in your life, there are some civil liberties and privacy concerns to consider.

Meta is the biggest company making these sorts of glasses, and its partnerships with Ray-Ban and Oakley are the most popular options, so we’ll mostly focus on them here. Others, like models from Snapchat, are similar in form but far less ubiquitous. But Meta won’t hold this space for long: Google has already announced a partnership with Warby Parker for “AI-powered smart glasses,” and there are rumors of a competing product from Apple.

With that, let’s dive into some of the considerations you should make before purchasing a pair.

If You’re Thinking About Buying Smart Glasses

You’re likely not the only one who can see (and hear) your footage

The photos and videos you record with most smartglasses will likely be stored online at some point in the process. On Meta’s offerings, unless you are livestreaming, media you capture when you press the camera button is kept on the glasses until you import it onto your phone, but by default it is imported automatically into the Meta AI mobile app, which is required to set up the glasses.

None of the AI features run locally on the glasses, so anytime you use them, like when you say, “Hey Meta, start recording,” the footage is fed to Meta. You can use the glasses without the Meta AI app entirely, but considering you can’t easily download footage from the glasses to your phone without it, most people will likely use the app.

Some videos are fed to Meta for AI training, and we know that at least in some cases those videos go through human review. An investigation by Swedish newspapers found that workers were reviewing and annotating camera footage, including all sorts of sensitive videos: nudity, sex, and people going to the bathroom. Meta told the BBC that this is in accordance with its terms of use, which state:

In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).

This all means that Meta and their third-party contractors will have access to at least some of what you record, and it’s very hard as a user to know where footage goes, who will have access to it, and what they will do with it. When you save footage to your phone’s camera roll, which is where the Meta AI app stores content, that might also be sent to Apple or Google’s servers, depending on your settings. Employees at these companies can then possibly access that media, and it could be shared with law enforcement.

The recorded audio from conversations with Meta AI is also saved by default, and if you don’t like that, tough luck, unless you go in and manually delete the recordings every time you say something.

Filming all the time is even more privacy invasive than you think

A common argument in favor of using the cameras in smartglasses is that phones and cameras can do this too, and it’s never been a problem. 

But smartglasses are designed to resemble regular glasses, to the point where most reviews note that friends didn’t notice the cameras embedded in them. They’re designed to be invisible to those being recorded, apart from a small indicator light that glows during video recording (and that cheap hacks can disable), whereas it is usually obvious that a person is recording when they pull their phone out of their pocket and point it at someone else.

Moreover, constant recording of everything in public spaces can create all sorts of potential privacy problems, some more obvious than others. This is another way that cameras on glasses differ from cameras on phones: it is far easier to constantly record one’s whereabouts with the former than the latter. If you continuously record, maybe you just happen to catch someone entering their passcode or password on their phone or computer at a coffee shop, or broadcast someone’s bank details while you’re standing in line at an ATM. That doesn’t even begin to get into smartglasses being intentionally used for less socially responsible ends. And some people may forget to turn off their smartglasses when they enter a private space like a bathroom.

And if you find yourself caught on someone’s camera, there’s little recourse. If you notice a stranger recording you, it’s up to you to intervene and ask not to be included in that footage, which can easily turn awkward or confrontational.

Our expectations of privacy shift when we’re in public, but bystanders in many cases will still have privacy interests. Public spaces are a place where you will be seen, but that shouldn’t mean it’s suddenly okay to catalog and identify everyone.

Consider the company’s track record and public statements

Meta, Google, Apple—perhaps one benefit of all the major tech companies entering this market is that we already have a good idea of how much they tend to respect the privacy of their users or the openness of their platforms. Spoiler, it’s often not much.

Meta has a long history of privacy invasive technologies and practices. We’ve heard rumblings that Meta hopes to add face recognition to its smartglasses, preferably, “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” Yikes. This is a monumentally bad idea that should be abandoned by Meta and any of its competitors considering a similar feature. But regardless of whether they launch this feature, it’s a pretty clear indication of where Meta wants these sorts of devices to go. 

If You Have Smartglasses Already

Opt out of sharing with Meta where you can

You can disable a couple of the features where unnecessary data is sent to Meta. In the Meta AI app, under the device settings, there’s a privacy page where you can disable sharing additional data, and more importantly, turn off “Cloud media,” where your photos and videos are sent to Meta’s cloud for processing and temporary storage. 

Decide your use-case and stick to it

These glasses can be useful for filming a variety of activities. We’ve seen fascinating scenes of tattoo artists doing their work (with the client’s permission), and it doesn’t take a stretch of the imagination to see how people might use them to film extreme sports. Even on an everyday level, you might find them useful for capturing holidays, birthdays, and all sorts of other private occasions.

But if you buy these glasses for a specific, mostly private purpose, it is probably best to stick to that, instead of wearing them everywhere and recording everything you do.

Follow the rules of businesses and social expectations

You often have a right to record in public spaces, but that doesn’t mean other people will like it. Businesses, including restaurants and stores, may want nothing to do with continuous filming and may either post a sign asking you not to use smartglasses, or ask you to stop. This may reflect the preferences not just of the business owner, but the people around you. And don’t use glasses to record when you enter other people’s private spaces like bathrooms or changing rooms.

It’s also a good idea to check in with friends and family before tapping that record button at a social gathering. Some people may not be as comfortable with these glasses as they are with other recording equipment.

Consider blurring strangers if you’re going to upload video

Blurring video footage isn’t an easy task, but if you’re considering uploading footage from something like a protest, it may be worth the effort to do so (apps like Meta’s Edits simplify this process, as do some other video sites, like YouTube). Some people don’t want the government to see their faces at protests, and might be afraid to attend if other people are uploading their faces.
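To see why this is tractable at all: at its core, blurring a region is just averaging the pixels inside a face’s bounding box, frame by frame. Below is a minimal pure-Python sketch, not anyone’s actual tool: it assumes you already have a bounding box (from a face detector or manual selection), represents a grayscale frame as a nested list, and uses a hypothetical helper name `box_blur_region`. Real apps like the ones mentioned above do this at scale across video frames.

```python
def box_blur_region(img, x0, y0, x1, y1, radius=2):
    """Return a copy of img (a list of rows of grayscale ints, 0-255)
    with the rectangle x0 <= x < x1, y0 <= y < y1 box-blurred."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            total, count = 0, 0
            # Average every in-bounds pixel in the (2*radius+1)^2 window.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out

# Toy example: blur a 4x4 "face" box inside an 8x8 checkerboard frame.
img = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
blurred = box_blur_region(img, 2, 2, 6, 6)
```

Pixels outside the box are untouched; pixels inside collapse toward a uniform gray, which is what destroys the identifying detail.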

It would be better if Meta leveraged its AI to offer this sort of blurring automatically, especially when livestreaming. It’s not that outlandish a request: the company already seems to try to blur faces automatically in footage it captures for annotation, though not always reliably. After all, Google began redacting faces in Street View years ago, following privacy concerns from groups like EFF.

Resist face recognition

Adding facial recognition technology to smartglasses would obliterate the privacy of everyone. We cannot let companies push face recognition into these glasses, and as a user, you should make clear that this is not something you want.

Smartglasses don’t have to be used to decimate the privacy of anyone you encounter during the day. There are legitimate uses out there, but it’s up to those who use them to respect the social norms of the spaces they enter and the people they encounter.

Thorin Klosowski

The Government Must Not Force Companies to Participate in AI-powered Surveillance

The rapidly escalating conflict between Anthropic and the Pentagon, which started when the company refused to let the government use its technology to spy on Americans, has now gone to court. The Department of Defense retaliated by designating the company a “supply chain risk” (SCR). Now, Anthropic is asking courts to block the designation, arguing that the First Amendment does not permit the government to coerce a private actor to rewrite its code to serve government ends.

We agree.

As EFF, the Foundation for Individual Rights and Expression, and multiple other public interest organizations explained in a brief filed in support of Anthropic’s motion, the development and operation of large language models involve multiple expressive choices protected by the First Amendment. Requiring a company to rewrite its code to remove guardrails means compelling different expression, a clear constitutional violation. Further, the public record shows that the SCR designation is intended to punish the company both for pushing back and for its CEO’s public statements explaining that AI may supercharge surveillance practices that current law has proven ill-equipped to address.

As we also explain, the company’s concerns about how the government will use its technology are well-founded. The U.S. government has a long history of illegally surveilling its citizens without adequate judicial oversight based on questionable interpretations of its Constitutional and statutory obligations. The Department of Defense acquires vast troves of personal information from commercial entities, including individuals’ physical location, social media, and web browsing data. Other government agencies continue to collect and query vast quantities of Americans’ information, including by acquiring information from third party data brokers.

A growing body of social science research illustrates the chilling effects of these pervasive activities. Fearing retribution for unpopular views, dissenters stay silent. And AI only exacerbates the problem. AI can quickly analyze the government’s massive datasets, combine that information with data scraped from the internet, purchased through the commercial data broker market, or gathered by local police surveillance devices, and use all of it to construct a comprehensive picture of a person’s life and infer sensitive details like their religious beliefs, medical conditions, political opinions, or even sex partners. For example, an agency could use AI to infer an individual’s association with a particular mosque based on data showing that they visited its website, followed its social media accounts, and were located near the mosque during religious services. AI can also deanonymize online speech by using public information to unmask anonymous users.

It is easy to conceive how an agency, a government employee with improper intent, or a malicious hacker could exploit these capabilities to monitor public discourse, preemptively squelch dissent, or persecute people from marginalized communities. Against this background and absent meaningful changes to the governing national security laws and judicial oversight structure, it is entirely reasonable for Anthropic—or any other company—to insist on its own guardrails.

Without action from Congress, the task of protecting your privacy has fallen in large part to Big Tech—something no one wants, including Big Tech. But if Congress won’t do it, companies like Anthropic must be allowed to step in, without facing retribution.

Corynne McSherry

The SAFE Act is an Imperfect Vehicle for Real Section 702 Reform

The SAFE Act, introduced by Senators Mike Lee (R-UT) and Dick Durbin (D-IL), is the first of many likely proposals we will see to reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA), added by the FISA Amendments Act of 2008. While imperfect, it proposes a litany of real and much-needed reforms to Big Brother’s favorite surveillance authority.

The irresponsible 2024 reauthorization of the secretive mass surveillance authority Section 702 not only gave the government two more years of unconstitutional surveillance powers, it also made the policy much worse. But, now people who value privacy and the rule of law get another bite at the apple. With expiration for Section 702 looming in April 2026, we are starting to see the emergence of proposals for how to reauthorize the surveillance authority—including calls from inside the White House for a clean reauthorization that would keep the policy unchanged. EFF has always had a consistent policy: Section 702 should not be reauthorized absent major reforms that will keep this tactic of foreign surveillance from being used as a tool of mass domestic espionage. 

What is Section 702?

Section 702 was intended to modernize foreign surveillance of the internet for national security purposes. It allows collection of foreign intelligence from non-Americans located outside the United States by requiring U.S.-based companies that handle online communications to hand over data to the government. As the law is written, the intelligence community (IC) cannot use Section 702 programs to target Americans, who are protected by the Fourth Amendment’s prohibition on unreasonable searches and seizures. But the law gives the intelligence community space to target foreign intelligence in ways that inherently and intentionally sweep in Americans’ communications.

We live in an increasingly globalized world where people are constantly in communication with people overseas. That means that while targeting foreigners outside the U.S. for “foreign intelligence information,” the IC routinely acquires the American side of those communications without a probable cause warrant. The collection of all that data from U.S. telecommunications and internet providers results in the “incidental” capture of conversations involving a huge number of people in the United States.

But this backdoor access to U.S. persons’ data isn’t “incidental.” Section 702 has become a routine part of the FBI’s law enforcement mission. In fact, the IC’s latest Annual Statistical Transparency Report documents the many ways the Federal Bureau of Investigation (FBI) uses Section 702 to spy on Americans without a warrant. The IC lobbied for Section 702 as a tool for national security outside the borders of the U.S., but it is apparent that the FBI uses it to conduct domestic, warrantless surveillance on Americans. In 2021 alone, the FBI conducted 3.4 million warrantless searches of U.S. persons’ 702 data.

The Good

Let’s start with the good things that this bill does. These are reforms EFF has been seeking for a long time and their implementation would mean a big improvement in the status quo of national security law.

First, the bill would partially close the loophole that allows the FBI and domestic law enforcement to dig through 702 programs’ “incidental” collection of the U.S. side of communications. The FBI currently operates with a “finders, keepers” mentality: because the data is pre-collected by another agency, the FBI believes it can operate with almost no constraints on using it for other purposes. The SAFE Act would require a warrant before the FBI looks at the content of these collected communications. As we will get to later, this reform does not go nearly far enough, because agents can still query to see what data on a person exists before getting a warrant, but it is certainly an improvement over the current system.

Second, the bill addresses the age-old problem of parallel construction. If you’re unfamiliar with this term, parallel construction is a method by which intelligence agencies or domestic law enforcement find out a piece of information about a subject through secret, even illegal or unconstitutional methods. Uninterested in revealing these methods, officers hide what actually happened by publicly offering an alternative route they could have used to find that information. So, for instance, if police want to hide the fact that they knew about a specific email because it was intercepted under the authority of Section 702, they might use another method, like a warranted request to a service provider, to create a more publicly acceptable path to that information. To deal with this problem, the SAFE Act mandates that when the government seeks to use Section 702 evidence in court, it must disclose the source of this evidence “without regard to any claim that the information or evidence…would inevitably have been discovered, or was subsequently reobtained through other means.”

Next, the bill proposes a policy that EFF and other groups have been trying to get through Congress for over five years: ending the data broker loophole. As the system currently stands, data brokers who buy and sell your personal data collected from smartphone applications, among other sources, are able to sell that sensitive information, including a phone’s geolocation, to law enforcement and intelligence agencies. That means that with a bit of money, police can buy the data (or buy access to services that purchase and map the data) that they would otherwise need a warrant to get. A bill that would close this loophole, the Fourth Amendment Is Not For Sale Act, passed the House in 2024 but has yet to be voted on by the Senate. In the meantime, states have taken it upon themselves to act, with Montana becoming the first state to pass similar legislation in May 2025. The SAFE Act proposes to partially close the loophole, at least as far as intelligence agencies are concerned. This fix could not come soon enough, especially since the Office of the Director of National Intelligence has signaled its willingness to create one big, streamlined digital marketplace where the government can buy data from data brokers.

Another positive thing about the SAFE Act is that it creates an official statutory end to a surveillance power that the government allowed to expire in 2020. In its heyday, the intelligence community used Section 215 of the Patriot Act to justify the mass collection of communication records like metadata from phone calls. Although this legal authority has lapsed, it has always been our fear that it will not sit dormant forever and could be reauthorized at any time. This new bill says that its dormant powers shall “cease to be in effect” within 180 days of the SAFE Act’s enactment.

What Needs to Change 

The SAFE Act also attempts to clarify very important language that gauges the scope of the surveillance authority: who is obligated to turn over digital information to the U.S. government. Under Section 702, “electronic communication service providers” (ECSPs) are on the hook for providing information, but the definition of that term has been in dispute and has changed over time—most recently when a FISA court opinion expanded the definition to include a category of “secret” ECSPs that have not been publicly disclosed. Unfortunately, the bill still leaves room for ambiguous interpretation, and its audit system lacks a clear directive for enforcing limits on who counts as an ECSP or for guaranteeing transparency.

As mentioned earlier, the SAFE Act introduces a warrant requirement for the FBI to read the contents of Americans’ communications that have been warrantlessly collected under Section 702. However, the law does not in its current form require the FBI to get a warrant before running searches identifying whether Americans have communications present in the database in the first place. Knowing this information is itself very revealing and the government should not be able to profit from circumventing the Fourth Amendment. 

When Congress reauthorized Section 702 in 2024, it did so through a piece of policy called the Reforming Intelligence and Securing America Act (RISAA). This bill made Section 702 worse in several ways, one of the most severe being that it expanded the legal uses of the surveillance authority to include vetting immigrants. In an era when the United States government is rounding up immigrants, including people awaiting asylum hearings, and U.S. officials continually threaten to withhold admission to the United States from people whose politics do not align with the current administration, RISAA sets a dangerous precedent. Although RISAA officially expires in April, it would be helpful for any Section 702 reauthorization bill to explicitly prohibit the use of this authority for that purpose.

Finally, in the same way that the SAFE Act statutorily ends the expired Section 215 of the Patriot Act, it should also impose an explicit end to “abouts” collection, a practice of collecting digital communications not because they are to or from targeted people, but because they are “about” specific topics. This practice has been discontinued, but it still sits on the books, just waiting to be revived.

Matthew Guariglia

Privacy's Defender: Launch Party in Berkeley

We're celebrating the launch of Privacy's Defender, a new book by EFF Executive Director Cindy Cohn on Thursday, March 12—and we want you to join us! Cindy has tangled with the feds, fought for your data security, and argued before judges to protect our access to science and knowledge on the internet. In Privacy's Defender she asks: can we still have private conversations if we live our lives online?

Join the festivities for a live conversation between Cindy Cohn and Annalee Newitz followed by a book signing with Cindy.

REGISTER TODAY! 

$20 General Admission for 1
$30 Discounted tickets for 2
$12.50 Student Ticket
All proceeds benefit EFF's mission.

Want your own copy of Privacy's Defender?
Save $10 when you preorder the book with your ticket purchase

WHEN:
Thursday, March 12th, 2026
6:30 pm to 9:30 pm

WHERE:
Ciel Creative Space
Entrance located at:
940 Parker St, Berkeley, CA 94710

6:30 PM Doors Open
7:15 PM Program Begins


About the book

Throughout her career, Cindy Cohn has been driven by a fundamental question: Can we still have private conversations if we live our lives online? Privacy’s Defender chronicles her thirty-year battle to protect our right to digital privacy and shows just how central this right is to all our other rights, including our ability to organize and make change in the world.

Shattering the hypermasculine myth that our digital reality was solely the work of a handful of charismatic tech founders, the author weaves her own personal story with the history of the Crypto Wars, FBI gag orders, and the post-9/11 surveillance state. She describes how she became a seasoned leader in the early digital rights movement, as well as how this work serendipitously helped her discover her birth parents and find her life partner. Along the way, she also details the development of the Electronic Frontier Foundation, which she grew from a ragtag group of lawyers and hackers into one of the most powerful digital rights organizations in the world.

Part memoir and part legal history for the general reader, the book is a compelling testament to just how hard-won the privacy rights we now enjoy as tech users are, but also how crucial these rights are in our efforts to combat authoritarianism, grow democracy, and strengthen other human rights. Learn about the Privacy's Defender book tour.

Parking

Street parking is available around the building.

Accessibility

The main event space is wheelchair accessible, on concrete. Lively music will be playing, and the speakers will be using a microphone, so louder volumes are expected. EFF is committed to improving accessibility for our events. If you will be attending in-person and need accommodation, or have accessibility questions prior to the event, please contact events@eff.org.

Food and Drink

Wine & Beer will be available for purchase. Cellarmaker Brewing Co., located next door to Ciel Space, will be serving food until 8:00 pm. 

Questions?

Email us at events@eff.org.

About the Speakers

Cindy Cohn
Cindy Cohn is the Executive Director of the Electronic Frontier Foundation. From 2000-2015 she served as EFF’s Legal Director as well as its General Counsel.  Ms. Cohn first became involved with EFF in 1993, when EFF asked her to serve as the outside lead attorney in Bernstein v. Dept. of Justice, the successful First Amendment challenge to the U.S. export restrictions on cryptography. 

Ms. Cohn has been named to TheNonProfitTimes 2020 Power & Influence TOP 50 list, honoring 2020's movers and shakers.  In 2018, Forbes included Ms. Cohn as one of America's Top 50 Women in Tech. The National Law Journal named Ms. Cohn one of 100 most influential lawyers in America in 2013, noting: "[I]f Big Brother is watching, he better look out for Cindy Cohn." She was also named in 2006 for "rushing to the barricades wherever freedom and civil liberties are at stake online."  In 2007 the National Law Journal named her one of the 50 most influential women lawyers in America. In 2010 the Intellectual Property Section of the State Bar of California awarded her its Intellectual Property Vanguard Award and in 2012 the Northern California Chapter of the Society of Professional Journalists awarded her the James Madison Freedom of Information Award.  

Ms. Cohn is the author of the professional memoir Privacy's Defender, to be published by MIT Press in March 2026. She is also the co-host of EFF's award-winning podcast, How to Fix the Internet.

 

Annalee Newitz
Annalee Newitz writes science fiction and nonfiction. They are the author of four novels: Automatic Noodle, The Terraformers, The Future of Another Timeline, and Autonomous, which won the Lambda Literary Award. As a science journalist, they are the author of Stories Are Weapons: Psychological Warfare and the American Mind; Four Lost Cities: A Secret History of the Urban Age; and Scatter, Adapt and Remember: How Humans Will Survive a Mass Extinction, which was a finalist for the LA Times Book Prize in science. They are a writer for the New York Times and elsewhere, and have a monthly column in New Scientist. They have published in The Washington Post, Slate, Scientific American, Ars Technica, The New Yorker, and Technology Review, among others. They were the co-host of the Hugo Award-winning podcast Our Opinions Are Correct, and have contributed to the public radio shows Science Friday, On the Media, KQED Forum, and Here and Now. Previously, they were the founder of io9, and served as the editor-in-chief of Gizmodo.

Melissa Srago

EFFecting Change: Privacy's Defender

Join EFF Executive Director Cindy Cohn in conversation with 404 Media Cofounder Jason Koebler to discuss Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, Cindy’s personal story of standing up to the Justice Department, taking on the NSA, and tangling with the FBI to protect our right to digital privacy. The highly anticipated book asks the fundamental question: Can we still have private conversations if we live our lives online? Join the livestream for a live discussion followed by a Q&A.

EFFecting Change Livestream Series:
Privacy's Defender
Thursday, March 19th
11:00 AM - 12:00 PM Pacific
This event is LIVE and FREE!



Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague who might be interested, please invite them to join the fight for digital rights with this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online.

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

About the Speakers

 

Cindy Cohn
Cindy Cohn is the Executive Director of the Electronic Frontier Foundation. From 2000 to 2015 she served as EFF’s Legal Director as well as its General Counsel. Ms. Cohn first became involved with EFF in 1993, when EFF asked her to serve as the outside lead attorney in Bernstein v. Dept. of Justice, the successful First Amendment challenge to the U.S. export restrictions on cryptography. Ms. Cohn has been named to The NonProfit Times 2020 Power & Influence TOP 50 list, honoring 2020's movers and shakers. In 2018, Forbes included Ms. Cohn as one of America's Top 50 Women in Tech. The National Law Journal named Ms. Cohn one of the 100 most influential lawyers in America in 2013, noting: "[I]f Big Brother is watching, he better look out for Cindy Cohn." She was also named in 2006 for "rushing to the barricades wherever freedom and civil liberties are at stake online." In 2007 the National Law Journal named her one of the 50 most influential women lawyers in America. In 2010 the Intellectual Property Section of the State Bar of California awarded her its Intellectual Property Vanguard Award and in 2012 the Northern California Chapter of the Society of Professional Journalists awarded her the James Madison Freedom of Information Award.

Jason Koebler
Jason Koebler is a cofounder of 404 Media, a journalist-owned investigative tech publication. He reports on surveillance and privacy, the ways that artificial intelligence is changing the internet, labor, and society, and consumer rights. Before 404 Media, he was the editor-in-chief of Motherboard, VICE's technology publication, and an executive producer on Encounters, a Netflix documentary about the search for alien life.





Melissa Srago

Admiring Our Heroes for International Women’s Day: Celebrating Women Who Have Received EFF Awards 

4 days 10 hours ago

For the last hundred years, women have had pivotal and far too often unsung roles in building and shaping the technology that we now use every day. Many have heard of Ada Lovelace’s contributions to computer programming, but far fewer know Mary Allen Wilkes, a prominent modern programmer who wrote much of the software for the LINC, one of the world’s first interactive personal computers (it could fit in a single office and cost $40,000, but it was the 1960s). Decades earlier, when ENIAC, the first all-electronic digital computer, was built in the 1940s, the “software” for it was written by women: Kathleen McNulty, Jean Jennings, Betty Snyder, Marlyn Wescoff, Frances Bilas and Ruth Lichterman.

It’s thankfully become more common knowledge that actor and inventor Hedy Lamarr co-created the concept of "frequency-hopping" that became a basis for radio systems from cell phones to wireless networking systems. But too few know Laila Ohlgren, who in the 1970’s solved a major problem with the development of mobile networks and phones by recognizing that dialed numbers could be stored and sent all at once with a “call button,” rather than sent one number at a time, which created connection issues before a call was even made. 

Women in tech deserve more and brighter spotlights. At EFF, we’ve had the honor of celebrating some of our heroes at our annual EFF Awards, including many women who are leading the digital rights community. For International Women’s Day, we’re highlighting the contributions of just a few of these recipients from the last decade, whose work to protect privacy, speech, and creativity online has had a global impact.

Carolina Botero (EFF Award Winner, 2024) 

Carolina Botero is a leader in the fight for digital rights in Latin America. For over a decade, she led the Colombia-based Karisma Foundation and cultivated its regional and international impact. Botero and Karisma helped connect indigenous peoples to the internet and made it possible to contribute content to Wikipedia in their native language, expanding access to both history and modern information. They built alliances to combat disinformation, pushed for legal tools to protect cultural and heritage institutions from digital blackholes, and were, and remain, a necessary voice speaking for human rights in the online world. EFF worked closely with Karisma and Botero to help free Colombian graduate student Diego Gomez, who shared another student’s Master’s thesis with colleagues over the internet. Diego’s story demonstrates what can go wrong when nations enact severe penalties for copyright infringement, and thanks to work from Karisma, many partners, and many EFF supporters, he was cleared of the criminal charges that he faced for this harmless act of sharing scholarly research.

Carolina Botero receiving her EFF Award

Botero stepped down from the role in 2024, opening the door for a new generation. While her work continues—she’s currently on the advisory board of CELE, the Centro de Estudios en Libertad de Expresión—her EFF Award was well-deserved recognition of a strong and inspiring legacy for those in Latin America and beyond who advocate for a digital world that enhances rights and empowers the powerless. Learn more about Botero on her EFF Awards page and in the recap of the 2024 event.

Chelsea Manning (EFF Award Winner, 2017)

Chelsea Manning became famous as a whistleblower: In 2010, she disclosed classified Iraq War documents, including a video of the killings of Iraqi civilians and two Reuters reporters by U.S. troops. These documents exposed aspects of U.S. operations in Iraq and Afghanistan that infuriated the public and embarrassed the government. But she is also a transparency and transgender rights advocate, network security expert, author, and former U.S. Army intelligence analyst. 

Manning joined the military in 2007. Her role as an intelligence analyst to an Army unit in Iraq in 2009 gave her access to classified databases, but more importantly, it gave her a uniquely comprehensive view of the war in Iraq, and she became increasingly disillusioned and frustrated by what she saw, versus what was being shared. In 2010, she approached major news outlets hoping to give information to them that would reveal a new side of the war to the public. Ultimately, she shared the documents with Wikileaks. 

Manning’s bravery did not end there. When she was arrested a few months later, she endured "cruel, inhuman and degrading" treatment, according to the UN Special Rapporteur on torture. She was locked up alone for 23 hours a day over an 11-month period, before her trial. The mistreatment resulted in public outcry and advocacy by organizations like Amnesty International. Even a State Department spokesperson, Philip Crowley, criticized the treatment as "ridiculous, counterproductive, and stupid," and resigned. She was moved to a medium-security facility in April 2011. 

The government’s charges against Manning were outrageous, but in 2013 she was convicted of 19 of 22 counts as a result of her whistleblowing activities. She became one of fewer than a dozen people prosecuted for espionage in the entire history of the United States, and she was sentenced to the longest punishment ever imposed on a whistleblower. Then, the day after her conviction, isolated from her community and in all likelihood expecting to remain in prison for years if not decades, she courageously issued a statement identifying herself as a trans woman, which she’d wanted to reveal for years.

Over the next several years, while imprisoned, she became an advocate both for government transparency and for transgender rights. Her conviction and sentence pointed to the need for legal reform of both the Computer Fraud and Abuse Act (CFAA) and the Espionage Act.  EFF filed an amicus brief to the U.S. Army Court of Criminal Appeals arguing that the CFAA was never meant to criminalize violations of private policies like those of government systems, and EFF also pushed, and continues to fight for, narrower interpretations of the Espionage Act and stronger protections for whistleblowers, particularly to take into account both the motivation of individuals who pass on documents and the disclosure’s ramifications. 

Even after President Obama commuted her sentence in 2017, and EFF celebrated her work and her release with an EFF award in September 2017, her fight wasn’t over. She was imprisoned again twice in 2019 and ultimately fined $256,000 for refusing to testify before grand juries investigating WikiLeaks founder Julian Assange. The U.N. Special Rapporteur on torture again criticized Manning’s treatment, writing that "the practice of coercive detention appears to be incompatible with the international human rights obligations of the United States."

Manning was released in 2020 after having spent almost a decade in total imprisoned for her courage. She wrote a memoir, README.txt, in 2022, to take back control over her story.

EFF Award Winners Mike Masnick, Annie Game, and Chelsea Manning

Annie Game (EFF Award Winner, 2017)

Annie Game spent over 16 years as the Executive Director of IFEX, a global network of journalism and civil liberties organizations working together to defend freedom of expression. IFEX (formerly International Freedom of Expression Exchange) began in the 1990s, when a group of organizations and the Canadian Committee to Protect Journalists came together to consider how to respond as a single voice to free-expression violations around the world. IFEX is now a global hub for the protection of free speech and journalism.

Game recognized early on that digital rights and freedom of expression groups needed one another. Under her leadership, IFEX paired more traditional free-expression organizations with their more digital counterparts, with a focus on building organizational security capacities. IFEX initiatives under Game’s leadership have been expansive. For example, the International Day to End Impunity for Crimes against Journalists, November 2, has been an annual wake-up call and reminder for UN member states to live up to their commitments to protecting journalists. UNESCO has observed that more than 1,700 journalists were killed globally between 2006 and 2024, and that nearly 90% of these cases went unsolved in the courts.

Game and IFEX have also focused on high-profile cases of journalists threatened by governments for their work, such as Bahey eldin Hassan in Egypt. Bahey is the director of the Cairo Institute for Human Rights Studies (CIHRS) and has advocated for freedom of expression and the basic human rights of Egyptians, but has lived in exile since 2014. The charges against him, of “disseminating false information” and “insulting the judiciary,” are common tactics of intimidation and harassment. Bahey’s supposed crimes were sharing social media posts criticizing the Egyptian judiciary’s lack of independence, and speaking about the killing in Egypt of Italian researcher Giulio Regeni. Bahey—an IFEX member—is just one of many reporters and human rights workers in danger when they speak. But when journalists and those defending their rights online speak out as one voice, as IFEX helps them do, it makes a difference.

Another initiative has been the Faces of Free Expression project, a partnership between IFEX and the International Free Expression Project. If you’re looking for more heroes, this project details the stories of “risk-takers and change-makers – individuals who put their careers, their freedom, their safety, and sometimes even their lives on the line,” while reporting, or defending free expression and the right to information. 

Wherever authoritarianism and repression of speech have been on the rise, Game has unapologetically called out injustices and made it safer for journalists to do their work, while ensuring accountability when crimes are committed. The work is more critical now than ever, and since leaving IFEX in 2022, she’s remained an activist while focusing increasingly on environmental protection. 

Twelve More Heroes 

EFF has honored many more women with awards over the years—from Anita Borg and Hedy Lamarr to Amy Goodman and Beth Givens. This blog from 2012 looks back and acknowledges the important contributions from twelve more EFF Award winners. 

We’ve also asked five women at EFF about women in digital rights, freedom of expression, technology, and tech activism who have inspired us. You can read that here.

Donate to Support EFF's Work

Your donations empower EFF to do even more.

Jason Kelley

Admiring Our Heroes for International Women’s Day: Five Women In Tech That EFF Admires

4 days 12 hours ago

In honor of International Women’s Day, we asked five women at EFF about women in digital rights, freedom of expression, technology, and tech activism who have inspired us.  

Anna Politkovskaya 

Jillian York, Activist 
This International Women’s Day, I want to honor the memory of Anna Politkovskaya, the Russian investigative journalist who relentlessly exposed political and social abuses, endured harassment and violence for her work, and was ultimately killed for telling the truth. I had just started my career when I learned of her death, and it forced me to confront that freedom of expression isn’t an abstract principle but rather something people risk—and sometimes lose—their lives for. 

Her story reminds me that journalism at its best is an act of moral courage, not just a profession. In the face of threats, poison, and relentless pressure to stay silent, she chose to continue writing about what she saw, insisting that ordinary people’s lives were worth the world’s attention. She refused to compromise with power, even when she knew it could cost her life. To me, defending freedom of expression means defending those like Anna who bear witness to injustice, prioritize truth, and hold power to account for those whose voices are silenced.  

Cindy Cohn 

Corynne McSherry, Legal Director 
There are so many women who have shaped tech history—most of whom are still unsung heroes—that it’s hard to single out just one. But it’s easier this year because it’s a chance to celebrate my boss, Cindy Cohn, before she leaves EFF for her next adventure.  

Cindy has been fighting for our digital rights for 30 years, leading EFF’s legal work and eventually the whole organization. She helped courts understand that code is speech deserving of constitutional protections at a time when many judges weren’t entirely sure what code even was. She led the fight against NSA spying, and even though outdated and ill-fitting doctrines like the state secrets privilege prevented courts from ruling on the obvious unconstitutionality of the NSA’s mass surveillance program, the fight itself led to real reforms that have expanded over time.   

I’ve worked closely with her for much of her EFF career, starting in 2005 when we sued Sony for installing spyware in millions of computers, and I’ve seen firsthand her work as a visionary lawyer, outstanding writer, and tireless champion for user privacy, free expression, and innovation. She’s also warm and funny, with the biggest heart in the world, and I’m proud to call her a friend as well as a mentor.  

Donate to Support EFF's Work

Your donations empower EFF to do even more.

Jane

Sarah Hamid, Activist 
When talking about women in tech, we usually mean founders, engineers, and executives. But just as important are the women who quietly built the practices that underpin today’s movement security culture. 

For as long as social movements have organized in the shadow of state surveillance, women have been designing the protocols, mutual aid networks, and information flows that keep people alive. Those threats feel ever-escalating: fusion‑center monitoring of protests, federal agencies infiltrating and subpoenaing encrypted Signal and social media chats, prosecutors mining search histories.  

In the late 1960s and early 1970s, the underground Jane abortion counseling service—formally the Abortion Counseling Service of Women’s Liberation—built what we would now recognize as a feminist infosec project for abortion access. Jane connected an estimated 11,000 people with safer abortions before Roe v. Wade, using a single public phone number—Call Jane—paired with code names, compartmentalized roles, and minimal records so no one person held the full story of who needed care, who was providing it, and where. When Chicago police raided the collective in 1972, members destroyed their index‑card files rather than let them become a ready‑made map of patients and helpers—an analog secure‑deletion choice that should feel familiar to anyone who has ever wiped a phone or locked down a shared drive. 

The lesson we should take from Jane is a set of principles that still hold in our encrypted‑but‑insecure present: Collect less, separate what you do collect, and be ready to burn the file box. When a search query, a location ping, or a solidarity post can become evidence, treating information as both lifeline and liability is not paranoia—it is care work.  

Ebele Okobi

Babette Ngene, Director of Public Interest Technology 
In the winter of 2013, I had just landed my first job at the intersection of tech and human rights, working for a prominent nonprofit, and I was encouraged to attend regular tech and policy events around town. One such event on internet governance was happening at George Washington University, focusing on multi-stakeholder engagement on internet policy and governance issues, with companies, nonprofits, and government representatives in attendance. I was inexperienced with these topics, and I’ll admit I was a bit intimidated.

Then I saw her. She was the only woman on the opening panel, an African woman, an accomplished woman. Not only was she a respected lawyer at Yahoo at the time, but her impressive background, presence, and confident speaking style immediately inspired me. She made me feel like I, too, belonged in that room and could become a powerful voice. 

Ebele Okobi would go on to become one of the most powerful and respected voices in the tech and human rights space, known for her advocacy for digital rights and responsible innovation across Africa and the broader global majority during her tenure at Facebook. Beyond her corporate advocacy, Ebele has consistently championed ethical technology and social justice. She embodies the leadership qualities I value most: empathy, speaking truth to power, integrity, and authenticity. 

I remain in the tech and human rights space because I saw her, because seeing her made me feel seen. Representation truly does matter.  

Ada Lovelace 

Allison Morris, Chief Development Director 
I’m not a lawyer, activist, or technologist; I’m a fundraiser and a lover of stories. And what storyteller at EFF couldn’t help but love Ada Lovelace? The daughter of Lord Byron—the human embodiment of Romanticism—Ada was an innovator in math and science and, ultimately, the writer of the first computer program.  

Lovelace saw the potential in Charles Babbage’s theoretical General Purpose Computer (which was never actually built) and created the foundations of modern computing long before the digital age. In creating the first computer code, Lovelace took Babbage’s concept of a machine that could perform mathematical calculations and realized that it could manipulate symbols as well as numbers. 

Given the expectations of women in her time and the controversy of what work should be attributed to Lovelace as opposed to the man she often worked with, I can’t help but be inspired by her story.  

Women in tech deserve more and brighter spotlights. At EFF, we’ve had the honor of celebrating some of our heroes at our annual EFF Awards, including many women who are leading the digital rights community. For International Women’s Day, we also highlighted the contributions of just a few of these recipients from the last decade, whose work to protect privacy, speech, and creativity online has had a global impact.

Allison Morris

Weasel Words: OpenAI’s Pentagon Deal Won’t Stop AI‑Powered Surveillance

4 days 19 hours ago

OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance—early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal—Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in. 

“Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States. 

The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.

Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom? 

Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.

OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.

OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to help ensure that the government uses AI tools only in ways consistent with democratic processes. But based on what we know so far, that hope seems very naïve.

Moreover, that naïveté is dangerous. In a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind their commitments. After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.

OpenAI isn’t the only consumer-facing company that is, on the one hand, seeking to reassure the public that it isn’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing double-speak, it is very clear that companies simply cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people—whether CEOs or Pentagon officials—to protect our civil liberties.

Corynne McSherry

The Government Uses Targeted Advertising to Track Your Location. Here's What We Need to Do.

5 days 21 hours ago

We've all had the unsettling experience of seeing an ad online that reveals just how much advertisers know about our lives. You're right to be disturbed. New reporting has confirmed that those very same online ad systems have been used by the government to warrantlessly track people's locations.

For years, the internet advertising industry has been sucking up our data, including our location data, to serve us "more relevant ads." At the same time, we know that federal law enforcement agencies have been buying up our location data from shady data brokers that most people have never heard of.

Now, a new report gives us direct evidence that Customs and Border Protection (CBP) has used location data taken from the internet advertising ecosystem to track phones. In a document uncovered by 404 Media, CBP admits what we’ve been saying for years: The technical systems powering creepy targeted ads also allow federal agencies to track your location.

The document acknowledges that a program by the agency to use "commercially available marketing location data" for surveillance drew from the process used to select the targeted ads shown to you on nearly every website and app you visit. In this blog post, we'll tell you what this process is, how it can and is being used for state surveillance, and what can be done about it—by individuals, by lawmakers, and by the tech companies that enable these abuses.

Advertising Surveillance Enables Government Surveillance

The online advertising industry has built a massive surveillance machine, and the government can co-opt it to spy on us. 

In the absence of strong privacy laws, surveillance-based advertising has become the norm online. Companies track our online and offline activity, then share it with ad tech companies and data brokers to help target ads. Law enforcement agencies take advantage of this advertising system to buy information about us that they would normally need a warrant for, like location data. They rely on the multi-billion-dollar data broker industry to buy location data harvested from people’s smartphones.

We’ve known for years that location data brokers are one part of federal law enforcement's massive surveillance arsenal, including immigration enforcement agencies like CBP and Immigration and Customs Enforcement (ICE). ICE, CBP and the FBI have purchased location data from the data broker Venntel and used it to identify immigrants who were later arrested. Last year, ICE purchased a spy tool called Webloc that gathers the locations of millions of phones and makes it easy to search for phones within specific geographic areas over a period of time. Webloc also allows them to filter location data by the unique advertising IDs that Apple and Google assign to our phones.

But a document recently obtained by 404 Media is the first time CBP has acknowledged the location data it buys is partially sourced from the system powering nearly every ad you see online: real-time bidding (RTB). As CBP puts it, “RTB-sourced location data is recorded when an advertisement is served.” 

Even though this document is about a 2019-2021 pilot use of this data, CBP and other federal agencies have continued to purchase and use commercially obtained location data. ICE has purchased location tracking tools since then and recently requested information on “Ad Tech” tools it could use for investigations. 

The CBP document acknowledges two sources of location data that it relies on: software development kits (SDKs) and RTB, both methods of location-tracking that EFF has written about before. Apps for weather, navigation, dating, fitness, and “family safety” often request location permissions to enable key features. But once an app has access to your location, it could share it with data brokers directly through SDKs or indirectly (and often without the app developers' knowledge) through RTB. Data brokers can collect location data from SDKs that they pay developers to put in their apps. When relying on RTB, data brokers don’t need any direct relationship with the apps and websites they’re collecting location data from. RTB is facilitated by ad companies that are already plugged into most websites and apps. 

Donate to Support EFF's Work

Your donations empower EFF to do even more.

How Real-Time Bidding Works

RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your information, including location data, to thousands of companies a day. At a high-level, here’s how RTB works:

  1. The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. 
  2. This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. 
  3. The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people. 
  4. Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space. 
  5. The highest bidder gets to display an ad to you, but advertisers (or the ad tech companies that represent them) can collect your bidstream data regardless of whether they bid on the ad space.
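
The bid request in steps 2 and 3 is easiest to picture as a nested data structure. Below is a minimal Python sketch, loosely modeled on the OpenRTB format that real auctions use; every field name and value here is an illustrative assumption, not an exact schema. The hypothetical `harvest` function shows why losing bidders still matter: nothing stops any recipient of the broadcast from logging the identifiers and coordinates.

```python
# A simplified, illustrative bid request, loosely modeled on the OpenRTB
# format used in real auctions. All field names and values are made-up
# examples, but the categories match the list above.
bid_request = {
    "id": "auction-8f3a",                      # unique ID for this auction
    "app": {"bundle": "com.example.weather"},  # the app serving the ad
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "ip": "203.0.113.7",
        "geo": {"lat": 37.7749, "lon": -122.4194},      # GPS coordinates
        "os": "Android",
        "model": "Pixel 8",
    },
    "user": {"interests": ["fitness", "travel"]},  # inferred profile data
}

def harvest(request):
    """What any auction participant can log, whether or not it wins."""
    device = request["device"]
    return (device["ifa"], device["geo"]["lat"], device["geo"]["lon"])

# Any of the thousands of companies receiving the broadcast could do this:
ad_id, lat, lon = harvest(bid_request)
```

Because the advertising ID is stable across apps, a data broker that collects many such tuples over time can reassemble a single phone's movement history.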

A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. For example, the FTC found that location data broker Mobilewalla collected data on over a billion people, with an estimated 60% sourced from RTB auctions. Leaked data from another location data broker, Gravy Analytics, referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers and religious-focused apps. When confronted, several of these apps’ developers said they had never heard of Gravy Analytics. 

As Venntel, one of the location data brokers that has sold to ICE, puts it, “Commercially available bidstream data from the advertising ecosystem has long been one of the most comprehensive sources of real-time location and device data available.” But the privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast the average person’s data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately exploited. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used. 

What You Can Do To Protect Yourself

Revelations about the government's exploitation of this location data show how dangerous online tracking has become, but we’re not powerless. Here are two basic steps you can take to better protect your location data:

  1. Disable your mobile advertising ID (see instructions for iPhone/Android). Apple and Google assign unique advertising IDs to each of their phones. Location data brokers use these advertising IDs to stitch together the information they collect about you from different apps. 
  2. Review apps you’ve granted location permissions to. Apps that have access to your location could share it with other companies, so make sure you’re only granting location permission to apps that really need it in order to function. If you can’t disable location access completely for an app, limit it to only when you have the app open or only approximate location instead of precise location. 

For more tips, check out EFF’s guide to protecting yourself from mobile-device based location tracking. Keep in mind that the security plan that’s best for you will vary in different situations. For example, you may want to take stronger steps to protect your location data when traveling to a sensitive location, like a protest. 

What Tech Companies and Lawmakers Must Do

Legislators and tech companies must act so that individuals don’t bear the burden of defending their data every time they use the internet.

Ad tech companies must reckon with their role in warrantless government surveillance, among other privacy harms. The systems they built for targeted advertising are actively used to track people’s location. The best way to prevent online ads from fueling surveillance is to stop targeting ads based on detailed behavioral profiles. Ads can still be targeted contextually—based on the content people are viewing—without collecting or exposing their sensitive personal information. Short of moving to contextual advertising, tech companies can limit the use of their systems for government location tracking by:

  • Stopping the use of precise location data for targeted advertising. Ad tech companies facilitating ad auctions can and should remove precise location data from bid requests. Ads can be targeted based on people’s coarse location, like the city they’re in, without giving data brokers people’s exact GPS coordinates. Precise location data can reveal where we work, where we live, who we meet, where we protest, where we worship, and more. Broadcasting it to thousands of companies a day through RTB is dangerous.
  • Removing advertising IDs from devices, or at minimum, disabling them by default. Advertising IDs have become a linchpin of the data broker economy and are actively used by law enforcement to track people’s location. Advertising IDs were added to phones in 2012 to let companies track you, and removing them is not a far-fetched idea. When Apple forced apps to request access to people’s advertising IDs starting in 2021 (if you have an iPhone you’ve probably seen the "Ask App Not to Track" pop-ups), 96% of U.S. users opted out, essentially disabling advertising IDs on most iOS devices. One study found that iPhone users were less likely to be victims of financial fraud after Apple implemented this change. Google should follow Apple’s lead and disable advertising IDs by default.

Lawmakers also need to step up to protect their constituents' privacy. We need strong, federal privacy laws to stop companies from spying on us and selling our personal information. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move.

Legislators can and must also close the "data broker loophole" on the Fourth Amendment. Instead of obtaining a warrant signed by a judge, law enforcement agencies can just buy location data from private brokers to find out where you've been. Last year, Montana became the first state in the U.S. to pass a law blocking the government from buying sensitive data it would otherwise need a warrant to obtain. And in 2024, Senator Ron Wyden's EFF-endorsed Fourth Amendment is Not for Sale Act passed the House before dying in the Senate. Others should follow suit to stop this end-run around constitutional protections.

Online behavioral advertising isn’t just creepy; it’s dangerous. It's wrong that our personal information is being silently harvested, bought by shadowy data brokers, and sold to anyone who wants to invade our privacy. This latest revelation of warrantless government surveillance should serve as a wakeup call about just how dangerous online behavioral advertising has become.

Donate to Support EFF's Work

Your donations empower EFF to do even more.

Lena Cohen

Speaking Freely: Shin Yang

6 days 15 hours ago

*This interview has been edited for length and clarity.

David Greene: Shin, please introduce yourself to the Speaking Freely community.

 Shin Yang: My name is Shin Yang. I am a queer writer with a legal background and experience in product management. I am the steward of Lezismore, an independent, self-hosted, open-source community for sexual minorities in Taiwan. For the past decade, I have focused on platform governance as infrastructure, with a particular emphasis on anonymity, minimal data collection, and behavior-based accountability, so that people can speak about intimacy and identity without fear of extraction or exposure. I am a community architect and builder, not an influencer. I’ve spent most of the past decade working anonymously building systems, designing governance protocols, and holding space for others to speak while keeping myself in the background.

 DG: Great. And so let’s talk about how that work intersects with freedom of expression as a principle, and your own personal feelings about freedom of expression. And so with that in mind, let me just start with a basic question, what does freedom of expression mean to you?

 SHIN: For me, free expression is about possibility, and possibility always contains multiple ends: the beautiful ones and the brutal ones, in nearly equal measure. You cannot speak only about the beautiful or good things. I think it’s not about pushing discomfort out of the room. If we refuse all discomfort, we end up in echo chambers, which are safe and predictable, but dead. What matters to me is the equipment and the principles that carry people through that discomfort: self-discipline, mutual support, and the infrastructure and governance that let people grow over time. That keeps a workable gray space open: room to make mistakes, learn, repair, and keep speaking.

 DG: How does that resonate with you personally? Why are you passionate about that?

 SHIN: Around 2013 in Taiwan's context, when Facebook started to take over the digital ecosystem in Taiwan, many local independent bulletin boards (BBS) that had been formed for sexual minorities were shut down because they had no income from advertisements, and people were pushed into mainstream platforms—like Facebook, Instagram, Meta, whatever, Twitter now X—where sexual expression was usually reported or flagged, and where I watched sharp intra-community exclusionary voices saying “bisexual and trans people were not pure enough”, or that talking openly about sex would harm our image, or that it was inappropriate to children, or it would invite harassment. Those oppressions are even fiercer within the queer community itself, which is self-censoring in order to gain approval from mainstream society.

 So, the community itself says that the best way to do it is don't talk about it. Never talk about it. Never mention a single thing about it. It was a wakeup call for me, because I think it's not right. And also, there's another more private story for me, it's a story I heard from our sexual minority community. I once heard about a butch student who was sexually assaulted by a group of men because she dated a beautiful classmate, a beautiful woman in the class.

 And when I learned what happened to her, that story changed my focus. Because, you know, when people hear this kind of story, they always focus on punishing those men, punishing those criminals—but what matters for me most is building conditions where someone like her could someday still have a chance at intimacy on her own terms, and finally be free from fear. That's more important for me. I may never meet her, but I know who I am and what I'm here to build. I have been building an infrastructure –– not just “safe space” as a slogan, but an “ecospace” designed to make survival and growth possible. So that's why I believe that a well-governed space is what matters for communities now.

 DG: Why is it so important for sexual minorities to have forums where they can communicate in that way? When it was just the bulletin boards, before social media, what worked really well and what didn’t work well?

 SHIN: That’s a wonderful question. Okay, the bulletin boards I used before had a simple registration process. You just needed an email, and sometimes which school you were at, because many boards were school-based.

 What I miss about bulletin boards is the sense of structure. You didn’t enter a personalized feed—you entered a place with visible rooms and topics, big boards alongside smaller ones, so you could sense and feel the whole structure of the community. Even in the spaces you visited daily, you’d encounter views you didn’t like, and you had to live with that—and learn how to argue, or leave, or build something parallel. That’s the everyday practice of civic democracy. In some boards, moderators were community-chosen—people could vote for moderators or even recall them. Not perfect democracy, but civic practice.

 DG: You mean, the community can ask them to leave the bulletin boards?

 SHIN: No, they don't actually leave the bulletin board. It's more that the moderator no longer has the right to perform administrative tasks, but they can still be part of the community, and ordinary users can vote in the election for this.

 DG: Okay, and then what were the shortcomings of the bulletin boards?

 SHIN: Yeah, it’s brutal. Really brutal. And I’ve seen people literally organize to push others out. I didn’t expect this to turn into story time, but I actually love this. So—back in Taiwan, we had this big BBS forum called PTT. There was a board called the ‘Sex’ board, where people could talk about sexual topics and share sexual health info. But around 2010, the space was dominated by mainstream straight cis men. And whenever a woman or a sexual minority posted anything, they often got harassed or attacked. So, women created another board inside the forum—basically a separate space—called ‘Feminine Sex.’ And from then on, the original Sex board and the Feminine Sex board were in conflict all the time. And honestly, if this happened today on Facebook, Threads, or X… we’d just block each other. Easy. Clean. Done.

But the problem is: when blocking becomes the default, we don’t really learn how to argue well, how to organize our reasons, or even how to sit with discomfort and understand why the other side thinks the way they do. We lose that practice—because it’s just so easy to delete people from our world now. I’m not saying blocking is always wrong. But there’s a trade-off.

 DG: I get that. Then when Facebook and the other social media platforms that followed came along and the users migrated over to the commercial services, what was lost? 

 SHIN: What was lost? I think our behavior got shaped—personal branding became the default setting for joining an online community. If you don't do it, like me, you basically don't exist.  Influence can be shaped by the number of social media followers; people define each other based on this. Choosing not to obey the logic of mainstream platforms means being unseen, and being unseen means having no influence.

And sure, personal branding can be useful—but I don’t believe it’s the only way to express yourself or connect with a community. The problem is, on mainstream platforms, the whole system is built for visibility. So clout becomes the game. Look at what they push: stories, reels, short-form visuals. And as a former product manager, I can tell you—this is not accidental. It’s designed. It’s designed around human nature: to avoid friction as much as possible. So they keep you scrolling, to make reacting effortless. One tap and you’ve sent a smiley face. Engagement becomes easier… but also cheaper.

And the scary part is, people start thinking that’s the whole internet. It’s not. But the more we get trained by these interfaces, the harder it becomes to even imagine other ways of building community. It is becoming more difficult for people to imagine that the “right” amount of friction can actually help us grow and coexist with diversity.

 DG: So did you find that there were certain things you couldn't talk about on Facebook or on the other social media platforms because they were sexual, because sexual speech was not as welcome as it was earlier?

 SHIN: Yes, when I first started building my community, I knew nothing about technology. Like everyone else, I just created a fan page on Facebook, which was then flagged and deleted. This happened. I think it still happens to this day. At first, I was so angry about it. I felt it was unjust. But every time I wrote to Facebook, they just said that I had violated the user terms. At first I was furious. But I don’t stop at anger. I dig deeper. I thought, “Why do you say I violated the user terms?”

I read the terms, compared policies across platforms and applications, and realized the pattern: All of the terms of use forbid adult or erotic content in fine print. Because these are profit-driven systems optimized to minimize legal and business risk. So, I don’t frame it as “evil platforms.” I frame it as incentives. Once I understood this, I realized that we should not only protest and ask those big tech platforms to “give” us a voice –– that's a good approach, but it shouldn't be the only one. I believe we should build our own community. That's why I started researching open-source software and building my own self-hosted community.

 DG: Please talk a little bit more about what you're building, and how what you're building is consistent with your view of free expression.

 SHIN: Sure. It’s a long process but the reason why I use open-source software is, for a person knowing nothing about technology, I can come to the open-source community and ask questions about it. It’s more reliable than building it myself.

 And the second example is about how I designed Lezismore’s registration and community access, mostly through trial and error.

 We don’t require any real-name or ID verification. In fact, you can register with just an email. But instead of “verifying people,” we redesigned the "space".

 Lezismore is built as a two-layer structure. The main website is searchable, but it looks almost… boring on purpose—advocacy articles, writers’ posts, slow content. The truly active community space is inside that main site, and the entry point is not something you casually discover through search. Most people learn how to get in through word of mouth. We also block search engines, bots, and crawlers from the community area. So from day one, we gave up visibility on purpose—we traded reach for resilience.

 Then there’s the onboarding. New users go through an “apprenticeship” period. You can’t immediately post, comment, or DM people. You first have to read, observe, and understand how the community works. We don’t even tell you exactly how long it takes—you just have to be patient. In the fast-content era, people constantly complain that this is “annoying” or “hard to use.” And yes, it is friction indeed.

 But that friction buys something valuable: a space that can stay anonymous, inclusive, and high-trust—without being instantly overwhelmed by harassment or bad-faith users. It also means we don’t need to depend on Big Tech’s third-party verification APIs. With relatively low technical cost, we’re using governance design—not data collection—to balance inclusion and protection.

And honestly, as a platform owner, I have to be real about what users “actually” need. If this was truly “just terrible UX,” the site wouldn’t survive in today’s hyper-competitive platform environment. But Lezismore has been running for over a decade, and we still have tens of thousands of people quietly reading and interacting every month. This is one of the biggest tradeoffs in my governance design. In an attention economy, choosing low visibility is a bold decision, and maintaining it has a real cost.

 On top of that, we rely on human, context-based moderation. We use posts, replies, and Q&A threads to actively teach community norms—why diversity and conflict exist, how to handle risk, and how to protect yourself. Users also share practical safety tips and real interaction experiences with each other. There are many more small mechanisms built into the system, but that’s the core logic.

 And there’s one more layer: the legal environment. In Taiwan, the legal climate around sex and speech can create chilling effects for smaller platforms. Platform owners can be criminally liable in certain scenarios. That’s exactly why governance design matters—it’s how we keep lawful expression possible without over-collecting data.

 DG: Ah, so you need to be careful. I’m curious whether you’ve had any examples of offline repression. Do you have any experiences with censorship or feeling like you didn’t have your full freedom of expression in your offline experiences? Any experiences that might inform what an ideal online community might look like?

 SHIN: Yes—actually, most of my earliest experiences with repression were offline, and they shaped how I later understood the internet as an escape route.

 Back when I was a high school student, I was already involved in student movements and gender-related advocacy. One very concrete example was dress codes. The school restricted what female students could wear, and students organized to push for change. At one point we even had a vote—something like 98% of students supported revising the policy. But when the issue entered the “official” system, the administration simply ignored it. They bypassed procedure, dismissed the consensus, and used authority to shut it down completely.

That was my first clear lesson about repression: it’s not always someone telling you “you’re forbidden to speak.” Sometimes it’s a system designed so that even if students, women, or sexual minorities spend enormous effort building agreement, once our voices enter the institution, they can be treated as if they don’t exist.

That’s why, in the early 2010s, online space became my breakthrough. This was still the blog era, before social platforms fully standardized everything, and even before “share” mechanisms were built into everyday activism. I started experimenting with things like blog-based petitions, and a lot of students joined. The internet became a way to bypass institutional gatekeeping.

In college, I saw another layer. There was serious sexism from people in authority—military-style discipline officers, some teachers, and administrators. When gender-related controversies happened on campus, the media sometimes showed up and reported in ways that were harmful: exposing people, sensationalizing stories, and ignoring the realities of sexual minority students. Meanwhile, the administration would shut down student demands with authority, and at the same time use incentives and pressure behind the scenes, especially around housing or “benefits”—so some student representatives were afraid to speak honestly in meetings.

And this was before livestreaming was a normal tool. But even then, I was already using audio-based live channels to connect students across campuses. Online networks became a lifeline for young advocates, especially those of us who didn’t “fit” the institution and needed each other to survive.

I came from a literature background. I had zero technical training at the beginning. But I’ve always been the kind of person who loves trying new technology. And I was lucky, because I was born in that strange window when the internet was rapidly expanding, but not yet fully swallowed by Big Tech. So, I grew up in this tension between nostalgia and innovation, and I kept pushing, resisting, and experimenting. I’ve experienced both sides of speech: how beautiful freedom can be, and how terrifying it can become. 

 DG: Going back to Lezismore, I’m curious: When you ask people to observe before they post, what are you hoping they learn about the community before they more actively participate in it?

 SHIN: I hope people understand that this is a community rather than a dating app focused on results. The community needs people to support and nurture each other. Some people see us as a dating app and expect a frictionless experience; naturally, they are disappointed. If you're only looking for a fast-food relationship, that's fine. Here, however, it is a community that offers more than just hooking up. The design focuses on words and a person’s behavioral history rather than just a photo. Dopamine bombing is not how we do things here.

 We’ve also built a library of community safety notes, FAQs, and governance reminders over time. Some written by the team, some contributed by members. Not everyone reads them, and that’s fine. But the design makes it easier for people who want a slower, more intentional space to stay—and for people who want something frictionless to self-select out.

 SHIN: I run the platform anonymously by design. People may know that there’s an admin called “Shin”, but I don’t associate a face or personal brand with the role because I don’t want the community to depend on my visibility for their trust.

 We maintain a clear distinction between work and private life. Admin power is never a shortcut to social capital. In a sex-positive space, this boundary is a matter of ethics. The moment a founder’s identity becomes central, the space starts to orbit that person, and expectations, fan-service dynamics and power asymmetries creep in. Then speech becomes performance.

It also means I’m less “marketable” to attention-driven media—but that tradeoff protects the community’s integrity. Some media outlets only want a face and a persona. However, I accept this cost because I am trying to build a community that can thrive independently of an idol, where people relate to each other through behavior and shared norms, not proximity to the founder.

 DG: It sounds like a lot of what you’re doing is about people being authentic on the site, not using personas or using it to create a personal platform for themselves for marketing purposes.

 SHIN: Exactly, people can share links, but if a post is purely self-promotion with no contribution to the community, we don’t encourage this. I hope people here can respect the reciprocity.

 DG: I want to shift a bit and talk about freedom of expression as a principle for a while. Do you think freedom of expression should be regulated by governments?

 SHIN: Speech regulation is hard, because speech is freaking messy. And once you turn messy human speech into rules that scale, nuance gets flattened. Minority communities usually pay first, because large systems choose efficiency over lived reality.

 I also don’t think the answer is “erase all conflict.” Some friction is the price of pluralism, and with good guidance and interface design, conflict can become a point of learning instead of a point of collapse. From a platform owner’s perspective, legal liability is real and often cruel. So if we expect platforms to be free, frictionless, allow everything we like, erase everything we dislike, and still amplify our visibility—then we’re really asking for magic. That’s why we need to talk seriously about alternatives and procedural safeguards, not just louder demands.

 Age verification is a good example. I get that the goal is to protect minors. But identity-based age gates often turn into identity infrastructure. They chill lawful adult speech, concentrate gatekeeping power, and push everyone to hand over personal data just to access legal content. From my experience, there are other tools that can reduce harm with less damage—things like community design, visibility gating, and human, context-based moderation. Those approaches can protect people without building a personal-data checkpoint for everyone.

 DG: You talked about minority voices, and minority speech. Are you concerned that any regulation will end up trying to silence minority speakers, or won’t benefit minority speakers. How are these speakers more vulnerable to speech regulations than others?

 SHIN: Hmmm......a lot of minority speech is context-heavy. The same words can be support, education, or harassment depending on who says it and why. When regulation turns into broad categories, sexual health education, sharing of self-exploration experiences, trans healthcare discussions, or reclaimed language can be treated as “harmful” out of context, on both sides. So the risk isn’t only censorship, it’s misclassification at scale.

 DG: Are there certain types of speech that don’t deserve the conversation. Some people might say that hate speech or speech that’s dehumanizing doesn’t deserve the conversation. Are there any categories of speech that you would say we shouldn’t consider, or do we get to talk about everything?

 SHIN: Okay, I don't think the issue is about saying certain kinds of speech don't deserve to be discussed; the problem lies in the definition. As soon as we suggest that some speech doesn't merit discussion, some people will exploit this to silence their opponents. Whether it's right-wing, left-wing or anything else, if we say that we don't allow any kind of hate speech, the next thing someone will do is define your speech as hate speech. It's an endless war that draws us all into an eagerness to silence others and grab the mic, instead of creating more space for conversations and learning from each other.

 We should go further than just regulation and create spaces where people can coexist in a grey area, endure some discomfort and engage with each other. I prefer this approach to trying to draw lines.

 DG: So even well-intentioned restrictions might always be used against minority speakers?

 SHIN: I wouldn’t say restriction is not good. There always has to be some kind of restriction, but people will always find a way to overcome or take advantage of it. So, the thing I believe is that regulation is regulation, but community should be an open-source archive. How we govern community, how we dialogue between each other when we disagree with each other…how can we create a space where those things can exist? I believe that those things should be open source. People always talk about open source like it’s just coding, but I believe governance should be open source too.

 DG: So when you said before some restrictions are necessary but then we talk about open source governance, we’re talking about the same thing. When you say some restrictions are necessary, you’re not necessarily saying government restrictions, but that restrictions should come from somewhere else: that’s an open source governance model?

 SHIN: Yes. And it should include restrictions in law, and how people deal with it, the way we deal with it. I’m not saying every rule or detection signal should be public. By “open-source governance,” I mean shareable governance playbooks: proportional steps, appeals templates, community norms, and design patterns that small communities can adapt. The goal is portability and adaptability of methods, not making systems easy to game. Because malice is always part of the environment.

 DG: Is there anything else you want to say about your theory of open-source governance or what it means to you?

 SHIN: I noticed there was a question in another interview about fostering transparency in social media, and how to appeal, and that the reason [for a takedown] should be more transparent. The interesting thing is that before our interview today I was joining a law and technology policy research group, and they’re reading a book called “Law and Technology: A Methodical Approach”. It's worth mentioning that it's very interesting. Apparently, scientists tend to place emphasis on complexity, which often trips up pragmatic reform efforts, so the recommendations often only call for greater transparency or participation.

 I think this echoes what we were talking about before and the transparency thing. I heard this podcast in Taiwan about cybersecurity where they interview an outsourced ex-moderator from Meta and how the platform moderates speech. Because most of the information is confidential, the moderator can’t say too much, but she told us that every day Meta provided a whole set of lists with things they should ban, and every day it changes. Sometimes it even changes on an hourly basis. And they can never really put those fully transparent to the world. The reason they can’t do that is because those words are partially forbidding scams, because the scale is too big. So, when they show the transparency of how they ban things, the scammers will use this against them. Like, “now you’ve banned this word so I’ll just use another one.” It’s an endless war. So, I think transparency matters, but it shouldn’t be the only thing we think about, we should think about governance as well. And when we talk about governance, we shouldn’t just think about some high authority in government or a law just forcing the platform into something we like. We should go back and think about what we can do. We’ve got lots of open-source software now and we can literally build those things by ourselves. That’s what I’m trying to say.

 DG: Okay, one last question. This is the last question we ask everybody. Who’s your free speech hero?

 SHIN: This is the question I saw everyone answering, and I honestly struggled with it. Because I’m Taiwanese, and the names that often come up in U.S. free speech conversations aren’t the names I’m familiar with. I’m sorry about this.

 DG: That’s okay, it doesn’t have to be a perfect answer.

 SHIN: If you want a public figure from Taiwan, I think of the journalists and dissidents who pushed for press freedom during Taiwan’s democratization—Nylon (Tēnn Lâm-iông) is one name many Taiwanese recognize.

 If I answer this as truthfully as I can, my hero is my family. My father taught me that integrity is not a slogan. It’s the ability to keep your ethics when it costs you something. My mother is the opposite kind of teacher: she’s relentless in a practical way: she doesn’t easily back down, and she keeps finding room to move even when the room is small. Put together, that’s what free expression means to me. It’s not “I can say anything.” It's about whether you can continue to think independently and live with integrity through layers of fear, pressure, temptation and coercion, while still moving forward and creating more possibilities for others.

David Greene

EFF to Third Circuit: Electronic Device Searches at the Border Require a Warrant

1 week ago

EFF, along with the national ACLU and the ACLU affiliates in Pennsylvania, Delaware, and New Jersey, filed an amicus brief in the U.S. Court of Appeals for the Third Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.

The case, U.S. v. Roggio, involves a man who had been under ongoing criminal investigation for illegal exports when he returned to the United States from an international trip via JFK airport. Border officers used the opportunity to bypass the Fourth Amendment’s warrant requirement when they seized several of his electronic devices (laptop, tablet, cell phone, and flash drive) and conducted forensic searches of them. As the district court explained, “investigative agents had a case coordination meeting and border search authority was discussed in early January 2017,” before Mr. Roggio traveled internationally in February 2017.

The district court denied Mr. Roggio’s motion to suppress the emails and other data obtained from the warrantless searches of his devices. He was subsequently convicted of illegally exporting gun manufacturing parts to Iraq (he was also charged in a superseding indictment with torture, and convicted of that as well).

The number of warrantless device searches at the border and the significant invasion of privacy they represent is only increasing. In Fiscal Year 2025, U.S. Customs and Border Protection (CBP) conducted 55,318 device searches, both manual (“basic”) and forensic (“advanced”).

While a manual search involves a border officer tapping or mousing around a device, a forensic search involves connecting another device to the traveler’s device and using software to extract and analyze the data to create a detailed report of the device owner’s activities and communications. Border officers have access to forensic tools that can help them gain access to data on a locked or encrypted device in their physical possession. From public reporting, we know that more recent devices (and ones that have had the latest security updates applied) are more resistant to these types of tools, especially if they are turned off, or turned on but not yet unlocked.

The U.S. Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.

The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country. But a traveler’s privacy interests in their suitcase and its contents are minimal compared to those in all the personal data on the person’s phone or laptop.

In our amicus brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

Travelers’ privacy interests in their cell phones, laptops and other electronic devices are, of course, the same as those considered in Riley. Modern devices, over a decade later, contain even more data that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.

First, physical contraband (like drugs) can’t be found in digital data.

Second, digital contraband (such as child sexual abuse material) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.

Finally, searching devices for evidence of contraband smuggling (for example, the emails here revealing details of the illegal export scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution. Therefore, emails or other data found on a digital device searched without a warrant at the border cannot and should not be used as evidence in court.

If the Third Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband.

This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband—that is, some set of already known facts pointing to this possibility—while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).

We hope that the Third Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

Sophia Cope

The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People

1 week ago

The U.S. military has officially ended its $200 million contract with AI company Anthropic and has ordered all other military contractors to cease use of their products. Why? Because of a dispute over what the government could and could not use Anthropic’s technology to do. Anthropic had made it clear since it first signed the contract with the Pentagon in 2025 that it did not want its technology to be used for mass surveillance of people in the United States or for fully autonomous weapons systems. Starting in January, that became a problem for the Department of Defense, which ordered Anthropic to give them unrestricted use of the technology. Anthropic refused, and the DoD retaliated.

There is a lot we could learn from this conflict, but the biggest takeaway is this: the state of your privacy is being decided by contract negotiations between giant tech companies and the U.S. government—two entities with spotty track records for caring about your civil liberties. It’s good when CEOs step up and do the right thing—but it’s not a sustainable or reliable solution to build our rights on. Given the government’s loose interpretations of the law, ability to find loopholes to surveil you, and willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.

Imposing and enforcing those restrictions is properly a role for Congress and the courts, not the private sector.

The companies know this. Speaking about the specific risk that AI poses to privacy, Anthropic CEO Dario Amodei said in an interview, “I actually do believe it is Congress’s job. If, for example, there are possibilities with domestic mass surveillance—the government buying of bulk data that has been produced on Americans—locations, personal information, political affiliations—to build profiles, and it’s now possible to analyze all of that with AI—the fact that that is legal—that seems like the judicial interpretation of the Fourth Amendment has not caught up, or the laws passed by Congress have not caught up.”

The example he cites here is a scarily realistic one—because it’s already happening. Customs and Border Protection has tapped into the online advertising world to buy data on Americans for surveillance purposes. Immigration and Customs Enforcement has been using a tool that maps millions of peoples’ devices based on purchased cell phone data. The Office of the Director of National Intelligence has proposed a centralized data broker marketplace to make it easier for intelligence agencies to buy commercially available data. Considering the government’s massive contracts with companies that could do this analysis, including Palantir, which performs AI-enabled analysis of huge amounts of data, the concerns are incredibly well founded.

But Congress is sadly neglecting its duties. For example, a bill that would close the loophole of the government buying personal information passed the House of Representatives in 2024, but the Senate stopped it. And because Congress did not act, Americans must rely on a tech company CEO to try to protect our privacy—or at least to refuse to help the government violate it.

Privacy in the digital age should be an easy bipartisan issue. Given that it’s wildly popular (71% of American adults are concerned about the government’s use of their data, and among adults who have heard of AI, 70% have little to no trust in how companies use those products), you would think politicians would be leaping over each other to create the best legislation and companies would be promising us the most high-end privacy-protecting features. Instead, for the time being, we are largely left adrift in a sea of constant surveillance, having to paddle our own life rafts.

EFF has always fought, and always will fight, for real and sustainable protections for our civil liberties, including a world where our privacy does not rest on the whims of CEOs and backroom deals with the surveillance state.

Matthew Guariglia

EFF to Supreme Court: Shut Down Unconstitutional Geofence Searches

1 week ago
Digital Dragnets Violate Fourth Amendment, Brief Argues

WASHINGTON, D.C. – The Electronic Frontier Foundation (EFF), the American Civil Liberties Union (ACLU), the ACLU of Virginia, and the Center on Privacy & Technology at Georgetown Law filed a brief Monday urging the U.S. Supreme Court to rule that invasive geofence warrants are unconstitutional.

The brief argues that geofence warrants—which compel companies to provide information on every electronic device in a given area during a given time period—are the digital version of the exploratory rummaging that the drafters of the Fourth Amendment specifically intended to prevent. 

Unlike typical warrants, geofence warrants do not name a suspect or even target a specific individual or device. Instead, police cast a digital dragnet, demanding location data on every device in a geographic area during a certain time period, regardless of whether the device owner has any connection to the crime under investigation. These searches simultaneously impact the privacy of millions and turn innocent bystanders into suspects, just for being in the wrong place at the wrong time. 

The Supreme Court agreed earlier this year to hear Chatrie v. United States, in which a 2019 geofence warrant compelled Google to search the accounts of all of its hundreds of millions of users to see if any of them were within a radius police drew around a Northern Virginia crime scene. The area amounted to several football fields in size and encompassed numerous homes, businesses, and a church. In the amicus brief filed Monday, the organizations argue that allowing this sweeping power to go unchecked is inconsistent with the basic freedoms of a democratic society.

"This is not traditional police work, but rather the leveraging of new and powerful technology to claim a novel and formidable power over the people," the brief states. "By their very nature, geofence searches turn innocent bystanders into suspects and leverage even purportedly limited searches into larger dragnets, causing intrusions at a scale far beyond those held unconstitutional in the physical world." 

The brief also cautioned the Court not to authorize future geofence warrants based on the facts of the Chatrie case, which reflect how such searches were conducted in 2019. Since July 2025, mass geofence searches of Google users’ location data have not been possible. However, Google is not the only company collecting location data, nor the only way for police to access mass amounts of data on people with no connection to a crime. All suspicionless searches drag a net through vast swaths of information in hopes of identifying previously unknown suspects—ensnaring innocent bystanders along the way. 

"To courts, to lawmakers, and to tech companies themselves, EFF has repeatedly argued that these high-tech efforts to pull suspects out of thin air cannot be constitutional, even with a warrant," said EFF Surveillance Litigation Director Andrew Crocker. "The Supreme Court should find once and for all that geofence searches are just the kind of impermissible general warrants that the Framers of the Constitution so reviled."

For the brief: https://www.eff.org/document/chatrie-v-united-states-eff-supreme-court-amicus-brief

Tags: geofence warrants
Contact: Andrew Crocker, Surveillance Litigation Director, andrew@eff.org
Hudson Hongo

EFF to Court: Don’t Make Embedding Illegal

1 week 1 day ago

Who should be directly liable for online infringement – the entity that serves it up or a user who embeds a link to it? For almost two decades, most U.S. courts have held that the former is responsible, applying a rule called the server test. Under the server test, whoever controls the server that hosts a copyrighted work—and therefore determines who has access to what and how—can be directly liable if that content turns out to be infringing. Anyone else who merely links to it can be secondarily liable in some circumstances (for example, if that third party promotes the infringement), but isn’t on the hook under most circumstances.

The test just makes sense. In the analog world, a person is free to tell others where they may view a third party’s display of a copyrighted work, without being directly liable for infringement if that display turns out to be unlawful. The server test is the straightforward application of the same principle in the online context. A user that links to a picture, video, or article isn’t in charge of transmitting that content to the world, nor are they in a good position to know whether that content violates copyright. In fact, the user doesn’t even control what’s located on the other end of the link—the person that controls the server can change what’s on it at any time, such as swapping in different images, re-editing a video or rewriting an article.

But a news publisher, Emmerich Newspapers, wants the Fifth Circuit to reject the server test, arguing that the entity that embeds links to the content is responsible for “displaying” it and, therefore, can be directly liable if the content turns out to be infringing. If they are right, the common act of embedding is a legally fraught activity and a trap for the unwary.

The Court should decline, or risk destabilizing fundamental, and useful, online activities. As we explain in an amicus brief filed with several public interest and trade organizations, linking and embedding are not unusual, nefarious, or misleading practices. Rather, the ability to embed external content and code is a crucial design feature of internet architecture, responsible for many of the internet’s most useful functions. Millions of websites—including EFF’s—embed external content or code for everything from selecting fonts and streaming music to providing services like customer support and legal compliance. The server test provides legal certainty for internet users by assigning primary responsibility to the person with the best ability to prevent infringement. Emmerich’s approach, by contrast, invites legal chaos.

Emmerich also claims that altering a URL violates the Digital Millennium Copyright Act’s prohibition on changing or deleting copyright management information. If they are correct, using a link shortener could put users at risk of statutory penalties—an outcome Congress surely did not intend.

Both of these theories would make common internet activities legally risky and undermine copyright’s Constitutional purpose: to promote the creation of and access to knowledge. The district court recognized as much and we hope the appeals court agrees.

Related Cases: Emmerich Newspapers v. Particle Media
Corynne McSherry

National Book Tour for Cindy Cohn’s Memoir, ‘Privacy’s Defender’

1 week 1 day ago
MIT Press Publishes EFF Executive Director’s Book As She Prepares to Depart Organization After 25 Years

SAN FRANCISCO – Electronic Frontier Foundation Executive Director Cindy Cohn will launch her memoir, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press, March 10), with events in San Francisco and Berkeley before embarking on a national book tour.

In Privacy’s Defender, Cohn weaves her own personal story with her role as a leading legal voice representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders.  

The book will be Cohn’s swan song at EFF, as she’s stepping down as executive director later this year after 25 years with the organization. And there’s no timelier topic: Everyone should be concerned about privacy right now, as the federal government consolidates and weaponizes data, companies track our every click, and law enforcement from local police to ICE keep tabs on all of us, everywhere we go, every day.

The Privacy’s Defender tour will begin with a free event at San Francisco’s famed City Lights Bookstore (261 Columbus Ave., San Francisco, CA 94133) moderated by bestselling author and EFF Special Advisor Cory Doctorow, at 7 p.m. PT on Tuesday, March 10.

Then EFF will host a launch party at Berkeley’s Ciel Creative Space (940 Parker St., Berkeley, CA 94710) moderated by bestselling author Annalee Newitz at 7 p.m. PT on Thursday, March 12; tickets cost $12.50-$20. 

The book tour will also include events in Portland, OR; Seattle; Denver; Cambridge, MA; Ann Arbor, MI; and Iowa City, IA. Later events are being planned in New York City and Washington, D.C., as well as a May 13 event at Commonwealth Club World Affairs in San Francisco. 

Proceeds from sales of the book benefit EFF. 

“These beautifully written stories show why the fight for privacy is worth having and reveal all that Cindy Cohn and EFF have done to establish the modern privacy doctrine as the essential core of a free society.” -- Lawrence Lessig, Harvard University; author of How to Steal a Presidential Election 

“Cindy Cohn gives readers a first-person window into some of the pivotal legal disputes of the digital era and reminds us that action and activism are crucial to preserving Americans’ freedom.” -- U.S. Sen. Ron Wyden, D-OR, author of It Takes Chutzpah: How to Fight Fearlessly for Progressive Change 

“Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions.” -- Edward Snowden, whistleblower; author of Permanent Record

For the San Francisco event: https://citylights.com/events/cindy-cohn-launch-party-for-privacys-defender/ 

For the Berkeley event: https://www.eff.org/event/privacys-defender-book-launch-party  

For more on Privacy’s Defender and the book tour: https://www.eff.org/Privacys-Defender 

Contact: Karen Gullo, Senior Writer for Free Speech and Privacy, karen@eff.org
Josh Richman

Victory! Tenth Circuit Finds Fourth Amendment Doesn’t Support Broad Search of Protesters’ Devices and Digital Data

1 week 5 days ago

In a big win for protesters’ rights, the U.S. Court of Appeals for the Tenth Circuit overturned a lower court’s dismissal of a challenge to sweeping warrants to search a protester’s devices and digital data and a nonprofit’s social media data.

The case, Armendariz v. City of Colorado Springs, arose after a housing protest in 2021, during which Colorado Springs police arrested protesters for obstructing a roadway. After the demonstration, police also obtained warrants to seize and search through the devices and data of Jacqueline Armendariz Unzueta, who they claimed threw a bike at them during the protest. The warrants included a search through all of her photos, videos, emails, text messages, and location data over a two-month period, as well as a time-unlimited search for 26 keywords, including words as broad as “bike,” “assault,” “celebration,” and “right,” that allowed police to comb through years of Armendariz’s private and sensitive data—all supposedly to look for evidence related to the alleged simple assault. Police further obtained a warrant to search the Facebook page of the Chinook Center, the organization that spearheaded the protest, despite the Chinook Center never having been accused of a crime.

The district court dismissed the civil rights lawsuit brought by Armendariz and the Chinook Center, holding that the searches were justified and that, in any case, the officers were entitled to qualified immunity. The plaintiffs, represented by the ACLU of Colorado, appealed. EFF—joined by the Center for Democracy and Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—wrote an amicus brief in support of that appeal.

In a 2-1 opinion, the Tenth Circuit reversed the district court’s dismissal of the lawsuit’s Fourth Amendment search and seizure claims. The court painstakingly picked apart each of the three warrants and found them to be overbroad and lacking in particularity as to the scope and duration of the searches. The court further held that in furnishing such facially deficient warrants, the officers violated “clearly established” law and thus were not entitled to qualified immunity. Although the court did not explicitly address the First Amendment concerns raised by the lawsuit, it did note the backdrop against which these searches were carried out, including animus by Colorado Springs police leading up to the housing protest.

It is rare for appellate courts to call into question any search warrants. It’s even rarer for them to deny qualified immunity defenses. The Tenth Circuit’s decision should be celebrated as a big win for protesters and anyone concerned about police immunity for violating people’s constitutional rights. The case is now remanded back to the district court to proceed—and hopefully further vindicate the privacy rights we all have in our devices and digital data.

Saira Hussain

☺️ Trust Us With Your Face | EFFector 38.4

1 week 6 days ago

Do you remember the last time you were carded at a bar or restaurant? It was probably such a quick and normal experience that you barely remember it. But have you ever been carded to use the internet? Being required to present your ID to access content online is becoming a growing reality for many. We're explaining the dangers of age verification laws, and the latest in the fight for privacy and free speech online, with our EFFector newsletter.

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This issue covers Discord's controversial rollout of mandatory age verification, a leaked Meta memo on face-scanning smart glasses, and a Super Bowl surveillance ad that said the quiet part out loud.

Prefer to listen in? In our audio companion, EFF Associate Director of State Affairs Rin Alajaji explains how online age verification hurts free expression for all users. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.4 - ☺️ Trust Us With Your Face

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against mandatory age verification laws when you support EFF today!

Christian Romero

How to Pick Your Password Manager

1 week 6 days ago

Phishing and data breaches are a constant on the internet. The single best defense against both is to use a password manager to generate and automatically fill a unique password for every site. While 1Password has recently raised their prices, and researchers have recently published potential flaws in some implementations, using a password manager is still a critical investment in keeping yourself safe on the internet. There are free options, and even ones built into your operating system or browser. We can help you choose.

Password managers protect you from phishing by memorizing the connection between a password and a website, and, if you use the browser integration, filling each password only on the website it belongs to. They protect you from data breaches by making it feasible to use a long, random, unique password on each site. When bad actors get their hands on a data breach that includes email addresses and password data, they will typically try to crack those passwords, and then attempt to log in on dozens of different websites with the email address/password combinations from the breach. If you use the same password everywhere, this can turn one site’s data breach into a personal disaster, as many of your accounts get compromised at once.
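Those two protections can be made concrete with a minimal, illustrative sketch in Python. This is not how any particular password manager is implemented; it simply demonstrates the two properties described above: generating a long random password from a large alphabet, and refusing to autofill unless the current page's origin exactly matches the one the password was saved for.

```python
import secrets
import string
from urllib.parse import urlsplit

def generate_password(length: int = 20) -> str:
    """Generate a long, random password drawn from a large alphabet.

    With letters, digits, and punctuation (94 characters), a 20-character
    password is infeasible to crack even if a site's breached database leaks
    password hashes.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def should_autofill(saved_origin: str, current_url: str) -> bool:
    """Fill a password only on the exact origin it was saved for.

    This is the anti-phishing property: a look-alike domain such as
    examp1e.com never receives the password saved for example.com.
    """
    saved = urlsplit(saved_origin)
    current = urlsplit(current_url)
    return (saved.scheme, saved.hostname) == (current.scheme, current.hostname)
```

Because each site gets its own output of `generate_password()`, a breach at one site exposes nothing reusable elsewhere; and because `should_autofill()` compares origins rather than trusting what a page looks like, a convincing phishing clone simply never gets the credential.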

In recent years, the built-in password managers in browsers and operating systems have come a long way but still stumble on cross-platform support. Within the Apple ecosystem, you can use iCloud Keychain, with support for generating passwords, autofill in Safari, and end-to-end encrypted synchronization, so long as you don’t need access to your passwords in Google Chrome or Android (Windows is supported, though). Within the Google ecosystem, you can use Google Password Manager, which also supports password generation, autofill, and sync. Crucially, though, Google Password Manager does not end-to-end encrypt credentials unless you manually enable on-device encryption. Firefox and Microsoft also offer password managers. All of these platform-based options are free, and may already be on your devices. But they tend to lock you into a single-vendor world.

There are also a variety of third-party password managers, some paid, and some free, and some open source. Most of these have the advantage of letting you sync your passwords across a wide variety of devices, operating systems, and browsers. Here are four key things to look out for. First, when synchronizing between devices, your passwords should be encrypted end-to-end using a password that only you know (a “master” or “primary” password). Second, support for autofill can reduce the chance that you’ll get phished. Third, security audits performed by third parties can increase confidence that the software really does what it is designed to do. And finally, of course, random generation of unique passwords is a must.

Don’t let uncertainty or price increases dissuade you from using a password manager. There’s a good choice for everyone, and using one can make your online life a lot safer. Want more help choosing? Check out our Surveillance Self-Defense guide.

Jacob Hoffman-Andrews

Tech Companies Shouldn’t Be Bullied Into Doing Surveillance

2 weeks ago

The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully them into making their technology available to the U.S. military without any restrictions on its use. Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a “supply chain risk” in retribution for not lifting restrictions on how their technology is used. According to WIRED, that label would be “a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.”

In 2025, Anthropic reportedly became the first AI company cleared for use in classified operations and to handle classified information. The current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. That same month, Anthropic CEO Dario Amodei wrote to reiterate that surveillance of US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as the constitution of their LLM, Claude, here.

Now, the U.S. government is threatening to terminate its contract with the company if it doesn’t switch gears and voluntarily jump right across those lines.

Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons. 

Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.

Matthew Guariglia