Fourth Amendment Victory: Michigan Supreme Court Reins in Digital Device Fishing Expeditions

4 days 8 hours ago

EFF legal intern Noam Shemtov was the principal author of this post.

When police have a warrant to search a phone, should they be able to see everything on the phone—from family photos to communications with your doctor to everywhere you’ve been since you first started using the phone—in other words, data that is in no way connected to the crime they’re investigating? The Michigan Supreme Court just ruled no. 

In People v. Carson, the court held that to satisfy the Fourth Amendment, warrants authorizing searches of cell phones and other digital devices must contain express limitations on the data police can review, restricting searches to data that they can establish is clearly connected to the crime.

The realities of modern cell phones call for a strict application of rules governing the scope of warrants.

EFF, along with ACLU National and the ACLU of Michigan, filed an amicus brief in Carson, expressly calling on the court to limit the scope of cell phone search warrants. We explained that the realities of modern cell phones call for a strict application of rules governing the scope of warrants. Without clear limits, warrants would become de facto licenses to look at everything on the device, a great universe of information that amounts to “the sum of an individual’s private life.” 

The Carson case shows just how broad many cell phone search warrants can be. Defendant Michael Carson was suspected of stealing money from a neighbor’s safe. The warrant to search his phone allowed the police to access:

Any and all data including, text messages, text/picture messages, pictures and videos, address book, any data on the SIM card if applicable, and all records or documents which were created, modified, or stored in electronic or magnetic form and, any data, image, or information.

There were no temporal or subject matter limitations. Consequently, investigators obtained over 1,000 pages of information from Mr. Carson’s phone, the vast majority of which did not have anything to do with the crime under investigation.

The Michigan Supreme Court held that this extremely broad search warrant was “constitutionally intolerable” and violated the particularity requirement of the Fourth Amendment. 

The Fourth Amendment requires that warrants “particularly describ[e] the place to be searched, and the persons or things to be seized.” This is intended to limit authorization to search to the specific areas and things for which there is probable cause to search and to prevent police from conducting “wide-ranging exploratory searches.” 

Cell phones hold vast and varied information, including our most intimate data.

Across two opinions, a four-Justice majority joined a growing national consensus of courts recognizing that, given the immense and ever-growing storage capacity of cell phones, warrants must spell out up-front limitations on the information the government may review, including the dates and data categories that constrain investigators’ authority to search. And magistrates reviewing warrants must ensure the information provided by police in the warrant affidavit properly supports a tailored search.

This ruling is good news for digital privacy. Cell phones hold vast and varied information, including our most intimate data—“privacies of life” like our personal messages, location histories, and medical and financial information. The U.S. Supreme Court has recognized as much, saying that application of Fourth Amendment principles to searches of cell phones must respond to cell phones’ unique characteristics, including the weighty privacy interests in our digital data. 

We applaud the Michigan Supreme Court’s recognition that unfettered cell phone searches pose serious risks to privacy. We hope that courts around the country will follow its lead in concluding that the particularity rule applies with special force to such searches and requires clear limitations on the data the government may access.

Jennifer Pinsof

Victory! Pen-Link's Police Tools Are Not Secret

1 week ago

In a victory for transparency, the government contractor Pen-Link agreed to disclose the prices and descriptions of surveillance products that it sold to a local California Sheriff's office.

The settlement ends a months-long California public records lawsuit with the Electronic Frontier Foundation and the San Joaquin County Sheriff’s Office. The settlement provides further proof that the surveillance tools used by governments are not secret and shouldn’t be treated that way under the law.

Last year, EFF submitted a California public records request to the San Joaquin County Sheriff’s Office for information about its work with Pen-Link and its subsidiary, Cobwebs Technologies. Pen-Link went to court to try to block the disclosure, claiming the names of its products and prices were trade secrets. EFF later entered the case to obtain the records it requested.

The Records Show the Sheriff Bought Online Monitoring Tools

The records disclosed in the settlement show that in late 2023, the Sheriff’s Office paid $180,000 for a two-year subscription to the Tangles “Web Intelligence Platform,” which is a Cobwebs Technologies product that allows the Sheriff to monitor online activity. The subscription allows the Sheriff to perform hundreds of searches and requests per month. The source of information includes the “Dark Web” and “Webloc,” according to the price quotation. According to the settlement, the Sheriff’s Office was offered but did not purchase a series of other add-ons including “AI Image processing” and “Webloc Geo source data per user/Seat.”

Have you been blocked from receiving similar information? We’d like to hear from you.

The intelligence platform overall has been described in other documents as analyzing data from the “open, deep, and dark web, to mobile and social.” And Webloc has been described as a platform that “provides access to vast amounts of location-based data in any specified geographic location.” Journalists at multiple news outlets have chronicled Pen-Link's technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. Major local, state, and federal agencies use Pen-Link's technology.

The records also show that in late 2022 the Sheriff’s Office purchased some of Pen-Link’s more traditional products that help law enforcement execute and analyze data from wiretaps and pen-registers after a court grants approval. 

Government Surveillance Tools Are Not Trade Secrets

The public has a right to know what surveillance tools the government is using, no matter whether the government develops its own products or purchases them from private contractors. There are a host of policy, legal, and factual reasons that the surveillance tools sold by contractors like Pen-Link are not trade secrets.

Public information about these products and prices helps communities have informed conversations and make decisions about how their government should operate. In this case, Pen-Link argued that its products and prices are trade secrets partially because governments rely on the company to “keep their data analysis capabilities private.” The company argued that clients would “lose trust” and governments may avoid “purchasing certain services” if the purchases were made public. This troubling claim highlights the importance of transparency. The public should be skeptical of any government tool that relies on secrecy to operate.

Information about these tools is also essential for defendants and criminal defense attorneys, who have the right to discover when these tools are used during an investigation. In support of its trade secret claim, Pen-Link cited terms of service that purported to restrict the government from disclosing its use of this technology without the company’s consent. Terms like this cannot be used to circumvent the public’s right to know, and governments should not agree to them.

Finally, in order for surveillance tools and their prices to be protected as a trade secret under the law, they have to actually be secret. However, Pen-Link’s tools and their prices are already public across the internet—in previous public records disclosures, product descriptions, trademark applications, and government websites.

Lessons Learned

Government surveillance contractors should consider the policy implications, reputational risks, and waste of time and resources when attempting to hide from the public the full terms of their sales to law enforcement.

Cases like these, known as reverse-public records act lawsuits, are troubling because a well-resourced company can frustrate public access by merely filing the case. Not every member of the public, researcher, or journalist can afford to litigate their public records request. Without a team of internal staff attorneys, it would have cost EFF tens of thousands of dollars to fight this lawsuit.

Luckily, in this case, EFF had the ability to fight back. And we will continue our surveillance transparency work, which is why EFF required some attorneys’ fees to be part of the final settlement.

Related Cases: Pen-Link v. County of San Joaquin Sheriff’s Office
Mario Trujillo

Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas

1 week 1 day ago

The Ninth Circuit upheld an important limitation on Digital Millennium Copyright Act (DMCA) subpoenas that other federal courts have recognized for more than two decades. The DMCA, a misguided anti-piracy law passed in the late nineties, created a bevy of powerful tools, ostensibly to help copyright holders fight online infringement. Unfortunately, the DMCA’s powerful protections are ripe for abuse by “copyright trolls,” unscrupulous litigants who exploit the system at everyone else’s expense.

The DMCA’s “notice and takedown” regime is one of these tools. Section 512 of the DMCA creates “safe harbors” that protect service providers from liability, so long as they disable access to content when a copyright holder notifies them that the content is infringing, and fulfill some other requirements. This gives copyright holders a quick and easy way to censor allegedly infringing content without going to court. 

Unfortunately, the DMCA’s powerful protections are ripe for abuse by “copyright trolls”

Section 512(h) is ostensibly designed to facilitate this system, by giving rightsholders a fast and easy way of identifying anonymous infringers. Section 512(h) allows copyright holders to obtain a judicial subpoena to unmask the identities of allegedly infringing anonymous internet users, just by asking a court clerk to issue one, and attaching a copy of the infringement notice. In other words, they can wield the court’s power to override an internet user’s right to anonymous speech, without permission from a judge.  It’s easy to see why these subpoenas are prone to misuse.

Internet service providers (ISPs)—the companies that provide an internet connection (e.g. broadband or fiber) to customers—are obvious targets for these subpoenas. Often, copyright holders know the Internet Protocol (IP) address of an alleged infringer, but not their name or contact information. Since ISPs assign IP addresses to customers, they can often identify the customer associated with one.

Fortunately, Section 512(h) has an important limitation that protects users.  Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs. Now, in In re Internet Subscribers of Cox Communications, LLC, the Ninth Circuit agreed, as EFF urged it to in our amicus brief.

As the Ninth Circuit held:

Because a § 512(a) service provider cannot remove or disable access to infringing content, it cannot receive a valid (c)(3)(A) notification, which is a prerequisite for a § 512(h) subpoena. We therefore conclude from the text of the DMCA that a § 512(h) subpoena cannot issue to a § 512(a) service provider as a matter of law.

This decision preserves the understanding of Section 512(h) that internet users, websites, and copyright holders have shared for decades. As EFF explained to the court in its amicus brief:

[This] ensures important procedural safeguards for internet users against a group of copyright holders who seek to monetize frequent litigation (or threats of litigation) by coercing settlements—copyright trolls. Affirming the district court and upholding the interpretation of the D.C. and Eighth Circuits will preserve this protection, while still allowing rightsholders the ability to find and sue infringers.

EFF applauds this decision. And because three federal appeals courts have all ruled the same way on this question—and none have disagreed—ISPs all over the country can feel confident about protecting their customers’ privacy by simply throwing improper DMCA 512(h) subpoenas in the trash.

Tori Noble

From Book Bans to Internet Bans: Wyoming Lets Parents Control the Whole State’s Access to The Internet

1 week 1 day ago

If you've read about the sudden appearance of age verification across the internet in the UK and thought it would never happen in the U.S., take note: many politicians want the same or even stricter laws. As of July 1st, South Dakota and Wyoming enacted laws requiring any website that hosts any sexual content to implement age verification measures. These laws could capture a broad range of non-pornographic content, including classic literature and art, and expose a wide range of platforms, of all sizes, to civil or criminal liability for not using age verification on every user. That includes social media networks like X, Reddit, and Discord; online retailers like Amazon and Barnes & Noble; and streaming platforms like Netflix and Rumble—essentially, any site that allows user-generated or published content without gatekeeping access based on age.

These laws expand on the flawed logic of last month’s troubling Supreme Court decision, Free Speech Coalition v. Paxton, which gave Texas the green light to require age verification for sites where at least one-third (33.3%) of the content is sexual material deemed “harmful to minors.” Wyoming and South Dakota seem to interpret this decision as giving them license to require age verification—and impose potential legal liability—for any website that contains ANY image, video, or post with sexual content that could be interpreted as harmful to minors. Platforms or websites may be able to comply by implementing an “age gate” within certain sections of their sites where, for example, user-generated content is allowed, or at the point of entry to the entire site.

Although these laws are in effect, we do not believe the Supreme Court’s decision in FSC v. Paxton gives these laws any constitutional legitimacy. You do not need a law degree to see the difference between the Texas law—which targets sites where a substantial portion (one third) of content is “sexual material harmful to minors”—and these laws, which apply to any site that contains even a single instance of such material. In practice, it is the difference between burdening adults with age gates for websites that host “adult” content, and burdening the entire internet, including sites that allow user-generated content or published content.

The law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands

But lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” and use other methods to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature. Books like The Bluest Eye by Toni Morrison, The Handmaid’s Tale by Margaret Atwood, and And Tango Makes Three have all been swept up in these crusades—not because of their overall content, but because of isolated scenes or references.

Wyoming’s law is also particularly extreme: rather than providing for enforcement by the Attorney General, HB0043 is a “bounty” law that deputizes any resident with a child to file civil lawsuits against websites they believe are in violation, effectively turning anyone into a potential content cop. There is no central agency, no regulatory oversight, and no clear standard. Instead, the law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands by suing websites that contain a single example of objectionable content. Though most other state age-verification laws allow individuals to make reports to state Attorneys General, who are responsible for enforcement, and some include a private right of action allowing parents or guardians to file civil claims for damages, the Wyoming law is similar to laws in Louisiana and Utah that rely entirely on civil enforcement. 

This is a textbook example of a “heckler’s veto,” where a single person can unilaterally decide what content the public is allowed to access. However, it is clear that the Wyoming legislature explicitly designed the law this way in a deliberate effort to sidestep state enforcement and avoid an early constitutional court challenge, as many other bounty laws targeting people who assist in abortions, drag performers, and trans people have done. The result? An open invitation from the Wyoming legislature to weaponize its citizens, and the courts, against platforms, big or small. Because when nearly anyone can sue any website over any content they deem unsafe for minors, the result isn’t safety. It’s censorship.

That also means your personal website or blog—if it includes any “sexual content harmful to minors”—is also at risk. 

Imagine a Wyomingite stumbling across an NSFW subreddit or a Tumblr fanfic blog and deciding it violates the law. If that person is the parent of a minor, they could sue the platform, potentially forcing those websites to restrict or geo-block access to the entire state in order to avoid the cost and risk of litigation. And because there’s no threshold for how much “harmful” content a site must host, a single image or passage could be enough. That also means your personal website or blog—if it includes any “sexual content harmful to minors”—is also at risk. 

This law will likely be challenged, and eventually halted, by the courts. But given that the state cannot enforce it, those challenges will not come until a parent sues a website. Until then, its mere existence poses a serious threat to free speech online. Risk-averse platforms may over-correct, over-censor, or even restrict access to the state entirely just to avoid the possibility of a lawsuit, as Pornhub has already done. And should sites impose age-verification schemes to comply, they will be a speech and privacy disaster for all state residents.

And let’s be clear: these state laws are not outliers. They are part of a growing political movement to redefine terms like “obscene,” “pornographic,” and “sexually explicit”  as catchalls to restrict content for both adults and young people alike. What starts in one state and one lawsuit can quickly become a national blueprint. 

If we don’t push back now, the internet as we know it could disappear behind a wall of fear and censorship.

Age-verification laws like these have relied on vague language, intimidating enforcement mechanisms, and public complacency to take root. Courts may eventually strike them down, but in the meantime, users, platforms, creators, and digital rights advocacy groups need to stay alert, speak up against these laws, and push back while they can. When governments expand censorship and surveillance offline, it's our job at EFF to protect your access to a free and open internet. Because if we don’t push back now, the internet as we know it—messy, diverse, and open—could disappear behind a wall of fear and censorship.

Ready to join us? Urge your state lawmakers to reject harmful age-verification laws. Call or email your representatives to oppose KOSA and any other proposed federal age-checking mandates. Make your voice heard by talking to your friends and family about what we all stand to lose if the age-gated internet becomes a global reality. Because the fight for a free internet starts with us.

Rindala Alajaji

New Documents Show First Trump DOJ Worked With Congress to Amend Section 230

1 week 4 days ago

After rolling out its own proposal in the summer of 2020 to significantly limit a key law protecting internet users’ speech, the Department of Justice under the first Trump administration actively worked with lawmakers to support further efforts to stifle online speech.

The new documents, disclosed in an EFF Freedom of Information Act (FOIA) lawsuit, show officials were talking with Senate staffers working to pass speech- and privacy-chilling bills like the EARN IT Act and PACT Act (neither became law). DOJ officials also communicated with an organization that sought to condition Section 230’s legal protections on websites using age-verification systems if they hosted sexual content.

Section 230 protects users’ online speech by protecting the online intermediaries we all rely on to communicate on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say.

DOJ’s work to weaken Section 230 began before President Donald Trump issued an executive order targeting social media services in 2020, and officials in DOJ appeared to be blindsided by the order. EFF was counsel to plaintiffs who challenged the order, and President Joe Biden later rescinded it. EFF filed two FOIA suits seeking records about the executive order and the DOJ’s work to weaken Section 230.

The DOJ’s latest release provides more detail on a general theme that has been apparent for years: that the DOJ in 2020 flexed its powers to try to undermine or rewrite Section 230. The documents show that in addition to meeting with congressional staffers, DOJ was critical of a proposed amendment to the EARN IT Act, with one official stating that it “completely undermines” the sponsors’ argument for rejecting DOJ’s proposal to exempt so-called “Bad Samaritan” websites from Section 230.

Further, DOJ reviewed and proposed edits to a rulemaking petition to the Federal Communications Commission that tried to reinterpret Section 230. That effort never moved forward, given that the FCC lacked any legal authority to reinterpret the law.

You can read the latest release of documents here, and all the documents released in this case are here.

Related Cases: EFF v. OMB (Trump 230 Executive Order FOIA)
Aaron Mackey

President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare

1 week 5 days ago

The White House’s recently unveiled “AI Action Plan” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content and even hate speech. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called “Preventing Woke AI in the Federal Government,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.

The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed censorship will not make models more accurate or “trustworthy,” as the Trump Administration claims, but is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn't otherwise, and those often roll down to the user. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government. 

Less Accuracy, More Bias and Discrimination

It’s no secret that AI models—including gen AI—tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in data that they are “trained” on. If the training data reflects biases against racial, ethnic, and gender minorities—which it often does—then the AI model will “learn” to discriminate against those groups. In other words, garbage in, garbage out. Models also often reflect the biases of the people who train, test, and evaluate them. 
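To make the “garbage in, garbage out” point concrete, here is a minimal sketch of our own (not drawn from any actual vendor's system) showing how a toy risk model trained on arrest records reproduces the pattern of policing rather than of underlying behavior. The neighborhoods, rates, and patrol factor below are invented purely for illustration.

```python
# Hypothetical illustration of "garbage in, garbage out": two neighborhoods
# offend at the same rate, but one is patrolled twice as heavily, so its
# offenses are twice as likely to become arrest records. A model trained on
# those records learns the patrol pattern, not the behavior.
import random

random.seed(0)

OFFENSE_RATE = 0.10                      # identical in both neighborhoods
PATROL_FACTOR = {"A": 2.0, "B": 1.0}     # "A" is policed twice as heavily

def make_training_data(n=20_000):
    rows = []
    for _ in range(n):
        hood = random.choice(["A", "B"])
        offended = random.random() < OFFENSE_RATE
        # An offense only becomes a training example if police are there to record it.
        arrested = offended and random.random() < 0.4 * PATROL_FACTOR[hood]
        rows.append((hood, arrested))
    return rows

def train(rows):
    # The "model" here is just the arrest frequency per neighborhood --
    # the only pattern the training data actually contains.
    counts = {"A": [0, 0], "B": [0, 0]}  # [arrests, total]
    for hood, arrested in rows:
        counts[hood][0] += int(arrested)
        counts[hood][1] += 1
    return {hood: arrests / total for hood, (arrests, total) in counts.items()}

model = train(make_training_data())
print(model)  # roughly {'A': 0.08, 'B': 0.04}: "A" is scored as twice as risky,
              # even though the underlying offense rate is identical.
```

A real system is far more complicated, but the dynamic is the same: the model faithfully reproduces whatever skew its training data carries.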

This is true across different types of AI. For example, “predictive policing” tools trained on arrest data that reflects overpolicing of black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated. LLMs already recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Although people of color account for less than half of the U.S. prison population, 80 percent of Stable Diffusion's AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women. 

These models aren’t just biased—they’re fundamentally incorrect. Race and gender aren’t objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflected trends in the training data that could be caused by bias or chance—not some “objective” reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors will ultimately make models more accurate, not less. 

Biased LLMs Cause Serious Harm—Especially in the Hands of the Government

But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people’s personal freedom and access to financial resources, healthcare, housing, and more. The White House’s AI Action Plan calls for a massive increase in agencies’ use of LLMs and other AI—while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology.

We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools. In a series of executive orders, as well as his AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone’s rights at risk. 

And the Administration could easily exploit the new rules to pressure companies to make publicly available models worse, too. Corporations like healthcare companies and landlords increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm. 

We have argued against using machine learning to make predictive policing decisions or other punitive judgments for just these reasons, and will continue to protect your right not to be subject to biased government determinations influenced by machine learning.

Tori Noble

🫥 Spotify Face Scans Are Just the Beginning | EFFector 37.10

1 week 6 days ago

Catching up on your backlog of digital rights news has never been easier! EFF has a one-stop-shop to keep you up to date on the latest in the fight against censorship and surveillance—our EFFector newsletter.

This time we're covering an act of government intimidation in Florida, where the state subpoenaed a venue for surveillance video after it hosted an LGBTQ+ pride event; calling out data brokers in California for failing to respond to requests for personal data, even though state law requires them to respond; and explaining why Canada's Bill C-2 would open the floodgates for U.S. surveillance.

Don't forget to check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF Senior Speech and Privacy Activist Paige Collings covers the harms of age verification measures that are being passed across the globe. Listen now on YouTube or the Internet Archive.

Listen to EFFector

EFFECTOR 37.10 - Spotify Face Scans Are Just the Beginning

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Torture Victim’s Landmark Hacking Lawsuit Against Spyware Maker Can Proceed, Judge Rules

1 week 6 days ago
EFF is Co-Counsel in Case Detailing Harms Caused by Export of U.S. Cybersurveillance Technology and Training to Repressive Regimes

PORTLAND, OR – Saudi human rights activist Loujain Alhathloul’s groundbreaking lawsuit concerning spying software that enabled her imprisonment and torture can advance, a federal judge ruled in an opinion unsealed Tuesday.

U.S. District Judge Karin J. Immergut of the District of Oregon ruled that Alhathloul’s lawsuit against DarkMatter Group and three of its former executives can proceed on its claims under the Computer Fraud and Abuse Act – the first time that a human rights case like this has gone so far under this law. The judge dismissed other claims made under the Alien Tort Statute. 

Alhathloul is represented in the case by the Electronic Frontier Foundation (EFF), the Center for Justice and Accountability, Foley Hoag, and Tonkon Torp LLP.

"This important ruling is the first to let a lawsuit filed by the victim of a foreign government’s human rights abuses, enabled by U.S. spyware used to hack the victim’s devices, proceed in our federal courts,” said EFF Civil Liberties Director David Greene. “This case is particularly important at a time when transnational human rights abuses are making daily headlines, and we are eager to proceed with proving our case.” 

“Transparency in such times and circumstances is a cornerstone that enacts integrity and drives accountability as it offers the necessary information to understand our reality and act upon it. The latter presents a roadmap to a safer world,” Alhathloul said. “Today’s judge’s order has become a public court document only to reinforce those rooted concepts of transparency that will one day lead to accountability.” 

Alhathloul, 36, a nominee for the 2019 and 2020 Nobel Peace Prize, has been a powerful advocate for women’s rights in Saudi Arabia for more than a decade. She was at the forefront of the public campaign advocating for women’s right to drive in Saudi Arabia and has been a vocal critic of the country’s male guardianship system.  

The lawsuit alleges that defendants DarkMatter Group, Marc Baier, Ryan Adams, and Daniel Gericke were hired by the UAE to target Alhathloul and other perceived dissidents as part of the UAE’s broader cooperation with Saudi Arabia. According to the lawsuit, the defendants used U.S. cybersurveillance technology, along with their U.S. intelligence training, to install spyware on Alhathloul’s iPhone and extract data from it, including while she was in the United States and communicating with U.S. contacts. After the hack, Alhathloul was arbitrarily detained by the UAE security services and forcibly rendered to Saudi Arabia, where she was imprisoned and tortured. She is no longer in prison, but she is currently subject to an illegal travel ban and unable to leave Saudi Arabia. 

The case was filed in December 2021; Judge Immergut dismissed it in March 2023 with leave to amend, and the amended complaint was filed in May 2023.  

“This Court concludes that Plaintiff has shown that her claims arise out of Defendants’ forum-related contacts,” Judge Immergut wrote in her opinion. “Defendants’ forum-related contacts include (1) their alleged tortious exfiltration of data from Plaintiff’s iPhone while she was in the U.S. and (2) their acquisition, use, and enhancement of U.S.-created exploits from U.S. companies to create the Karma hacking tool used to accomplish their tortious conduct. Plaintiff’s CFAA claims arise out of these U.S. contacts.” 

For the judge’s opinion:  https://www.eff.org/document/alhathloul-v-darkmatter-opinion-and-order-motion-dismiss

For more about the case: https://www.eff.org/cases/alhathloul-v-darkmatter-group 

Contact: David Greene, Civil Liberties Director, davidg@eff.org
Josh Richman

Podcast Episode: Separating AI Hope from AI Hype

1 week 6 days ago

If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it.

[Embedded audio player: https://player.simplecast.com/49181a0e-f8b4-4b2a-ae07-f087ecea2ddd] Privacy info. This embed will serve content from simplecast.com

    

(You can also find this episode on the Internet Archive and on YouTube.) 

 Arvind Narayanan studies the societal impact of digital technologies with a focus on how AI does and doesn’t work, and what it can and can’t do. He believes that if we set aside all the hype, and set the right guardrails around AI’s training and use, it has the potential to be a profoundly empowering and liberating technology. Narayanan joins EFF’s Cindy Cohn and Jason Kelley to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive. 

In this episode you’ll learn about:

  • What it means to be a “techno-optimist” (and NOT the venture capitalist kind)
  • Why we can’t rely on predictive algorithms to make decisions in criminal justice, hiring, lending, and other crucial aspects of people’s lives
  • How large-scale, long-term, controlled studies are needed to determine whether a specific AI application actually lives up to its accuracy promises
  • Why “cheapfakes” tend to be more (or just as) effective than deepfakes in shoring up political support
  • How AI is and isn’t akin to the Industrial Revolution, the advent of electricity, and the development of the assembly line 

Arvind Narayanan is professor of computer science and director of the Center for Information Technology Policy at Princeton University. Along with Sayash Kapoor, he publishes the AI Snake Oil newsletter, followed by tens of thousands of researchers, policy makers, journalists, and AI enthusiasts; they also have authored “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (2024, Princeton University Press). He has studied algorithmic amplification on social media as a visiting senior researcher at Columbia University's Knight First Amendment Institute; co-authored an online textbook on fairness and machine learning; and led Princeton's Web Transparency and Accountability Project, uncovering how companies collect and use our personal information. 

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

ARVIND NARAYANAN: The people who believe that superintelligence is coming very quickly tend to think of most tasks that we wanna do in the real world as being analogous to chess, where it was the case that initially chess bots were not very good. At some point, they reached human parity. And then very quickly after that, simply by improving the hardware and then later on by improving the algorithms, including by using machine learning, they're vastly, vastly superhuman.
We don't think most tasks are like that. This is true when you talk about tasks that are integrated into the real world, you know, require common sense, require a kind of understanding of a fuzzy task description. It's not even clear when you've done well and when you've not done well.
We think that human performance is not limited by our biology. It's limited by our state of knowledge of the world, for instance. So the reason we're not better doctors is not because we're not computing fast enough, it's just that medical research has only given us so much knowledge about how the human body works and you know, how drugs work and so forth.
And the other is you've just hit the ceiling of performance. The reason people are not necessarily better writers is that it's not even clear what it means to be a better writer. It's not as if there's gonna be a magic piece of text, you know, that's gonna, like persuade you of something that you never wanted to believe, for instance, right?
We don't think that sort of thing is even possible. And so those are two reasons why in the vast majority of tasks, we think AI is not going to become better or at least much better than human professionals.

CINDY COHN: That's Arvind Narayanan explaining why AIs cannot simply replace humans for most of what we do. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN: On this show, we try to get away from the dystopian tech doomsayers – and offer space to envision a more hopeful and positive digital future that we can all work towards.

JASON KELLEY: And our guest is one of the most level-headed and reassuring voices in tech.

CINDY COHN: Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He’s also the co-author of a terrific newsletter called AI Snake Oil – which has also become a book – where he and his colleague Sayash Kapoor debunk the hype around AI and offer a clear-eyed view of both its risks and its benefits.
He is also a self-described “techno-optimist”, but he means that in a very particular way – so we started off with what that term means to him.

ARVIND NARAYANAN: I think there are multiple kinds of techno-optimism. There's the Marc Andreessen kind where, you know, let the tech companies do what they wanna do and everything will work out. I'm not that kind of techno-optimist. My kind of techno-optimism is all about the belief that we actually need folks to think about what could go wrong and get ahead of that so that we can then realize what our positive future is.
So for me, you know, AI can be a profoundly empowering and liberating technology. In fact, going back to my own childhood, this is a story that I tell sometimes, I was growing up in India and, frankly, the education system kind of sucked. My geography teacher thought India was in the Southern Hemisphere. That's a true story.

CINDY COHN: Oh my God. Whoops.

ARVIND NARAYANAN: And, you know, there weren't any great libraries nearby. And so a lot of what I knew, and I not only had to teach myself, but it was hard to access reliable, good sources of information. We had had a lot of books of course, but I remember when my parents saved up for a whole year and bought me a computer that had a CD-Rom encyclopedia on it.
That was a completely life-changing moment for me. Right. So that was the first time I could get close to this idea of having all information at our fingertips. That was even before I kind of had internet access even. So that was a very powerful moment. And I saw that as a lesson in information technology having the ability to level the playing field across different countries. And that was part of why I decided to get into computer science.
Of course I later realized that my worldview was a little bit oversimplified. Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way. And so that led to my research interest in the societal aspects of technology as opposed to more of the tech itself.
Anyway, all of that is a long-winded way of saying I see a lot of that same potential in AI that existed in the way that internet access, if done right, has the potential and, and has been bringing, a kind of liberatory potential to so many in the world who might not have the same kinds of access that we do here in the western world with our institutions and so forth.

CINDY COHN: So let's drill down a second on this because I really love this image. You know, I was a little girl growing up in Iowa and seeing the internet made me feel the same way. Like I could have access to all the same information that people who were in the big cities and had the fancy schools could have access to.
So, you know, from I think all around the world, there's this experience and depending on how old you are, it may be that you discovered Wikipedia as opposed to a CD Rom of an encyclopedia, but it's that same moment and, I think that that is the promise that we have to hang on to.
So what would an educational world look like? You know, if you're a student or a teacher, if we are getting AI right?

ARVIND NARAYANAN: Yeah, for sure. So let me start with my own experience. I kind of actually use AI a lot in the way that I learn new topics. This is something I was surprised to find myself doing given the well-known limitations of these chatbots and accuracy, but it turned out that there are relatively easy ways to work around those limitations.
Uh, one kind of example of, uh, a user adaptation to it is to always be in a critical mode where you know that out of 10 things that AI is telling you, one is probably going to be wrong. And so being in that skeptical frame of mind, actually in my view, enhances learning. And that's the right frame of mind to be in anytime you're learning anything, I think. So that's one kind of adaptation.
But there are also technology adaptations, right? Just the simplest example: If you ask AI to be in Socratic mode, for instance, in a conversation, uh, a chat bot will take on a much more appropriate role for helping the user learn as opposed to one where students might ask for answers to homework questions and, you know, end up taking shortcuts and it actually limits their critical thinking and their ability to learn and grow, right? So that's one simple example to make the point that a lot of this is not about AI itself, but how we use AI.
More broadly in terms of a vision for how integrating this into the education system could look like, I do think there is a lot of promise in personalization. Again, this has been a target of a lot of overselling that AI can be a personalized tutor to every individual. And I think there was a science fiction story that was intended as a warning sign, but a lot of people in the AI industry have taken as a, as a manual or a vision for what this should look like.
But even in my experiences with my own kids, right, they're five and three, even little things like, you know, I was, uh, talking to my daughter about fractions the other day, and I wanted to help her visualize fractions. And I asked Claude to make a little game that would help do that. And within, you know, it was 30 seconds or a minute or whatever, it made a little game where it would generate a random fraction, like three over five, and then ask the child to move a slider. And then it will divide the line segment into five parts, highlight three, show how close the child did to the correct answer, and, you know, give feedback and that sort of thing, and you can kind of instantly create that, right?
So this convinces me that there is in fact a lot of potential in AI and personalization if a particular child is struggling with a particular thing, a teacher can create an app on the spot and have the child play with it for 10 minutes and then throw it away, never have to use it again. But that can actually be meaningfully helpful.

JASON KELLEY: This kind of AI and education conversation is really close to my heart because I have a good friend who runs a school, and as soon as AI sort of burst onto the scene he was so excited for exactly the reasons you're talking about. But at the same time, a lot of schools immediately put in place sort of like, you know, Chat GPT bans and things like that.
And we've talked a little bit on EFF’s Deep Links blog about how, you know, that's probably an overstep in terms of like, people need to know how to use this, whether they're students or not. They need to understand what the capabilities are so they can have this sort of uses of it that are adapting to them rather than just sort of like immediately trying to do their homework.
So do you think schools, you know, given the way you see it, are well positioned to get to the point you're describing? I mean, how, like, that seems like a pretty far future where a lot of teachers know how AI works or school systems understand it. Like how do we actually do the thing you're describing because most teachers are overwhelmed as it is.

ARVIND NARAYANAN: Exactly. That's the root of the problem. I think there needs to be, you know, structural changes. There needs to be more funding. And I think there also needs to be more of an awareness so that there's less of this kind of adversarial approach. Uh, I think about, you know, the levers for change where I can play a little part. I can't change the school funding situation, but just as one simple example, I think the way that researchers are looking at this right now today is not the most helpful and can be reframed in a way that is much more actionable to teachers and others. So there's a lot of studies that look at what is the impact of AI in the classroom that, to me, are the equivalent of asking, is eating food good for you? It's addressing the question at the wrong level of abstraction.

JASON KELLEY: Yeah.

ARVIND NARAYANAN: You can't answer the question at that high level because you haven't specified any of the details that actually matter. Whether food is good for you entirely depends on what food it is, and if the way you studied that was to go into the grocery store and sample the first 15 items that you saw, you're measuring properties of your arbitrary sample instead of the underlying phenomenon that you wanna study.
And so I think researchers have to drill down much deeper into what does AI for education actually look like, right? If you ask the question at the level of are chatbots helping or hurting students, you're gonna end up with nonsensical answers. So I think the research can change and then other structural changes need to happen.

CINDY COHN: I heard you on a podcast talk about AI as, and saying kind of a similar point, which is that, you know, what, if we were deciding whether vehicles were good or bad, right? Nobody would, um, everyone could understand that that's way too broad a characterization for a general purpose kind of device to come to any reasonable conclusion. So you have to look at the difference between, you know, a truck, a car, a taxi, other, you know, all the, or, you know, various other kinds of vehicles in order to do that. And I think you do a good job of that in your book, at least in kind of starting to give us some categories, and the one that we're most focused on at EFF is the difference between predictive technologies, and other kinds of AI. Because I think like you, we have identified these kind of predictive technologies as being kind of the most dangerous ones we see right now in actual use. Am I right about that?

ARVIND NARAYANAN: That's our view in the book, yes, in terms of the kinds of AI that has the biggest consequences in people's lives, and also where the consequences are very often quite harmful. So this is AI in the criminal justice system, for instance, used to predict who might fail to show up to court or who might commit a crime and then kind of prejudge them on that basis, right? And deny them their freedom on the basis of something they're predicted to do in the future, which in turn is based on the behavior of other similar defendants in the past, right? So there are two questions here, a technical question and a moral one.
The technical question is, how accurate can you get? And it turns out when we review the evidence, not very accurate. There's a long section in our book at the end of which we conclude that one legitimate way to look at it is that all that these systems are predicting is the more prior arrests you have, the more likely you are to be arrested in the future.
So that's the technical aspect, and that's because, you know, it's just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of the others are spur of the moment or depend on things, random things that might happen in the future.
It's something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to somehow suspend common sense and somehow believe in the future as actually accurately predictable.

CINDY COHN: The other piece that I've seen you talk about and others talk about is that the only data you have is what the cops actually do, and that doesn't tell you about crime it tells you about what the cops do. So my friends at the human rights data analysis group called it predicting the police rather than predicting policing.
And we know there's a big difference between the crime that the cops respond to and the general crime. So it's gonna look like the people who commit crimes are the people who always commit crimes when it's just the subset that the police are able to focus on, and we know there's a lot of bias baked into that as well.
So it's not just inside the data, it's outside the data that you have to think about in terms of these prediction algorithms and what they're capturing and what they're not. Is that fair?

ARVIND NARAYANAN: That's totally, yeah, that's exactly right. And more broadly, you know, beyond the criminal justice system, these predictive algorithms are also used in hiring, for instance, and, and you know, it's not the same morally problematic kind of use where you're denying someone their freedom. But a lot of the same pitfalls apply.
I think one way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand for each open position. They're not able to manually go through all of them. So they want to try to automate the process. But that's not actually addressing what is broken about the system, and when they're doing that, the applicants are also using AI to increase the number of positions they can apply to. And so it's only escalating the arms race, right?
I think the reason this is broken is that we fundamentally don't have good ways of knowing who's going to be a good fit for which position, and so by pretending that we can predict it with AI, we're just elevating this elaborate random number generator into this moral arbiter. And there can be moral consequences of this as well.
Like, obviously, you know, someone who deserved a job might be denied that job, but it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies. And so every company that someone applies to is judging someone in the same way.
So in our view, the only way to get away from this is to make necessary organizational reforms to these broken processes. Just as one example, in software, for instance, many companies will offer people, students especially, internships, and use that to have a more in-depth assessment of a candidate. I'm not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI.

JASON KELLEY: One of the themes that you bring up in the newsletter and the book is AI evaluation. Let's say you have one of these companies with the hiring tool: why is it so hard to evaluate the sort of like, effectiveness of these AI models or the data behind them? I know that it can be, you know, difficult if you don't have access to it, but even if you do, how do we figure out the shortcomings that these tools actually have?

ARVIND NARAYANAN: There are a few big limitations here. Let's say we put aside the data access question, the company itself wants to figure out how accurate these decisions are.

JASON KELLEY: Hopefully!

ARVIND NARAYANAN: Yeah. Um, yeah, exactly. They often don't wanna know, but even if you do wanna know that in terms of the technical aspect of evaluating this, it's really the same problem as the medical system has in figuring out whether a drug works or not.
And we know how hard that is. That actually requires a randomized, controlled trial. It actually requires experimenting on people, which in turn introduces its own ethical quandaries. So you need oversight for the ethics of it, but then you have to recruit hundreds, sometimes thousands of people, follow them for a period of several years. And figure out whether the treatment group for which you either, you know, gave the drug, or in the hiring case you implemented, your algorithm has a different outcome on average from the control group for whom you either gave a placebo or in the hiring case you used, the traditional hiring procedure.
Right. So that's actually what it takes. And, you know, there's just no incentive in most companies to do this because obviously they don't value knowledge for their own sake. And the ROI is just not worth it. The effort that they're gonna put into this kind of evaluation is not going to, uh, allow them to capture the value out of it.
It brings knowledge to the public, to society at large. So what do we do here? Right? So usually in cases like this, the government is supposed to step in and use public funding to do this kind of research. But I think we're pretty far from having a cultural understanding that this is the sort of thing that's necessary.
And just like the medical community has gotten used to doing this, we need to do this whenever we care about the outcomes, right? Whether it's in criminal justice, hiring, wherever it is. So I think that'll take a while, and our book tries to be a very small first step towards changing public perception that this is not something you can somehow automate using AI. These are actually experiments on people. They're gonna be very hard to do.

JASON KELLEY: Let's take a quick moment to thank our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. You are the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast – have a listen to this.
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Arvind Narayanan.

CINDY COHN: So let's go to the other end of the AI world. The people who, you know, are, I think they call it AI safety, where they're really focused on the, you know, robots-are-gonna-kill-us kind of concerns. 'Cause that's a, that's a piece of this story as well. And I'd love to hear your take on, you know, kind of the, the, the doom loop, um, version of AI.

ARVIND NARAYANAN: Sure. Yeah. So there's uh, a whole chapter in the book where we talk about concerns around catastrophic risk from future, more powerful AI systems, and we have also elaborated on a lot of those in a new paper we released called AI as Normal Technology, if folks are interested in looking that up. And look, I mean, I'm glad that folks are studying AI safety and the unusual, let's say, kinds of risks that might arise in the future that are not necessarily direct extrapolations of the risks that we have currently.
But where we object to these arguments is the claim that we have enough knowledge and evidence of those risks being so urgent and serious that we have to put serious policy measures in place now, uh, you know, such as, uh, curbing open weights AI, for instance, because you never know who's gonna download these systems and what they're gonna do with them.
So we have a few reasons why we think those kinds of really strong arguments are going too far. One reason is that the kinds of interventions that we will need, if we want to control this at the level of the technology, as opposed to the use and deployment of the technology, those kind of non-proliferation measures as we call them, are, in our view, almost guaranteed not to work.
And to even try to enforce that you're kind of inexorably led to the idea of building a world authoritarian government that can monitor all, you know, AI development everywhere and make sure that the companies, the few companies that are gonna be licensed to do this, are doing it in a way that builds in all of the safety measures, the alignment measures, as this community calls them, that we want out of these AI models.
Because models that took, you know, hundreds of millions of dollars to build just a few years ago can now be built using a cluster of enthusiasts’ machines in a basement, right? And if we imagine that these safety risks are tied to the capability level of these models, which is an assumption that a lot of people have in order to call for these strong policy measures, then the predictions that came out of that line of thinking, in my view, have already repeatedly been falsified.
So when GPT-2 was built, right, this was back in 2019, OpenAI claimed that it was so dangerous in terms of misinformation being out there, that it was going to have potentially deleterious impacts on democracy, that they couldn't release it on an open weights basis.
That's a model that my students now build, you know, in an afternoon, just to learn the process of building models, right? So that's how cheap that has gotten six years later, and vastly more powerful models than GPT-2 have now been made available openly. And when you look at the impact on AI-generated misinformation, we did a study. We looked at the Wired database of the use of AI in election-related activities worldwide. And those fears associated with AI-generated misinformation have simply not come true, because it turns out that the purpose of election misinformation is not to convince someone from the other tribe, if you will, who is skeptical, but just to give fodder to your own tribe so that they will, you know, continue to support whatever it is you're pushing for.
And for that purpose, it doesn't have to be that convincing or that deceptive, it just has to be cheap fakes, as they're called. It's the kind of thing that anyone can do, you know, in 10 minutes with Photoshop. Even with the availability of sophisticated AI image generators, a lot of the AI misinformation we're seeing is these kinds of cheap fakes that don't even require that kind of sophistication to produce, right?
So a lot of these supposed harms really have the wrong theory in mind of how powerful technology will lead to potentially harmful societal impacts. Another great one is in cybersecurity, which, you know, as you know, I worked in for many years before I started working in AI.
And if the concern is that AI is gonna find software vulnerabilities and exploit them and exploit critical infrastructure, whatever, better than humans can, I mean, we crossed that threshold a decade or two ago. Automated methods like fuzzing have long been used to find new cyber vulnerabilities, but it turns out that this has actually helped defenders more than attackers. Because software companies can and do, and this is, you know, really almost the first line of defense, use these automated vulnerability discovery methods to find vulnerabilities and fix those vulnerabilities in their own software before even putting it out there, where attackers have a chance to, uh, to find those vulnerabilities.
So to summarize all of that, a lot of the fears are based on a kind of incorrect theory of the interaction between technology and society. Uh, we have other ways to defend; in fact, in a lot of ways, AI itself is, is the defense against some of these AI-enabled threats we're talking about. And thirdly, the defenses that involve trying to control AI are not going to work. And they are, in our view, pretty dangerous for democracy.
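As a rough illustration of the automated vulnerability discovery Arvind mentions, the toy fuzzer below (in Python; the parser and its bug are invented, and this is not how production fuzzers like AFL or libFuzzer are built) throws random inputs at a function and records the ones that crash it, which is how defenders can find and fix bugs before shipping.

    import random

    def parse_record(data: bytes) -> int:
        # Hypothetical buggy parser: it assumes every input has at least a 4-byte header.
        return data[0] + data[1] + data[2] + data[3]

    def fuzz(iterations: int = 10_000) -> list:
        crashing_inputs = []
        for _ in range(iterations):
            length = random.randint(0, 8)
            sample = bytes(random.randint(0, 255) for _ in range(length))
            try:
                parse_record(sample)
            except IndexError:
                # A defender would log this input and fix the bug before release.
                crashing_inputs.append(sample)
        return crashing_inputs

    print(f"found {len(fuzz())} crashing inputs")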

CINDY COHN: Can you talk a little bit about AI as normal technology? Because I think this is a world that we're headed into that you've been thinking about a little more. 'Cause we're, you know, we're not going back.
Anybody who hangs out with people who write computer code, knows that using these systems to write computer code is like normal now. Um, and it would be hard to go back even if you wanted to go back. Um, so tell me a little bit about, you know, this, this version of, of AI as normal technology. 'cause I think it, it feels like the future now, but actually I think depending, you know, what do they say, the future is here, it's just not evenly distributed. Like it is not evenly distributed yet. So what, what does it look like?

ARVIND NARAYANAN: Yeah, so a big part of the paper takes seriously the prospect of cognitive automation using AI, that AI will at some point be able to do, you know, with some level of accuracy and reliability, most of the cognitive tasks that are valuable in today's economy at least, and asks, how quickly will this happen? What are the effects going to be?
So a lot of people who think this will happen think that it's gonna happen this decade, and a lot of this, you know, uh, brings a lot of fear to people and a lot of very short-term thinking. But our paper looks at it in a very different way. So first of all, we think that even if this kind of cognitive automation is achieved, the industrial revolution is a useful analogy: a lot of physical tasks became automated, but it didn't mean that human labor was superfluous, because we don't take powerful physical machines like cranes or whatever and allow them to operate unsupervised, right?
So with those physical tasks that became automated, the meaning of what labor is, is now all about the supervision of those physical machines that are vastly more physically powerful than humans. So we think, and this is just an analogy, but we have a lot of reasoning in the paper for why we think this will be the case, that what jobs might mean in a future with cognitive automation is primarily around the supervision of AI systems.
And so for us, that's a, that's a very positive view. We think that for the most part, those will still be fulfilling jobs. In certain sectors there might be catastrophic impacts, but it's not that across the board you're gonna have drop-in replacements for human workers that are gonna make human jobs obsolete. We don't really see that happening, and we also don't see this happening in the space of a few years.
We talk a lot about what are the various sources of inertia that are built into the adoption of any new technology, especially general purpose technology like electricity. We talk about, again, another historic analogy where factories took several decades to figure out how to replace their steam boilers in a useful way with electricity, not because it was technically hard, but because it required organizational innovations, like changing the whole layout of factories around the concept of the assembly line. So we think through what some of those changes might have to be when it comes to the use of AI. And we, you know, we say that we have a, a few decades to, to make this transition and that, even when we do make the transition, it's not going to be as scary as a lot of people seem to think.

CINDY COHN: So let's say we're living in the future, the Arvind future, where we've gotten all these AI questions right. What does it look like for, you know, the average person or somebody doing a job?

ARVIND NARAYANAN: Sure. A few big things. I wanna use the internet as an analogy here. Uh, 20, 30 years ago, we used to kind of log onto the internet, do a task, and then log off. But now, the internet is simply the medium through which all knowledge work happens, right? So we think that if we get this right, in the future AI is gonna be the medium through which knowledge work happens. It's kind of there in the background and automatically doing stuff that we need done without us necessarily having to go to an AI application and ask it something and then bring the result back to something else.
There is this famous definition of AI that AI is whatever hasn't been done yet. So what that means is that when a technology is new and it's not working that well and its effects are double-edged, that's when we're more likely to call it AI.
But eventually it starts working reliably and it kind of fades into the background and we take it for granted as part of our digital or physical environment. And we think that that's gonna happen with generative AI to a large degree. It's just gonna be invisibly making all knowledge work a lot better, and human work will be primarily about exercising judgment over the AI work that's happening pervasively, as opposed to humans being the ones doing, you know, the nuts and bolts of the thinking in any particular occupation.
I think another one is, uh, I hope that we will have gotten better at recognizing the things that are intrinsically human and putting more human effort into them, that we will have freed up more human time and effort for those things that matter. So some folks, for instance, are saying, oh, let's automate government and replace it with a chatbot. Uh, you know, we point out that that's missing the point of democracy, which is that, you know, if a chatbot is making decisions, it might be more efficient in some sense, but it's not in any way reflecting the will of the people. So whatever people's concerns are with government being inefficient, automation is not going to be the answer. We can think about structural reforms, and we certainly should; you know, maybe that will, uh, free up more human time to do the things that are intrinsically human and really matter, such as how we govern ourselves and so forth.
Um. And, um, maybe if I can have one last thought around what this positive vision of the future looks like? Uh, I, I would go back to the very thing we started from, which is AI and education. I do think there's orders of magnitude more human potential to open up, and AI is not a magic bullet here.
You know, technology on, on the whole is only one small part of it, but I think as we more generally become wealthier and we have, you know, lots of different reforms. Uh, hopefully one of those reforms is going to be schools and education systems, uh, being much better funded, being able to operate much more effectively, and, you know, every child one day being able to perform, uh, as well as the highest-achieving children today.
And there's, there's just an enormous range. And so being able to improve human potential, to me is the most exciting thing.

CINDY COHN: Thank you so much, Arvind.

ARVIND NARAYANAN: Thank you Jason and Cindy. This has been really, really fun.

CINDY COHN:  I really appreciate Arvind's hopeful and correct idea that actually what most of us do all day isn't really reducible to something a machine can replace. That, you know, real life just isn't like a game of chess or, you know, uh, the, the test you have to pass to be a lawyer or, or things like that. And that there's a huge gap between, you know, the actual job and the thing that the AI can replicate.

JASON KELLEY:  Yeah, and he's really thinking a lot about how the debates around AI in general are framed at this really high level, which seems incorrect, right? I mean, it's sort of like asking if food is good for you, or vehicles are good for you, but he's much more nuanced, you know? AI is good in some cases, not good in others. And his big takeaway for me was that, you know, people need to be skeptical about how they use it. They need to be skeptical about the information it gives them, and they need to sort of learn what methods they can use to make AI work with them and for them, and how to make it work for the application they're using it for.
It's not something you can just apply, you know, wholesale across anything which, which makes perfect sense, right? I mean, no one I think thinks that, but I think industries are plugging AI into everything or calling it AI anyway. And he's very critical of that, which I think is, is good and, and most people are too, but it's happening anyway. So it's good to hear someone who's really thinking about it this way point out why that's incorrect.

CINDY COHN:  I think that's right. I like the idea of normalizing AI and thinking about it as a general purpose tool that might be good for some things and bad for others, honestly, the same way computers are: computers are good for some things and bad for others. So, you know, we talk about vehicles and food in the conversation, but actually I think you could talk about it for, you know, computing more broadly.
I also liked his response to the doomers, you know, pointing out that a lot of the harms that people are claiming will end the world kind of have the wrong theory in mind about how a powerful technology will lead to bad societal impact. You know, he's not saying that it won't, but he's pointing out that, you know, in cybersecurity for example, you know, some of the AI methods which had been around for a while, he talked about fuzzing, but there are others, you know, that those techniques, while they were, you know, bad for old cybersecurity, actually have spurred greater protections in cybersecurity. And the lesson is one we learn all the time in, in security especially: the cat and mouse game is just gonna continue.
And anybody who thinks they've checkmated, either on the good side or the bad side, is probably wrong. And that I think is an important insight so that, you know, we don't get too excited about the possibilities of AI, but we also don't go all the way to the, the doomers side.

JASON KELLEY:  Yeah. You know, the normal technology thing was really helpful for me, right? It's something that, like you said with computers, it's a tool that, that has applications in some cases and not others. And, you know, I don't know if anyone thought when the internet was developed that this was going to end the world or save it. I guess some people might have thought either one, but you know, neither is true, right? And you know, it's been many years now and we're still learning how to make the internet useful, and I think it'll be a long time before we've necessarily figured out how AI can be useful. But there's a lot of lessons we can take away from the growth of the internet about how to apply AI.
You know, my dishwasher, I don't think, needs to have wifi. I don't think it needs to have AI either. I'll probably end up buying one that has to have those things because that's the way the market goes. But it seems like these are things we can learn from; the way we've sort of, uh, figured out where the applications are for these different general purpose technologies in the past is something we can continue to do for AI.

CINDY COHN:  Yeah, and honestly it points to competition and user control, right? I mean, the reason I think a lot of people are feeling stuck with AI is because we don't have an open market for systems where you can decide, I don't want AI in my dishwasher, or I don't want surveillance in my television.
And that's a market problem. And one of the things that he said a lot is that, you know, "just add AI" doesn't solve problems with broken institutions. And I think it circles back to the fact that we don't have a functional market, we don't have real consumer choice right now. And so some of the fears about AI, and it's not just consumers, I mean worker choice and other things as well, really come down to the problems in those systems and the way power works in those systems.
If you just center this on the tech, you're kind of missing the bigger picture and also the things that we might need to do to address it. I wanted to circle back to what you said about the internet because of course it reminds me of Barlow's Declaration of the Independence of Cyberspace, which, you know, has been interpreted by a lot of people as saying that the internet would magically make everything better. And, you know, Barlow told me directly, like, you know, what he said was that by projecting a positive version of the online world and speaking as if it was inevitable, he was trying to bring it about, right?
And I think this might be another area where we do need to bring about a better future, um, and we need to posit a better future, but we also have to be clear-eyed about the, the risks and, you know, whether we're headed in the right direction or not, despite what we, what we hope for.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

 

Josh Richman

Fake Clinics Quietly Edit Their Websites After Being Called Out on HIPAA Claims

2 weeks ago

In a promising sign that public pressure works, several crisis pregnancy centers (CPCs, also known as “fake clinics”) have quietly scrubbed misleading language about privacy protections from their websites. 

Earlier this year, EFF sent complaints to attorneys general in eight states (FL, TX, AR, MO, TN, OK, NE, and NC), asking them to investigate these centers for misleading the public with false claims about their privacy practices—specifically, falsely stating or implying that they are bound by the Health Insurance Portability and Accountability Act (HIPAA). These claims are especially deceptive because many of these centers are not licensed medical clinics or do not have any medical providers on staff, and thus are not subject to HIPAA’s protections.

Now, after an internal follow-up investigation, we’ve found that our efforts are already bearing fruit: Of the 21 CPCs we cited as exhibits in our complaints, six have completely removed HIPAA references from their websites, and one has made partial changes (removed one of two misleading claims). Notably, every center we flagged in our letters to Texas AG Ken Paxton and Arkansas AG Tim Griffin has updated its website—a clear sign that clinics in these states are responding to scrutiny.

While 14 remain unchanged, this is a promising development. These centers are clearly paying attention—and changing their messaging. We haven’t yet received substantive responses from the state attorneys general beyond formal acknowledgements of our complaints, but these early results confirm what we’ve long believed: transparency and public pressure work.

These changes (often quiet edits to privacy policies on their websites or deleting blog posts) signal that the CPC network is trying to clean up their public-facing language in the wake of scrutiny. But removing HIPAA references from a website doesn’t mean the underlying privacy issues have been fixed. Most CPCs are still not subject to HIPAA, because they are not licensed healthcare providers. They continue to collect sensitive information without clearly disclosing how it’s stored, used, or shared. And in the absence of strong federal privacy laws, there is little recourse for people whose data is misused. 

These clinics have misled patients who are often navigating complex and emotional decisions about their health, misrepresented themselves as bound by federal privacy law, and falsely referred people to the U.S. Department of Health and Human Services for redress—implying legal oversight and accountability. They made patients believe their sensitive data was protected, when in many cases, it was shared with affiliated networks, or even put on the internet for anyone to see—including churches or political organizations.

That’s why we continue to monitor these centers—and call on state attorneys general to do the same. 

Rindala Alajaji

Americans, Be Warned: Lessons From Reddit’s Chaotic UK Age Verification Rollout

2 weeks 4 days ago

Age verification has officially arrived in the UK thanks to the Online Safety Act (OSA), a UK law requiring online platforms to check that all UK-based users are at least eighteen years old before allowing them to access broad categories of “harmful” content that go far beyond graphic sexual content. EFF has extensively criticized the OSA for eroding privacy, chilling speech, and undermining the safety of the children it aims to protect. Now that it’s gone into effect, these countless problems have begun to reveal themselves, and the absurd, disastrous outcome illustrates why we must work to avoid this age-verified future at all costs.

Perhaps you’ve seen the memes as large platforms like Spotify and YouTube attempt to comply with the OSA, while smaller sites—like forums focused on parenting, green living, and gaming on Linux—either shut down or cease some operations rather than face massive fines for not following the law’s vague, expensive, and complicated rules and risk assessments. 

But even Reddit, a site that prizes anonymity and has regularly demonstrated its commitment to digital rights, was doomed to fail in its attempt to comply with the OSA. Though Reddit is not alone in bowing to the UK mandates, it provides a perfect case study and a particularly instructive glimpse of what the age-verified future would look like if we don’t take steps to stop it.

It’s Not Just Porn—LGBTQ+, Public Health, and Politics Forums All Behind Age Gates

On July 25, users in the UK were shocked and rightfully revolted to discover that their favorite Reddit communities were now locked behind age verification walls. Under the new policies, UK Redditors were asked to submit a photo of their government ID and/or a live selfie to Persona, the for-profit vendor that Reddit contracts with to provide age verification services. 

For many, this was the first time they realized what the OSA would actually mean in practice—and the outrage was immediate. As soon as the policy took effect, reports emerged from users that subreddits dedicated to LGBTQ+ identity and support, global journalism and conflict reporting, and even public health-related forums like r/periods, r/stopsmoking, and r/sexualassault were walled off to unverified users. A few more absurd examples of the communities that were blocked off, according to users, include: r/poker, r/vexillology (the study of flags), r/worldwar2, r/earwax, r/popping (the home of grossly satisfying pimple-popping content), and r/rickroll (yup). This is, again, exactly what digital rights advocates warned about. 

Every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose? 

The OSA defines "harmful" in multiple ways that go far beyond pornography, so the obstacles UK users are experiencing are exactly what the law intended. Like other online age restrictions, the OSA obstructs way more than kids’ access to clearly adult sites. When fines are at stake, platforms will always default to overcensoring. So every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose? 

Again, the fact that the OSA has forced Reddit, the “heart of the internet,” to overcensor user-generated content is noteworthy. Reddit has historically succeeded where many others have failed in safeguarding digital rights—particularly the free speech and privacy of its users. It may not be perfect, but Reddit has worked harder than many large platforms to defend Section 230, a key law in the US protecting free speech online. It was one of the first platforms to endorse the Santa Clara Principles, and it was the only platform to receive every star in EFF’s 2019 “Who Has Your Back” (Censorship Edition) report due to its unique approach to moderation, its commitment to notice and appeals of moderation decisions, and its transparency regarding government takedown requests. Reddit’s users are particularly active in the digital rights world: in 2012, they helped EFF and other advocates defeat SOPA/PIPA, a dangerous censorship law. Redditors were key in forcing members of Congress to take a stand against the bill, and were the first to declare a “blackout day,” a historic moment of online advocacy in which over a hundred thousand websites went dark to protest the bill. And Reddit is the only major social media platform where EFF doesn’t regularly share our work—because its users generally do so on their own. 

If a platform with a history of fighting for digital rights is forced to overcensor, how will the rest of the internet look if age verification spreads? Reddit’s attempts to comply with the OSA show the urgency of fighting these mandates on every front. 

We cannot accept these widespread censorship regimes as our new norm. 

Rollout Chaos: The Tech Doesn’t Even Work! 

In the days after the OSA became effective, backlash to the new age verification measures spread across the internet like wildfire as UK users made their hatred of these new policies clear. VPN usage in the UK soared, over 500,000 people signed a petition to repeal the OSA, and some shrewd users even discovered that video game face filters and meme images could fool Persona’s verification software. But these loopholes aren’t likely to last long, as we can expect the age-checking technology to continuously adapt to new evasion tactics. As good as they may be, VPNs cannot save us from the harms of age verification. 

In effect, the OSA and other age verification mandates like it will increase the risk of harm, not reduce it. 

Even when the workarounds inevitably cease to function and the age-checking procedures calcify, age verification measures still will not achieve their singular goal of protecting kids from so-called “harmful” online content. Teenagers will, uh, find a way to access the content they want. Instead of going to a vetted site like Pornhub for explicit material, curious young people (and anyone else who does not or cannot submit to age checks) will be pushed to the sketchier corners of the internet—where there is less moderation, more safety risk, and no regulation to prevent things like CSAM or non-consensual sexual content. In effect, the OSA and other age verification mandates like it will increase the risk of harm, not reduce it. 

If that weren’t enough, the slew of practical issues that have accompanied Reddit’s rollout also reveals the inadequacy of age verification technology to meet our current moment. For example, users reported various bugs in the age-checking process, like being locked out or asked repeatedly for ID despite complying. UK-based subreddit moderators also reported facing difficulties either viewing NSFW post submissions or vetting users’ post history, even when the particular submission or subreddit in question was entirely SFW. 

Taking all of this together, it is excessively clear that age-gating the internet is not the solution to kids’ online safety. Whether due to issues with the discriminatory and error-prone technology, or simply because they lack either a government ID or personal device of their own, millions of UK internet users will be completely locked out of important social, political, and creative communities. If we allow age verification, we welcome new levels of censorship and surveillance with it—while further lining the pockets of big tech and the slew of for-profit age verification vendors that have popped up to fill this market void.

Americans, Take Heed: It Will Happen Here Too

The UK age verification rollout, chaotic as it is, is a proving ground for platforms that are looking ahead to implementing these measures on a global scale. In the US, there’s never been a better time to get educated and get loud about the dangers of this legislation. EFF has sounded this alarm before, but Reddit’s attempts to comply with the OSA show its urgency: age verification mandates are censorship regimes, and in the US, porn is just the tip of the iceberg.

US legislators have been disarmingly explicit about their intentions to use restrictions on sexually explicit content as a Trojan horse that will eventually help them censor all sorts of other perfectly legal (and largely uncontroversial) content. We’ve already seen them move the goalposts from porn to transgender and other LGBTQ+ content. What’s next? Sexual education materials, reproductive rights information, DEI or “critical race theory” resources—the list goes on. Under KOSA, which last session passed the Senate with an enormous majority but did not advance in the House, we would likely see results here similar to those we’re seeing in the UK under the OSA.

Nearly half of U.S. states have some sort of online age restrictions in place already, and the Supreme Court recently paved the way for even more age blocks on online sexual content. But Americans—including those under 18—still have a First Amendment right to view content that is not sexually explicit, and EFF will continue to push back against any legislation that expands the age mandates beyond porn, in statehouses, in courts, and in the streets. 

What can you do?

Call or email your representatives to oppose KOSA and any other federal age-checking mandate. Tell your state lawmakers, wherever you are, to oppose age verification laws. Make your voice heard online, and talk to your friends and family. Tell them about what’s happening to the internet in the UK, and make sure they understand what we all stand to lose—online privacy, security, anonymity, and expression—if the age-gated internet becomes a global reality. EFF is building a coalition to stop this enormous violation of digital rights. Join us today.

Molly Buckley

EFF to Court: Chatbot Output Can Reflect Human Expression

3 weeks ago

When a technology can have a conversation with you, it’s natural to anthropomorphize that technology—to see it as a person. It’s tempting to see a chatbot as a thinking, speaking robot, but this gives the technology too much credit. This can also lead people—including judges in cases about AI chatbots—to overlook the human expressive choices connected to the words that chatbots produce. If chatbot outputs had no First Amendment protections, the government could potentially ban chatbots that criticize the administration or reflect viewpoints the administration disagrees with.

In fact, chatbot output not only can reflect the expressive choices of the chatbot’s creators and users, but also implicates users’ right to receive information. That’s why EFF and the Center for Democracy and Technology (CDT) have filed an amicus brief in Garcia v. Character Technologies explaining how large language models work and the various kinds of protected speech at stake.

Among the questions in this case is the extent to which free speech protections extend to the creation, dissemination, and receipt of chatbot outputs. Our brief explains how the expressive choices of a chatbot developer can shape its output, such as during reinforcement learning, when humans are instructed to give positive feedback to responses that align with the scientific consensus around climate change and negative feedback for denying it (or vice versa). This chain of human expressive decisions extends from early stages of selecting training data to crafting a system prompt. A user’s instructions are also reflected in chatbot output. Far from being the speech of a robot, chatbot output often reflects human expression that is entitled to First Amendment protection.
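As a rough sketch of the reinforcement learning step described above, the toy example below (in Python; the candidate responses, scores, and update rule are invented and are not any vendor's actual training code) shows how repeated human ratings can steer which of two candidate responses a system comes to favor, reflecting the reviewers' and developer's expressive choices.

    # Toy preference scores for two candidate responses to the same prompt.
    scores = {
        "Human activity is the main driver of recent climate change.": 0.0,
        "Climate change is a hoax.": 0.0,
    }

    def apply_human_feedback(scores, response, rating, learning_rate=0.5):
        # Nudge a response's preference score up (+1) or down (-1) per the reviewer's rating.
        scores[response] += learning_rate * rating

    # Reviewers are instructed to reward the response aligned with the scientific
    # consensus and penalize the other (or vice versa, at the developer's direction).
    for _ in range(10):
        apply_human_feedback(scores, "Human activity is the main driver of recent climate change.", +1)
        apply_human_feedback(scores, "Climate change is a hoax.", -1)

    # After many such updates, the rewarded response dominates the system's behavior.
    print(max(scores, key=scores.get))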

In addition, the right to receive speech in itself is protected—even when the speaker would have no independent right to say it. Users have a right to access the information chatbots provide.

None of this is to suggest that chatbots cannot be regulated or that the harms they cause cannot be addressed. The First Amendment simply requires that those regulations be appropriately tailored to the harm to avoid unduly burdening the right to express oneself through the medium of a chatbot, or to receive the information it provides.

We hope that our brief will be helpful to the court as the case progresses, as the judge decided not to send the question up on appeal at this time.

Read our brief below.

Katharine Trendacosta

No Walled Gardens. No Gilded Cages.

3 weeks ago

Sometimes technology feels like a gilded cage, and you’re not the one holding the key. Most people can’t live off the grid, so how do we stop data brokers who track and exploit you for money? Tech companies that distort what you see and hear? Governments that restrict, censor, and intimidate? No one can do it alone, but EFF was built to protect your rights. With your support, we can take back control.

Join EFF

With 35 years of deep expertise and the support of our members, EFF is delivering bold action to solve the biggest problems facing tech users: suing the government for overstepping its bounds; empowering people and lawmakers to help them hold the line; and creating free, public interest software tools, guides, and explainers to make the web better.

EFF members enable thousands of hours of our legal work, activism, investigation, and software development for the public good. Join us today.

No Walled Gardens. No Gilded Cages.

Think about it: in the face of rising authoritarianism and invasive surveillance, where would we be without an encrypted web? Your security online depends on researchers, hackers, and creators who are willing to take privacy and free speech rights seriously. That's why EFF will eagerly protect the beating heart of that movement at this week's summer security conferences in Las Vegas. This renowned summit of computer hacking events—BSidesLV, Black Hat USA, and DEF CON—illustrates the key role a community can play in helping you break free of the trappings of technology and retake the reins.

For summer security week, EFF’s DEF CON 33 t-shirt design Beyond the Walled Garden by Hannah Diaz is your gift at the Gold Level membership. Look closer to discover this year’s puzzle challenge! Many thanks to our volunteer puzzlemasters jabberw0nky and Elegin for all their work.

A Token of Appreciation

Become a recurring monthly or annual Sustaining Donor this week and you'll get a numbered EFF35 Challenge Coin. Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.

Our team is on a relentless mission to protect your civil liberties and human rights wherever they meet tech, but it’s only possible with your help.

Donate Today

Break free of tech’s walled gardens.

Aaron Jue

Blocking Access to Harmful Content Will Not Protect Children Online, No Matter How Many Times UK Politicians Say So

3 weeks ago

The UK is having a moment. In late July, new rules took effect that require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

During the four years that the legislation behind these changes—the Online Safety Act (OSA)—was debated in Parliament, and in the two years since, while the UK’s independent online regulator, Ofcom, devised the implementing regulations, experts from across civil society repeatedly flagged concerns about the impact of this law on both adults’ and children’s rights. Yet politicians in the UK pushed ahead and enacted one of the most contentious age verification mandates that we’ve seen.

The case of safety online is not solved through technology alone.

No one—no matter their age—should have to hand over their passport or driver’s license just to access legal information and speak freely. As we’ve been saying for many years now, the approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the children that it is trying to protect. Here are five reasons why:

Age Verification Systems Lead to Less Privacy 

Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy. To keep children out of a website or away from certain content, online services need to confirm the ages of all their visitors, not just children—for example by asking for government-issued documentation or by using biometric data, such as face scans, that are shared with third-party services like Yoti or Persona to estimate that the age of the user is over 18. This means that adults and children must all share their most sensitive and personal information with online services to access a website. 

Once this information is shared to verify a user's age, there’s no way for people to know how it's going to be retained or used by that company, including whether it will be sold or shared with even more third parties like data brokers or law enforcement. The more information a website collects, the more chances there are for that information to get into the hands of a marketing company, a bad actor, or a state actor or someone who has filed a legal request for it. If a website, or one of the intermediaries it uses, misuses or mishandles the data, the visitor might never find out. There is also a risk that this data, once collected, can be linked to other unrelated web activity, creating an aggregated profile of the user that grows more valuable as each new data point is added. 

As we argued extensively during the passage of the Online Safety Act, any attempt to protect children online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. But with the Online Safety Act, users are being forced to trust that platforms (and whatever third-party verification services they choose to partner with) are guardrailing users’ most sensitive information—not selling it through the opaque supply chains that allow corporations and data brokers to make millions. The solution is not to come up with a more sophisticated technology, but to simply not collect the data in the first place.

This Isn’t Just About Safety—It’s Censorship

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But under the Online Safety Act, the UK government—with Ofcom—are deciding what speech young people have access to, and are forcing platforms to remove any content considered harmful. As part of this, platforms are required to build “safer algorithms” to ensure that children do not encounter harmful content, and introduce effective content moderation systems to remove harmful content when platforms become aware of it. 

Because the OSA threatens large fines or even jail time for any non-compliance, platforms are forced to over-censor content to ensure that they do not face any such liability. Reports are already showing the censorship of content that falls outside the parameters of the OSA, such as footage of police attacking pro-Palestinian protestors being blocked on X, the subreddit r/cider—yes, the beverage—asking users for photo ID, and smaller websites closing down entirely. UK-based organisation Open Rights Group are tracking this censorship with their tool, Blocked.

We know that the scope for so-called “harmful content” is subjective and arbitrary, but it also often sweeps up content like pro-LGBTQ+ speech. Policies like the OSA, that claim to “protect children” or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies. But in all scenarios, legal content is being removed at the discretion of government agencies and online platforms, all under the guise of protecting children. 

Children deserve a more intentional and holistic approach to protecting their safety and privacy online.

People Do Not Want This 

Users in the UK have been clear in showing that they do not want this. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK. The BBC reported that one app, Proton VPN, reported an 1,800% spike in UK daily sign-ups after the age check rules took effect. A similar spike in searches for VPNs was evident in January when Florida joined the ever growing list of U.S. states in implementing an age verification mandate on sites that host adult content, including pornography websites like Pornhub. 

Whilst VPNs may be able to disguise the source of your internet activity, they are not foolproof or a solution to age verification laws. Ofcom has already started discouraging their use, and with time, it will become increasingly difficult for VPNs to effectively circumvent age verification requirements as enforcement of the OSA adapts and deepens. VPN providers will struggle to keep up with these constantly changing laws to ensure that users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic. 

Some politicians in the Labour Party argued that a ban on VPNs will be essential to prevent users circumventing age verification checks. But banning VPNs, just like introducing age verification measures, will not achieve this goal. It will, however, function as an authoritarian control on accessing information in the UK. If you are navigating protecting your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy—a valuable resource for anyone looking to use these tools.

 Alongside increased VPN usage, a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures. In its official response to the petition, the UK government said that it “has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.” This is not good enough: the government must immediately treat the reasonable concerns of people in the UK with respect, not disdain, and revisit the OSA.

Users Will Be Exposed to Amplified Discrimination 

To check users' ages, three types of systems are typically deployed: age verification, which requires a person to prove their age and identity; age assurance, whereby users are required to prove that they are of a certain age or age range, such as over 18; or age estimation, which typically describes the process or technology of estimating ages to a certain range. The OSA requires platforms to check ages through age assurance to prove that those accessing platforms are over 18, but leaves the specific tool for measuring this at the platforms’ discretion. This may therefore involve uploading a government-issued ID, or submitting a face scan to an app that will then use a third-party platform to “estimate” your age.

From what we know about systems that use face scanning in other contexts, such as face recognition technology used by law enforcement, even the best technology is susceptible to mistakes and misidentification. Just last year, a legal challenge was launched against the Met Police after a community worker was wrongly identified and detained following a misidentification by the Met’s live facial recognition system. 

For age assurance purposes, we know that the technology at best has an error range of over a year, which means that users may risk being incorrectly blocked or locked out of content by erroneous estimations of their age—whether unintentionally or due to discriminatory algorithmic patterns that incorrectly determine people’s identities. These algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance that people should not have to be subject to just to access content that the government could consider harmful.

Not Everyone Has Access to an ID or Personal Device 

Many advocates of the ‘digital transition’ introduce document-based verification requirements or device-based age verification systems on the assumption that every individual has access to a form of identification or their own smartphone. But this is not true. In the UK, millions of people don’t hold a form of identification or own a personal mobile device, instead sharing with family members or using public devices like those at a library or internet cafe. Yet because age checks under the OSA involve checking a user’s age through government-issued ID documents or face scans on a mobile device, millions of people will be left excluded from online speech and will lose access to much of the internet. 

These are primarily lower-income or older people who are often already marginalized, and for whom the internet may be a critical part of life. We need to push back against age verification mandates like the Online Safety Act, not just because they make children less safe online, but because they risk undermining crucial access to digital services, eroding privacy and data protection, and limiting freedom of expression. 

The Way Forward 

The case of safety online is not solved through technology alone, and children deserve a more intentional and holistic approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves. Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians in the UK to look into what is best, and not what is easy.

Paige Collings

EFF at the Las Vegas Security Conferences

3 weeks ago

It’s time for EFF’s annual journey to Las Vegas for the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. Our lawyers, activists, and technologists are always excited to support this community of security researchers and tinkerers—the folks who push computer security forward (and somehow survive the Vegas heat in their signature black hoodies).  

As in past years, EFF attorneys will be on-site to assist speakers and attendees. If you have legal concerns about an upcoming talk or sensitive infosec research—during the Las Vegas conferences or anytime—don’t hesitate to reach out at info@eff.org. Share a brief summary of the issue, and we’ll do our best to connect you with the right resources. You can also learn more about our work supporting technologists on our Coders’ Rights Project page. 

Be sure to swing by the expo areas at all three conferences to say hello to your friendly neighborhood EFF staffers! You’ll probably spot us in the halls, but we’d love for you to stop by our booths to catch up on our latest work, get on our action alerts list, or become an EFF member! For the whole week, we’ll have our limited-edition DEF CON 33 t-shirt on hand—I can’t wait to see them take over each conference! 


EFF Staff Presentations

Ask EFF at BSides Las Vegas
At this interactive session, our panelists will share updates on critical digital rights issues and EFF's ongoing efforts to safeguard privacy, combat surveillance, and advocate for freedom of expression.
WHEN: Tuesday, August 5, 15:00
WHERE: Skytalks at the Tuscany Suites Hotel & Casino

Recording PCAPs from Stingrays With a $20 Hotspot
What if you could use Wireshark on the connection between your cellphone and the tower it's connected to? In this talk we present Rayhunter, a cell site simulator detector built on top of a cheap cellular hotspot. 
WHEN: Friday, August 8, 13:30
WHERE: DEF CON, LVCC - L1 - EHW3 - Track 1

Rayhunter Build Clinic
Come out and build EFF's Rayhunter! ($10 materials fee as an EFF donation)
WHEN: Friday, August 8 at 14:30
WHERE: DEF CON, Hackers.Town Community Space

Protect Your Privacy Online and on the Streets with EFF Tools
The Electronic Frontier Foundation (EFF) has been protecting your rights to privacy, free expression, and security online for 35 years! One important way we push for these freedoms is through our free, open source tools. We’ll provide an overview of how these tools work, including Privacy Badger, Rayhunter, Certbot, and Surveillance Self-Defense, and how they can help keep you safe online and on the streets.
WHEN: Friday, August 8 at 17:00
WHERE: DEF CON, Community Stage

Rayhunter Internals
Rayhunter is an open source project from EFF to detect IMSI catchers. In this follow up to our main stage talk about the project we will take a deep dive into the internals of Rayhunter. We will talk about the architecture of the project, what we have gained by using Rust, porting to other devices, how to jailbreak new devices, the design of our detection heuristics, open source shenanigans, and how we analyze files sent to us.
WHEN: Saturday, August 9, at 12:00
WHERE: DEF CON, Hackers.Town Community Space

Ask EFF at DEF CON 33
We're excited to answer your burning questions on pressing digital rights issues! Our expert panelists will offer brief updates on EFF's work defending your digital rights, before opening the floor for attendees to ask their questions. This dynamic conversation centers challenges DEF CON attendees actually face, and is an opportunity to connect on common causes.
WHEN: Saturday, August 9, at 14:30
WHERE: DEF CON, LVCC - L1 - EHW3 - Track 4

EFF Benefit Poker Tournament at DEF CON 33

The EFF Benefit Poker Tournament is back for DEF CON 33! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Join us at the Planet Hollywood Poker Room as a player or spectator. Play for glory. Play for money. Play for the future of the web. 
WHEN: Friday, August 8, 2025 - 12:00-15:00
WHERE: Planet Hollywood Poker Room, 3667 Las Vegas Blvd South, Las Vegas, NV 89109

Beard and Mustache Contest at DEF CON 33

Yes, it's exactly what it sounds like. Join EFF at the intersection of facial hair and hacker culture. Spectate, heckle, or compete in any of four categories: Full Beard, Partial Beard, Moustache Only, or Freestyle (anything goes, so create your own facial apparatus!). Prizes! Donations to EFF! Beard oil! Get the latest updates.
WHEN: Saturday, August 9, 10:00-12:00
WHERE: DEF CON, Contest Stage (Look for the Moustache Flag)

Tech Trivia Contest at DEF CON 33

Join us for some tech trivia on Saturday, August 9 at 7:00 PM! EFF's team of technology experts have crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Trophy and EFF swag pack. The second and third place teams will also win great EFF gear.
WHEN: Saturday, August 9, 19:00-22:00
WHERE: DEF CON, Contest Stage

Join the Cause!

Come find our table at BSidesLV (Middle Ground), Black Hat USA (back of the Business Hall), and DEF CON (Vendor Hall) to learn more about the latest in online rights, get on our action alert list, or donate to become an EFF member. We'll also have our limited-edition DEF CON 33 shirts available starting Monday at BSidesLV! These shirts have a puzzle incorporated into the design. Snag one online for yourself starting on Tuesday, August 5 if you're not in Vegas!

Join EFF

Support Security & Digital Innovation

Christian Romero

Digital Rights Are Everyone’s Business, and Yours Can Join the Fight!

3 weeks 1 day ago

Companies large and small are doubling down on digital rights, and we’re excited to see more and more of them join EFF. We’re first and always an organization who fights for users, so you might be asking: Why does EFF work with corporate donors, and why do they want to work with us?

SHOW YOUR COMPANY SUPPORTS A BETTER DIGITAL FUTURE

JOIN EFF TODAY

Businesses want to work with EFF for two reasons:

  1. They, their employees, and their customers believe in EFF’s values.
  2. They know that when EFF wins, we all win.

Both customers and employees alike care about working with organizations they know share their values. And issues like data privacy, sketchy uses of surveillance, and free expression are pretty top of mind for people these days. Research shows that today’s working adults take philanthropy seriously, whether they’re giving organizations their money or their time. For younger generations (like the Millennial EFFer writing this blog post!) especially, feeling like a meaningful part of the fight for good adds to a sense of purpose and fulfillment. Given the choice to spend hard-earned cash with techno-authoritarians versus someone willing to take a stand for digital freedom: We’ll take option two, thanks.

When EFF wins, users win. Standing up for the ability to access, use, and build on technology means that a handful of powerful interests won’t have unfair advantages over everyone else. Whether it’s the fight for net neutrality, beating back patent trolls in court, protecting the right to repair and tinker, or pushing for decentralization and interoperability, EFF’s work can build a society that supports creativity and innovation; where established players aren’t allowed to silence the next generation of creators. Simply put: Digital rights are good for business!

The trust of EFF’s membership is based on 35 years of speaking truth to power, whether it’s on Capitol Hill or in Silicon Valley (and let’s be honest, if EFF were Big Tech astroturf, we’d drive nicer cars). EFF will always lead the work and invite supporters to join us, not the other way around. EFF will gratefully thank the companies that join us and offer employees and customers ways to get involved, too. EFF won’t take money from Google, Apple, Meta, Microsoft, Amazon, or Tesla, and we won’t endorse or sponsor a company, service, or product. Most importantly: EFF won’t alter the mission or the message to meet a donor’s wishes, no matter how much they’ve donated.

A few of the ways your team can support EFF:

  1. Cash donations
  2. Sponsoring an EFF event
  3. Providing an in-kind product or service
  4. Matching your employees’ gifts
  5. Boosting our messaging

Ready to join us in the fight for a better future? Visit eff.org/thanks.

Tierney Hamilton

Data Brokers Are Ignoring Privacy Law. We Deserve Better.

3 weeks 1 day ago

Of the many principles EFF fights for in consumer data privacy legislation, one of the most basic is a right to access the data companies have about you. It’s only fair. So many companies collect information about us without our knowledge or consent. We at least should have a way to find out what they purport to know about our lives.

Yet a recent paper from researchers at the University of California, Irvine found that, of 543 data brokers in California’s data broker registry at the time of publication, 43 percent failed to even respond to requests to access data.

43 percent of registered data brokers in California failed to even respond to requests to access data, one study shows.

Let’s stop there for a second. That’s more than four in ten companies from an industry that makes its money from collecting and selling our personal information, ignoring one of our most basic rights under the California Consumer Privacy Act: the right to know what information companies have about us.

Such failures violate the law. If this happens to you, you should file a complaint with the California Privacy Protection Agency (CPPA) and the California Attorney General's Office.

This is particularly galling because it’s not easy to file a request in the first place. As these researchers pointed out, there is no streamlined process for these time-consuming requests. People often won’t have the time or energy to see them through. Yet when someone does make the effort to file a request, some companies still feel just fine ignoring the law and their customers completely.

Four in ten data brokers are leaving requesters on read, in violation of the law and our privacy rights. That’s not a passing grade in anyone’s book.

Without consequences to back up our rights, as this research illustrates, many companies will bank on not getting caught, or factor weak slaps on the wrist into the cost of doing business.

This is why EFF fights for bills that have teeth. For example, we demand that people have the right to sue for privacy violations themselves—what’s known as a private right of action. Companies hate this form of enforcement, because it can cost them real money when they flout the law.

When the CCPA started out as a ballot initiative, it had a private right of action, including to enforce access requests. But when the legislature enacted the CCPA (in exchange for the initiative’s proponents removing it from the ballot), corporate interests killed the private right of action in negotiations.

We encourage the California Privacy Protection Agency and the California Attorney General’s Office, which both have the authority to bring these companies to task under the CCPA, to look into these findings. Moving forward, we all have to continue to fight for better laws, strengthen existing laws, and call on states to enforce the laws on their books to respect everyone’s privacy. Data brokers must face real consequences for brazenly flouting our privacy rights.

Hayley Tsukayama

No, the UK’s Online Safety Act Doesn’t Make Children Safer Online

3 weeks 4 days ago

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the latest misguided attempts to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country’s Online Safety Act (OSA). 

The legislation attempts to make the UK “the safest place” in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK—including social media, search engines, music sites, and adult content providers—enforce age checks to prevent children from seeing harmful content. Harmful content is defined in three categories, and failure to comply can result in fines of up to 10% of global revenue or court orders blocking services:

  1. Primary priority content that is harmful to children: 
    1. Pornographic content.
    2. Content which encourages, promotes or provides instructions for:
      1. suicide;
      2. self-harm; or 
      3. an eating disorder or behaviours associated with an eating disorder.
  2. Priority content that is harmful to children: 
    1. Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
    2. Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment; 
    3. Content that encourages, promotes or provides instructions for serious violence against a person; 
    4. Bullying content;
    5. Content which depicts serious violence against or graphically depicts serious injury to a person or animal (whether real or fictional); 
    6. Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and 
    7. Content that encourages the self-administration of harmful substances.
  3. Non-designated content that is harmful to children (NDC): 
    1. Content is NDC if it presents a material risk of significant harm to an appreciable number of children in the UK, provided that the risk of harm does not flow from any of the following:
      1. the content’s potential financial impact;
      2. the safety or quality of goods featured in the content; or
      3. the way in which a service featured in the content may be performed.

Online service providers must make a judgement about whether the content they host is harmful to children, and if so, address the risk by implementing a number of measures, which include, but are not limited to:

  1. Robust age checks: Services must use “highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.”

    To do this, all users on sites that host this content must verify their age, for example by uploading a form of ID like a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving permission for the age-check service to access information from their bank about whether they are over 18.

  2. Safer algorithms: Services “will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.”

  3. Effective moderation: All services “must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.”

Since these measures took effect in late July, social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content on their sites. Porn websites like Pornhub and YouPorn implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address for technology to analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also requiring users to submit face scans to third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters to try to enforce the UK legislation on U.S.-based companies such as the right-wing platform Gab.

The UK Must Do Better

The UK is not alone in pursuing such a misguided approach to protect children online: the U.S. Supreme Court recently paved the way for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week ruled that porn websites can check users’ ages; the European Commission is pushing forward with plans to test its age-verification app; and Australia’s ban on youth under the age of 16 accessing social media is likely to be implemented in December.

But the UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and excludes millions of people without a personal device or form of ID from accessing the internet.

And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures.

The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.

Paige Collings

TechEd Collab: Building Community in Arizona Around Tech Awareness

3 weeks 5 days ago

Earlier this year, EFF welcomed Technology Education Collaborative (TEC) into the Electronic Frontier Alliance (EFA). TEC empowers everyday people to become informed users of today's extraordinary technology, and helps them better understand the tech that surrounds them on a daily basis. TEC does this by hosting in-person, hands-on events, including right to repair workshops, privacy meetups, tech field trips, and demos. We got the chance to catch up with Connor Johnson, Chief Technology Officer of TEC, and speak with him about the work TEC is doing in the Greater Phoenix area:

Connor, tell us how Technology Education Collaborative got started, and about its mission.

TEC was started with the idea of creating a space where industry professionals, students, and the community at large could learn about technology together. We teamed up with Gateway Community College to build the Advanced Cyber Systems Lab. A lot of tech groups in Phoenix meet at varying locations because they can’t afford or find a dedicated space. TEC hosts community technology-focused groups at the Advanced Cyber Systems Lab, so they can have the proper equipment to work on and collaborate on their projects.

Speaking of projects, let's talk about some of the main priorities of TEC: right to repair, privacy, and cybersecurity. As the only right to repair hub in the greater Phoenix metro valley, what concerns do you see on the horizon?

One of our big concerns is that many companies have slowly shifted away from repairability toward a sense of convenience. We are thankful for the donations from iFixIt that allow people to use tools they might not otherwise know they need or be able to afford. Community members and IT professionals have come to use our anti-static benches to fix everything from TVs to 3D printers. We are also starting to host ‘Hardware Happy Hour’ so anyone can bring their hardware projects in and socialize with like-minded people.

How’s your privacy and cybersecurity work resonating with the community?

We have had a host of different speakers discuss the current state of privacy and how it can affect different individuals. It was also wonderful to have your Surveillance Litigation Director, Andrew Crocker, speak at our July edition of Privacy PIE. So many of the attendees were thrilled to be able to ask him questions and get clarification on current issues. Christina, CEO of TEC, has done a great job leading our Privacy PIE events and discussing the legal situation surrounding many privacy rights people take for granted. One of my favorite presentations covered privacy concerns with modern cars, where she touched on how the cameras are tied to car companies' systems and data collection.

TEC’s current goal is to focus on building a community that is not just limited to cybersecurity itself. One problem that we’ve noticed is that there are a lot of groups focused on security that don’t branch out into other fields of tech. Security affects all aspects of technology, which is why TEC has been branching out its efforts to other fields within tech like hardware and programming. A deeper understanding of the fundamentals can help us to build better systems from the ground up, rather than applying cybersecurity as an afterthought.

In the field of cybersecurity, we have been working on a project building a small business network. The idea behind this initiative is to allow small businesses to independently set up their own network that provides a good layer of security. Many shops either don’t have the money to afford a security-hardened network or don’t have the technical know-how to set one up. We hope this open-source project will allow people to set up the network themselves and give students a way to gain valuable work experience.

It’s awesome to hear of all the great things TEC is doing in Phoenix! How can people plug in and get involved?

TEC can always benefit from more volunteers or donations. Our goal is to build community, and we are happy to have anyone join us. All are welcome at the Advanced Cyber Systems Lab at Gateway Community College – Washington Campus, Monday through Thursday, 4 pm to 8 pm. Our website is www.techedcollab.org, and we’re on Facebook at www.facebook.com/techedcollab. People can also join our Discord server for some great discussions and updates on our upcoming events!

Christopher Vines

👮 Amazon Ring Is Back in the Mass Surveillance Game | EFFector 37.9

3 weeks 6 days ago

EFF is gearing up to beat the heat in Las Vegas for the summer security conferences! Before we make our journey to the Strip, we figured we'd get y'all up to speed with a new edition of EFFector.

This time we're covering an illegal mass surveillance scheme by the Sacramento Municipal Utility District, calling out dating apps for using intimate data—like sexual preferences or identity—to train AI, and explaining why we're backing the Wikimedia Foundation in its challenge to the UK’s Online Safety Act.

Don't forget to check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF Senior Policy Analyst Matthew Guariglia explains how Amazon Ring is cashing in on the rising tide of techno-authoritarianism. Listen now on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.9 - Amazon Ring Is Back in the Mass Surveillance Game

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero