California’s A.B. 412: A Bill That Could Crush Startups and Cement A Big Tech AI Monopoly

California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush small AI startups and developers while giving big tech firms even more power.

A Burden That Small Developers Can’t Bear

The AI landscape is in danger of being dominated by large companies with deep pockets. These big names are in the news almost daily. But they’re far from the only ones – there are dozens of AI companies with fewer than 10 employees trying to build something new in a particular niche. 

This bill demands that creators of any AI model–even a two-person company or a hobbyist tinkering with a small software build–identify copyrighted materials used in training. That requirement will be incredibly onerous, even if limited just to works registered with the U.S. Copyright Office. The registration system is a cumbersome beast at best: neither machine-readable nor accessible, it’s more like a card catalog than a database. It doesn’t offer enough information to identify all the authors of a work, much less help developers reliably match works in a training set to works in the system.

Even for major tech companies, meeting these new obligations would be a daunting task. For a small startup, such an impossible requirement could be a death sentence. If A.B. 412 becomes law, these smaller players will be forced to devote scarce resources to an unworkable compliance regime instead of focusing on development and innovation. The risk of lawsuits—potentially from copyright trolls—would discourage new startups from even attempting to enter the field.

AI Training Is Like Reading, and It’s Very Likely Fair Use

A.B. 412 starts from a premise that’s both untrue and harmful to the public interest: that reading, scraping or searching of open web content shouldn’t be allowed without payment. In reality, courts should, and we believe will, find that the great majority of this activity is fair use. 

It’s now a bedrock principle of internet law that some forms of copying content online are transformative, and thus legal fair use. That includes reproducing thumbnail images for image search, or snippets of text to search books.

The U.S. copyright system is meant to balance innovation with creator rights, and courts are still working through how copyright applies to AI training. In most of the AI cases, courts have yet to consider—let alone decide—how fair use applies. A.B. 412 jumps the gun, preempting this process and imposing a vague, overly broad standard that will do more harm than good.

Importantly, those key court cases are all federal. The U.S. Constitution makes it clear that copyright is governed by federal law, and A.B. 412 improperly attempts to impose state-level copyright regulations on an issue still in flux. 

A.B. 412 Is A Gift to Big Tech

The irony of A.B. 412 is that it won’t stop AI development—it will simply consolidate it in the hands of the largest corporations. Big tech firms already have the resources to navigate complex legal and regulatory environments, and they can afford to comply (or at least appear to comply) with A.B. 412’s burdensome requirements. Small developers, on the other hand, will either be forced out of the market or driven into partnerships where they lose their independence. The result will be less competition, fewer innovations, and a tech landscape even more dominated by a handful of massive companies.

If lawmakers are able to iron out some of the practical problems with A.B. 412 and pass some version of it, they may be able to force programmers to research, and effectively pay off, copyright owners before they even write a line of code. If that’s the outcome in California, Big Tech will not despair. They’ll celebrate. Only a few companies own large content libraries or can afford to license enough material to build a deep learning model. The possibilities for startups and small programmers will be so meager, and competition will be so limited, that profits for big incumbent companies will be locked in for a generation.

If you are a California resident and want to speak out about A.B. 412, you can find and contact your legislators through this website.

Joe Mullin

EFF Joins 7amleh Campaign to #ReconnectGaza

In times of conflict, the internet becomes more than just a tool—it is a lifeline, connecting those caught in chaos with the outside world. It carries voices that might otherwise be silenced, bearing witness to suffering and survival. Without internet access, communities become isolated, and the flow of critical information is disrupted, making an already dire situation even worse.

At this year’s RightsCon conference, hosted in Taiwan, Palestinian non-profit organization 7amleh, in collaboration with the Palestinian Digital Rights Coalition and supported by dozens of international organizations including EFF, launched #ReconnectGaza, a global campaign to rebuild Gaza’s telecommunications network and safeguard the right to communication as a fundamental human right.

The campaign comes on the back of more than 17 months of internet blackouts and the destruction of Gaza’s telecommunications infrastructure by the Israeli authorities. Estimates indicate that 75% of Gaza’s telecommunications infrastructure has been damaged, with 50% completely destroyed. This loss of connectivity has crippled essential services—preventing healthcare coordination, disrupting education, and isolating Palestinians from the digital economy. In response, there is an urgent and immediate need to deploy emergency solutions, such as eSIM cards, satellite internet access, and mobile communications hubs.

At the same time, there is an opportunity to rebuild towards a just and permanent solution with modern technologies that would enable reliable, high-speed connectivity that supports education, healthcare, and economic growth. The campaign calls for this as a paramount component of reconnecting Gaza, while also ensuring the safety and protection of telecommunications workers on the ground, who risk their lives to repair and maintain critical infrastructure.

Further, beyond responding to these immediate needs, 7amleh and the #ReconnectGaza campaign demand the establishment of an independent Palestinian ICT sector, free from external control, as a cornerstone of Gaza’s reconstruction and Palestine's digital sovereignty. Palestinians have been subject to Israeli internet controls since the Oslo Accords, which settled that Palestine should have its own telephone, radio, and TV networks, but handed over the details to a joint technical committee. Ending the deliberate isolation of the Palestinian people is critical to protecting fundamental human rights.

This is not the first time internet shutdowns have been weaponized as a tool for oppression. In 2012, Palestinians in Gaza were subject to frequent power outages and were forced to rely on generators and insecure dial-up connections for connectivity. More recently, since October 7, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications has the ability to cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world.

Access to internet infrastructure is essential—it enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. It is imperative that people’s access to the internet remains protected.

The restoration of telecommunications in Gaza is an urgent humanitarian need. Global stakeholders, including UN agencies, governments, and telecommunications companies, must act swiftly to ensure the restoration and modernization of Gaza’s telecommunications.

Jillian C. York

The Foilies 2025

Recognize the Worst in Government Transparency 

Co-written by MuckRock's Michael Morisy, Dillon Bergin, and Kelly Kauffman

The public's right to access government information is constantly under siege across the United States, from both sides of the political aisle. In Maryland, where Democrats hold majorities, the attorney general and state legislature are pushing a bill to allow agencies to reject public records requests that they consider "harassing." At the same time, President Donald Trump's administration has moved its most aggressive government reform effort–the Department of Government Efficiency, or DOGE–outside the reach of the Freedom of Information Act (FOIA), while also beginning the mass removal of public data sets.

One of the most powerful tools to fight back against bad governance is public ridicule. That's where we come in: Every year during Sunshine Week (March 16-22), the Electronic Frontier Foundation, MuckRock and AAN Publishers team up to publish The Foilies. This annual report—now a decade old—names and shames the most repugnant, absurd, and incompetent responses to public records requests under FOIA and state transparency laws.

Sometimes the good guys win. For example, last year we highlighted the Los Angeles Police Department for using the courts to retaliate against advocates and a journalist who had rightfully received and published official photographs of police officers. The happy ending (at least for transparency): LAPD has since lost the case, and the city paid the advocates $300,000 to cover their legal bills.

Here are this year's "winners." While they may not all pay up, at least we can make sure they get the negative publicity they're owed. 

The Exorbitant FOIA Fee of the Year: Rapides Parish School District

After a church distributed a religious tract at Lessie Moore Elementary School in Pineville, La., young students quickly dubbed it “the sex book” for its frank discussion of mature themes. Hirsh M. Joshi from the Freedom From Religion Foundation, a lawyer representing a parent, filed a request with the Rapides Parish School District to try to get some basic information: How much did the school coordinate with the church distributing the material? Did other parents complain? What was the internal reaction? Joshi was stunned when the school district responded with an initial estimate of $2 million to cover the cost of processing the request. After local media picked up the story and a bit of negotiating, the school ultimately waived the charges and responded with a mere nine pages of responsive material.

While Rapides Parish’s sky-high estimate ultimately took home the gold this year, there was fierce competition. The Massachusetts State Police wanted $176,431 just to review—and potentially not even release—materials about recruits who leave the state’s training program early. Back in Louisiana, the Jefferson Parish District Attorney’s office insisted on charging a grieving father more than $5,000 for records on the suspicious death of his own son.

The Now You See It, Now You Don’t Award: University of Wisconsin-Madison

Sports reporter Daniel Libit’s public records request is at the heart of a lawsuit that looks a lot like the Spider-Man pointing meme. In 2023, Libit filed the request for a contract between the University of Wisconsin and Altius Sports Partners, a firm that consults college athletic programs on payment strategies for college athletes ("Name, Image, Likeness" or NIL deals), after reading a university press release about the partnership. The university denied the request, claiming that Altius was actually contracted by the University of Wisconsin Foundation, a separate 501(c)(3). So, Libit asked the foundation for the contract. The foundation then denied the request, claiming it was exempt from Wisconsin’s open records laws. After the denial, Libit filed a lawsuit for the records, which was then dismissed, because the university and foundation argued that Libit had incorrectly asked for a contract between the university and Altius, as opposed to the foundation and Altius.

The foundation did produce a copy of the contract in the lawsuit, but the game of hiding the ball makes one thing clear, as Libit wrote after: “If it requires this kind of effort to get a relatively prosaic NIL consultant contract, imagine the lengths schools are willing to go to keep the really interesting stuff hidden.”

The Fudged Up Beyond All Recognition Award: Central Intelligence Agency 

A CIA official's grandma's fudge recipe was too secret for public consumption.

There are state secrets, and there are family secrets, and sometimes they mix … like a creamy, gooey confectionary.

After Mike Pompeo finished his first year as Trump's CIA director in 2017, investigative reporter Jason Leopold sent a FOIA request asking for all of the memos Pompeo sent to staff. Seven years later, the agency finally produced the records, including a "Merry Christmas and Happy New Year" message recounting the annual holiday reception and gingerbread competition, which was won by a Game of Thrones-themed entry. ("And good use of ice cream cones!" Pompeo wrote.) At the party, Pompeo handed out cards with his mom's "secret" recipe for fudge, and for those who couldn't make it, he also sent it out as an email attachment.

But the CIA redacted the whole thing, vaguely claiming it was protected from disclosure under federal law. This isn't the first time the federal government has protected Pompeo's culinary secrets: In 2021, the State Department redacted Pompeo's pizza toppings and favorite sandwich from emails.

The You Can't Handle the Truth Award: Virginia Gov. Glenn Youngkin

In Virginia, state officials have come under fire in the past few years for shielding records from the public under the broad use of a “working papers and correspondence” FOIA exemption. When a public records request came in for internal communications on the state’s Military Survivors and Dependents Education Program, which provides tuition-free college to spouses and children of military veterans killed or disabled as a result of their service, Gov. Glenn Youngkin’s office used this “working papers” exemption to reject the FOIA request.

The twist is that the request was made by Kayla Owen, a military spouse and a member of the governor’s own task force studying the program. Despite Owen’s attempts to correct the parameters of the request, Youngkin’s office made the final decision in July to withhold more than two folders’ worth of communications with officials who have been involved with policy discussions about the program.

The Courts Cloaked in Secrecy Award (Tie): Solano County Superior Court, Calif., and Washoe County District Court, Nev.

Courts are usually the last place the public can go to vindicate their rights to government records when agencies flout them. When agencies lock down records, courts usually provide the key to open them up.

Except in Vallejo, Calif., where a state trial court judge decided to lock his own courtroom during a public records lawsuit—a move that even Franz Kafka would have dismissed as too surreal and ironic. The suit filed by the American Civil Liberties Union sought a report detailing a disturbing ritual in which officers bent their badges to celebrate their on-duty killings of local residents.

When public access advocates filed an emergency motion to protest the court closure, the court denied it without even letting them in to argue their case. This was not just a bad look; it violated the California and U.S. constitutions, which guarantee public access to court proceedings and a public hearing prior to barring the courtroom doors.

Not to be outdone, a Nevada trial court judge has twice barred a local group from filming hearings concerning a public records lawsuit. The request sought records of an alleged domestic violence incident at the Reno city manager’s house. Despite the Nevada Supreme Court rebuking the judge for prohibiting cameras in her courtroom, she later barred the same group from filming another hearing. The transparency group continues to fight for camera access, but its persistence should not be necessary: The court should have let them record from the get-go.

The No Tech Support Award: National Security Agency

The NSA claimed it didn't have the obsolete tech to access a lecture by military computing pioneer Grace Hopper.

In 1982, Rear Adm. Grace Hopper (then a captain) presented a lecture to the National Security Agency entitled “Future Possibilities: Data, Hardware, Software, and People.” One can only imagine Hopper's disappointment if she had lived long enough to learn that in the future, the NSA would claim it was impossible for its people to access the recording of the talk.

Hopper is undoubtedly a major figure in the history of computing whose records and lectures are of undeniable historical value, and Michael Ravnitzky, a frequent FOIA requester and founder of Government Attic, requested this particular lecture back in 2021. Three years later, the NSA responded to tell him that it had no responsive documents.

Befuddled, Ravnitzky pointed out the lecture had been listed in the NSA’s own Television Center Catalogue. At that point, the agency copped to the actual issue. Yes, it had the record, but it was captured on AMPEX 1-inch open reel tapes, as was more common in the 1980s. Despite being a major intelligence agency with high-tech surveillance and communication capabilities, it claimed it could not find any way to access the recording.

Let’s unpack the multi-layered egregiousness of the NSA’s actions here. It took the agency three years to respond to this FOIA request. When it did, the NSA claimed that it had nothing responsive, which was a lie. But the most colossal failure was the NSA’s claim that it couldn’t find a way to make important moments from our history accessible to the public because of technical difficulties.

But leave it to librarians to put spies to shame: The National Archives stepped in to help, and now you can watch the lecture in two parts.


Can't get enough of The Foilies? Check out our decade in review and our archives!

Dave Maass

“Guardrails” Won’t Protect Nashville Residents From AI-Enabled Camera Networks

Nashville’s Metropolitan Council is one vote away from passing an ordinance that’s being branded as “guardrails” against the privacy problems that come with giving the police a connected camera system like Axon’s Fusus. But Nashville locals are right to be skeptical of just how much protection from mass surveillance products they can expect.  

"I am against these guardrails," council member Ginny Welsch told the Tennessean recently. "I think they're kind of a farce. I don't think there can be any guardrail when we are giving up our privacy and putting in a surveillance system." 

Likewise, Electronic Frontier Alliance member Lucy Parsons Labs has inveighed against Fusus and the supposed guardrails as a fix to legislators’ and residents’ concerns in a letter to the Metropolitan Council. 

While the ordinance doesn’t name the company specifically, it was introduced in response to privacy concerns over the city’s possible contract for Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. In particular, local opponents are concerned about data-sharing—a critical part of Fusus—that could impede the city’s ability to uphold its values against the criminalization of some residents, like undocumented immigrants and people seeking reproductive or gender-affirming care.

This technology product, which was acquired by the police surveillance giant Axon in 2024, facilitates two major functions for police:

  • With the click of a button—or the tap of an icon on a map—officers can get access to live camera footage from public and private cameras, including the police’s Axon body-worn cameras, that have been integrated into the Fusus network.
  • Data feeds from a variety of surveillance tools—like body-worn cameras, drones, gunshot detection, and the connected camera network—can be aggregated into a system that makes those streams quickly accessible and susceptible to further analysis by features marketed as “artificial intelligence.”

From 2022 through 2023, the Metropolitan Nashville Police Department (MNPD) had, unbeknownst to the public, already been using Fusus. When the contract came back under consideration, a public outcry and unanswered questions about the system led to its suspension, and the issue was deferred multiple times before the contract renewal was voted down late last year. Nashville council members determined that the Fusus system posed too great a threat to vulnerable groups that the council has sought to protect with city policies and resolutions, including pregnant residents, immigrants, and residents seeking gender-affirming care, among others. The state has criminalized some of the populations that the city of Nashville has passed ordinances to protect.

Unfortunately, the fight against the sprawling surveillance of Fusus continues. The city council is now making its final consideration of the aforementioned ordinance, which some of its members say will protect city residents in the event that the mayor and other Fusus fans are able to get a contract signed after all.

These so-called guardrails include:

  • restricting the MNPD from accessing private cameras or installing public safety cameras in locations “where there is a reasonable expectation of privacy”; 
  • prohibiting using face recognition to identify individuals in the connected camera system’s footage; 
  • policies addressing authorized access to and use of the connected camera system, including how officers will be trained, and how they will be disciplined for any violations of the policy;
  • quarterly audits of access to the connected camera system; 
  • mandatory inclusion of a clause in procurement contracts allowing for immediate termination should violations of the ordinance be identified; 
  • mandatory reporting to the mayor and the council about any violations of the ordinance, the policies, or other abuse of access to the camera network within seven days of the discovery. 

Here’s the thing: even if these limited “guardrails” are in place, the only true protection from the improper use of the AI-enabled Fusus system is to not use it at all. 

We’ve seen that when law enforcement has access to cameras, they will use them, even if there are clear regulations prohibiting those uses: 

  • Black residents of a subsidized housing development became the primary surveillance targets for police officers with Fusus access in Toledo, Ohio. 

Firms such as Fusus and its parent company Axon are pushing AI-driven features and databases with interjurisdictional access. Surveillance technology is bending toward a future where all of our data are being captured, including our movements by street cameras (like those that would be added to Fusus), our driving patterns by automated license plate readers (ALPR), our living habits by apps, and our actions online by web trackers, and then being combined, sold, and shared.

When Nashville first started its relationship with Fusus in 2022, the company featured only a few products, primarily focused on standardizing video feeds from different camera providers. 

Now, Fusus is aggressively leaning into artificial intelligence, claiming that its “AI on the Edge” feature is built into the initial capture phase and processes video as soon as it is taken. Even if the city bans the use of face recognition for the connected camera system, Fusus boasts that it can detect humans and objects and combine other characteristics to identify individuals, detect movements, and set notifications based on certain characteristics and behaviors. Marketing material claims that the system comes “pre-loaded with dozens of search and analysis variables and profiles that are ready for action,” including a “robust & growing AI library.” It’s unclear how these AI recognition options are generated, how they are vetted, if at all, or whether they can even be removed, as the ordinance would require.

A page from Fusus marketing materials, released through a public records request, featuring information on the artificial intelligence capabilities of its system

The proposed “guardrails” in Nashville are insufficient to address the danger posed by mass surveillance systems, and the city of Nashville shouldn’t assume that passing them protects its residents, tourists, and other visitors. Nashville residents and other advocacy groups have already raised concerns.

The only true way to protect Nashville’s residents against dragnet surveillance and overcriminalization is to block access to these invasive technologies altogether. Though this ordinance has passed its second reading, Nashville should not adopt Fusus or any other connected camera system, regardless of whether the ordinance is ultimately adopted. If Councilors care about protecting their constituents, they should hold the line against Fusus. 

Beryl Lipton

EFF to NSF: AI Action Plan Must Put People First

This past January, the new administration issued an executive order on Artificial Intelligence (AI), taking the place of the now-rescinded Biden-era order and calling for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and removing “engineered social agendas” from the industry. That action plan is now being developed for the president, and the National Science Foundation (NSF) is accepting public comments on it.

EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no secret and untested algorithms should decide who keeps their job or who is denied safe haven in the United States. Second, generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye on protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.

Government Automated Decision Making

US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies, but risks infringing the civil liberties of all people subject to these automated decisions.

These harms aren’t theoretical; we have already seen a move to adopt experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and use the results to make decisions about their continued employment.

Automating important decisions about people is reckless and dangerous. At best, these new AI tools are ineffective nonsense machines that require more labor to correct their inaccuracies; at worst, they produce irrational and discriminatory outcomes obscured by the black-box nature of the technology.

Instead, the adoption of such tools must be done with a robust public notice-and-comment practice as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil, and identifies when the use of such AI tools is inappropriate or harmful.

Additionally, the AI Action Plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models and help ensure a fairer, more scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.

Don’t Enable Powerful Gatekeepers

Spurred by the general anxiety about Generative AI, lawmakers have drafted sweeping regulations based on speculation, and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we warned against back in 2023.

For example, bills like NO FAKES and NO AI Fraud expand copyright laws to favor corporate giants over everyone else’s expression. NO FAKES even includes a scheme for a DMCA-like notice takedown process, long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements like watermarking that are riddled with practical points of failure.

Among these dubious solutions is the growing prominence of AI licensing schemes which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to give their bully.

This is the wrong approach. Reaching for easy solutions like expanding copyright hurts everyone, particularly smaller artists, researchers, and businesses that cannot compete with the big gatekeepers of industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies.

People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.

You can read our comments here.

Rory Mir

EFF Thanks Fastly for Donated Tools to Help Keep Our Website Secure

EFF’s most important platform for welcoming everyone to join us in our fight for a better digital future is our website, eff.org. We thank Fastly for their generous in-kind contribution of services helping keep EFF’s website online.

Eff.org was first registered in 1990, just three months after the organization was founded, and long before the web was an essential part of daily life. Our website and the fight for digital rights grew rapidly alongside each other. However, along with rising threats to our freedoms online, threats to our site have also grown.

It takes a village to keep eff.org online in 2025. Every day our staff work tirelessly to protect the site from everything from DDoS attacks to automated hacking attempts, and everything in between. As AI has taken off, so have crawlers and bots that scrape content to train LLMs, sometimes without respecting rate limits we’ve asked them to observe. Newly donated security add-ons from Fastly help us automate DDoS prevention and rate limiting, preventing our servers from getting overloaded when misbehaving visitors abuse our sites. Fastly also caches the content from our site around the globe, meaning that visitors from all over the world can access eff.org and our other sites quickly and easily.
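For readers curious what “rate limiting” means in practice, here is a minimal conceptual sketch, in Python, of the token-bucket idea behind it. This is an illustration only, not Fastly’s actual configuration or API: each visitor gets a small budget of requests that refills over time, and traffic beyond that budget is turned away before it can overload the origin servers.

    import time

    class TokenBucket:
        """Toy token-bucket limiter: each client gets a burst budget of
        `capacity` requests, refilled continuously at `rate` tokens per second."""

        def __init__(self, rate: float, capacity: int) -> None:
            self.rate = rate
            self.capacity = capacity
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill based on elapsed time, never exceeding the bucket's capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True   # request is served
            return False      # request is throttled before reaching the origin

    # One bucket per client address; a CDN applies the same idea at its edge nodes.
    buckets: dict[str, TokenBucket] = {}

    def handle_request(client_ip: str) -> int:
        bucket = buckets.setdefault(client_ip, TokenBucket(rate=5.0, capacity=20))
        return 200 if bucket.allow() else 429  # HTTP 429: Too Many Requests

A production service layers refinements on top of this (distributed counters, allowlists for well-behaved crawlers, and caching so most requests never reach the origin at all), but the core budget-and-refill logic is the same.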

EFF is member-supported by people who share our vision for a better digital future. We thank Fastly for showing their support for our mission to ensure that technology supports freedom, justice, and innovation for all people of the world with an in-kind gift of their full suite of services.

Allison Morris

EFFecting Change: Is There Hope for Social Media?

Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech. 

EFFecting Change Livestream Series:
Is There Hope for Social Media?
Thursday, March 20th
12:00 PM - 1:00 PM Pacific - Check Local Time
This event is LIVE and FREE!

Users are frustrated with legacy social media companies. Is it possible to effectively build the kinds of communities we want online while avoiding the pitfalls that have driven people away?

Join our panel featuring EFF Civil Liberties Director David Greene, EFF Director for International Freedom of Expression Jillian York, Mastodon's Felix Hlatky, Bluesky's Emily Liu, and Spill's Kenya Parham as they explore the future of free expression online and why social media might still be worth saving.

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Melissa Srago

EFF Joins AllOut’s Campaign Calling for Meta to Stop Hate Speech Against LGBTQ+ Community

In January, Meta made targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups. More specifically, Meta’s hateful conduct policy now contains the following text:

People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech. 

The revision of this policy, timed to Trump’s second election, demonstrates that the company is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging LGBTQ+ rights. For example, the revised policy removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics, such as sexual identity.

In response, LGBTQ+ rights organization AllOut gathered social justice groups and civil society organizations, including EFF, to demand that Meta immediately reverse the policy changes. By normalizing such speech, Meta risks increasing hate and discrimination against LGBTQ+ people on Facebook, Instagram and Threads. 

The campaign is supported by the following partners: All Out, Global Project Against Hate and Extremism (GPAHE), Electronic Frontier Foundation (EFF), EDRi - European Digital Rights, Bits of Freedom, SUPERRR Lab, Danes je nov dan, Corporación Caribe Afirmativo, Fundación Polari, Asociación Red Nacional de Consejeros, Consejeras y Consejeres de Paz LGBTIQ+, La Junta Marica, Asociación por las Infancias Transgénero, Coletivo LGBTQIAPN+ Somar, Coletivo Viveração, ADT - Associação da Diversidade Tabuleirense, Casa Marielle Franco Brasil, Articulação Brasileira de Gays - ARTGAY, Centro de Defesa dos Direitos da Criança e do Adolescente Padre Marcos Passerini - CDMP, Agência Ambiental Pick-upau, Núcleo Ypykuéra, Kurytiba Metropole, and ITTC - Instituto Terra, Trabalho e Cidadania.

Sign the AllOut petition (external link) and tell Meta: Stop hate speech against LGBT+ people!

If Meta truly values freedom of expression, we urge it to redirect its focus to empowering some of its most marginalized speakers, rather than empowering only their detractors and oppressive voices.

Paige Collings

In Memoriam: Mark Klein, AT&T Whistleblower Who Revealed NSA Mass Spying

EFF is deeply saddened to learn of the passing of Mark Klein, a bona fide hero who risked civil liability and criminal prosecution to help expose a massive spying program that violated the rights of millions of Americans.

Mark didn’t set out to change the world. For 22 years, he was a telecommunications technician for AT&T, most of that in San Francisco. But he always had a strong sense of right and wrong and a commitment to privacy.

When the New York Times reported in late 2005 that the NSA was engaging in spying inside the U.S., Mark realized that he had witnessed how it was happening. He also realized that the President was not telling Americans the truth about the program. And, though newly retired, he knew that he had to do something. He showed up at EFF’s front door in early 2006 with a simple question: “Do you folks care about privacy?” 

We did. And what Mark told us changed everything. Through his work, Mark had learned that the National Security Agency (NSA) had installed a secret, secure room at AT&T’s central office in San Francisco, called Room 641A. Mark was assigned to connect circuits carrying Internet data to optical “splitters” that sat just outside of the secret NSA room but were hardwired into it. Those splitters—as well as similar ones in cities around the U.S.—made a copy of all data going through those circuits and delivered it into the secret room.

A photo of the NSA-controlled 'secret room' in the AT&T facility in San Francisco (Credit: Mark Klein)

Mark not only saw how it worked; he had the documents to prove it. He brought us over a hundred pages of authenticated AT&T schematic diagrams and tables. Mark also shared this information with major media outlets, numerous Congressional staffers, and at least two senators personally. One, Senator Chris Dodd, took the floor of the Senate to acknowledge Mark as the great American hero he was.

We used Mark’s evidence to bring two lawsuits against the NSA spying that he uncovered. The first was Hepting v. AT&T and the second was Jewel v. NSA. Mark also came with us to Washington D.C. to push for an end to the spying and demand accountability for it happening in secret for so many years.  He wrote an account of his experience called Wiring Up the Big Brother Machine . . . And Fighting It.

Archival EFF graphic promoting Mark Klein's DC tour

Mark stood up and told the truth at great personal risk to himself and his family. AT&T threatened to sue him, although it wisely decided not to do so. While we were able to use his evidence to make some change, both EFF and Mark were ultimately let down by Congress and the Courts, which have refused to take the steps necessary to end the mass spying even after Edward Snowden provided even more evidence of it in 2013. 

But Mark certainly inspired all of us at EFF, and he helped inspire and inform hundreds of thousands of ordinary Americans to demand an end to illegal mass surveillance. While we have not yet seen the success in ending the spying that we all have hoped for, his bravery has helped usher in numerous reforms so far.

And the fight is not over. Section 702, the law that now authorizes the continued surveillance Mark first revealed, expires in early 2026. EFF and others will keep pushing for further reforms and, ultimately, for the illegal spying to end entirely.

Mark’s legacy lives on in our continuing fights to reform surveillance and honor the Fourth Amendment’s promise of protecting personal privacy. We are forever grateful to him for having the courage to stand up and will do our best to honor that legacy by continuing the fight. 

Cindy Cohn

EFF Stands with Perkins Coie and the Rule of Law

As a legal organization that has fought in court to defend the rights of technology users for almost 35 years, including numerous legal challenges to federal government overreach, Electronic Frontier Foundation unequivocally supports Perkins Coie’s challenge to the Trump administration’s shocking, vindictive, and unconstitutional Executive Order. In punishing the law firm for its zealous advocacy on behalf of its clients, the order offends the First Amendment, the rule of law, and the legal profession broadly in numerous ways. We commend Perkins Coie (and its legal representatives) for fighting back. 

Lawsuits against the federal government are a vital component of the system of checks and balances that undergirds American democracy. They reflect a confidence in both the judiciary to decide such matters fairly and justly, and the executive to abide by the court’s determination. They are a backstop against autocracy and a sustaining feature of American jurisprudence since Marbury v. Madison, 5 U.S. 137 (1803).  

The Executive Order, if enforced, would upend that system and set an appalling precedent: Law firms that represent clients adverse to a given administration can and will be punished for doing their jobs.  

This is a fundamental abuse of executive power. 

The constitutional problems are legion, but here are a few:  

  • The First Amendment bars the government from “distorting the legal system by altering the traditional role of attorneys” by controlling what legal arguments lawyers can make. See Legal Services Corp. v. Velasquez, 531 U.S. 533, 544 (2001). “An informed independent judiciary presumes an informed, independent bar.” Id. at 545. 
  • The Executive Order is also unconstitutional retaliation for Perkins Coie’s engaging in constitutionally protected speech during the course of representing its clients. See Nieves v. Bartlett, 587 U.S. 391, 398 (2019).  
  • And the Executive Order functions as an illegal loyalty oath for the entire legal profession, conditioning access to federal courthouses or client relationships with government contractors on fealty to the executive branch, including forswearing protected speech in opposition to it. That condition is blatantly unlawful: The government cannot require that those it works with or hires embrace certain political beliefs or promise that they have “not engaged, or will not engage, in protected speech activities such as … criticizing institutions of government.” See Cole v. Richardson, 405 U.S. 676, 680 (1972).

Civil liberties advocates such as EFF rely on the rule of law and access to the courts to vindicate their clients’, and the public’s, fundamental rights. From this vantage point, we can see that this Executive Order is nothing less than an attack on the foundational principles of American democracy.  

The Executive Order must be swiftly nullified by the court and uniformly vilified by the entire legal profession.

Click here for the number to listen in on a hearing on a temporary restraining order, scheduled for 2pm ET/11am PT on Wednesday, March 12.

David Greene

Anchorage Police Department: AI-Generated Police Reports Don’t Save Time

The Anchorage Police Department (APD) has concluded its three-month trial of Axon’s Draft One, an AI system that uses audio from body-worn cameras to write narrative police reports for officers—and has decided not to retain the technology. Axon touts this technology as “force multiplying,” claiming it cuts in half the amount of time officers usually spend writing reports—but APD disagrees.

The APD deputy chief told Alaska Public Media, “We were hoping that it would be providing significant time savings for our officers, but we did not find that to be the case.” The deputy chief flagged that the time it took officers to review reports cut into the time savings from generating them. The software translates the audio into a narrative, and officers are expected to read through the report carefully to edit it, add details, and verify it for authenticity. Moreover, because the technology relies on audio from body-worn cameras, it often misses visual components of the story that the officer then has to add themselves. “So if they saw something but didn’t say it, of course, the body cam isn’t going to know that,” the deputy chief continued.

The Anchorage Police Department is not alone in finding that Draft One is not a time-saving device for officers. A new study of police using AI to write police reports, which specifically tested Axon’s Draft One, found that AI-assisted report-writing offered no real time-savings advantage.

This news comes on the heels of policymakers and prosecutors casting doubt on the utility or accuracy of AI-created police reports. In Utah, a pending state bill seeks to make it mandatory for departments to disclose when reports have been written by AI. In King County, Washington, the Prosecuting Attorney’s Office has directed officers not to use any AI tools to write narrative reports.

In an era where companies that sell technology to police departments profit handsomely and have marketing teams to match, it can seem like there is an endless stream of press releases and local news stories about police acquiring some new and supposedly revolutionary piece of tech. But what we don’t usually get to see is how many times departments decide that technology is costly, flawed, or lacks utility. As the future of AI-generated police reports rightly remains hotly contested, it’s important to pierce the veil of corporate propaganda and see when and if police departments actually find these costly bits of tech useless or impractical.

Matthew Guariglia

Hawaii Takes a Stand for Privacy: HCR 144/HR 138 Calls for Investigation of Crisis Pregnancy Centers

In a bold push for medical privacy, Hawaii's House of Representatives has introduced HCR 144/HR 138, a resolution calling for the Hawaii Attorney General to investigate whether crisis pregnancy centers (CPCs) are violating patient privacy laws. 

Often referred to as “fake clinics” or “unregulated pregnancy centers” (UPCs), these are non-medical centers that provide free pregnancy tests and counseling, but typically do not offer essential reproductive care like abortion or contraception. In Hawaii, these centers outnumber actual clinics offering abortion and reproductive healthcare. In fact, the first CPC in the United States was opened in Hawaii in 1967 by Robert Pearson, who then founded the Pearson Foundation, a St. Louis-based organization that assists local groups in setting up unregulated crisis pregnancy centers.

EFF has called on state AGs to investigate CPCs across the country. In particular, we are concerned that many centers have misrepresented their privacy practices, including suggesting that patient information is protected by HIPAA when it may not be. In January, EFF contacted attorneys general in Florida, Texas, Arkansas, and Missouri asking them to identify and hold accountable CPCs that engage in deceptive practices.

Rep. Kapela’s resolution specifically references EFF’s call on state Attorneys General. It reads:

“WHEREAS, the Electronic Frontiers Foundation, an international digital rights nonprofit that promotes internet civil liberties, has called on states to investigate whether crisis pregnancy centers are complying with patient privacy regulations with regard to the retention and use of collected patient data.” 

HCR 144/HR 138 underscores the need to ensure that healthcare providers handle personal data, particularly medical data, securely and transparently. Along with EFF’s letters to state AGs, the resolution refers to the growing body of research on the topic, such as:

  • A 2024 Healthcare Management Associates Study showed that CPCs received $400 million in federal funding between 2017 and 2023, with little oversight from regulators.
  • A Health Affairs article from November 2024 titled "Addressing the HIPAA Blind Spot for Crisis Pregnancy Centers" noted that crisis pregnancy centers often invoke the Health Insurance Portability and Accountability Act (HIPAA) to collect personal information from clients.

Regardless of one's stance on reproductive healthcare, there is one principle that should be universally accepted: the right to privacy. As HCR 144/HR 138 moves forward, it is imperative that Hawaii's Attorney General investigate whether CPCs are complying with privacy regulations and take action, if necessary, to protect the privacy rights of individuals seeking reproductive healthcare in Hawaii. 

Without comprehensive privacy laws that offer individuals a private right of action, state authorities must be the front line in safeguarding the privacy of their constituents. As we continue to advocate for stronger privacy protections nationwide, we encourage lawmakers and advocates in other states to follow Hawaii's lead and take action to protect the medical privacy rights of all of their constituents.

Rindala Alajaji

Ten Years of The Foilies

A look back at the games governments played to avoid transparency

In the year 2015, we witnessed the launch of OpenAI, a debate over the color of a dress going viral, and a Supreme Court decision that same-sex couples have the right to get married. It was also the year that the Electronic Frontier Foundation (EFF) first published The Foilies, an annual report that hands out tongue-in-cheek "awards" to government agencies and officials that respond outrageously when a member of the public tries to access public records through the Freedom of Information Act (FOIA) or similar laws.

A lot has changed over the last decade, but one thing that hasn't is the steady flow of attempts by authorities to avoid their legal and ethical obligations to be open and accountable. Sometimes, these cases are intentional, but just as often, they are due to incompetence or straight-up half-assedness.

Over the years, EFF has teamed up with MuckRock to document and ridicule these FOIA fails and transparency trip-ups. And through a partnership with AAN Publishers, we have named-and-shamed the culprits in weekly newspapers and on indie news sites across the United States in celebration of Sunshine Week, an annual event raising awareness of the role access to public records plays in a democracy.  

This year, we reflect on the most absurd and frustrating winners from the last 10 years as we prepare for the next decade, which may be even more terrible for government transparency.

The Most Infuriating FOIA Fee: U.S. Department of Defense (2016 Winner)

Assessing huge fee estimates is one way agencies discourage FOIA requesters.

Under FOIA, federal agencies are able to charge "reasonable" fees for producing copies of records. But sometimes agencies fabricate enormous price tags to pressure the requester to drop the query.

In 2015, Martin Peck asked the U.S. Department of Defense (DOD) to disclose the number of "HotPlug” devices (tools used to preserve data on seized computers) it had purchased. The DOD said it would cost $660 million and 15 million labor hours (over 1,712 years), because its document system wasn't searchable by keyword, and staff would have to comb through 30 million contracts by hand. 

Runners-up: 

City of Seattle (2019 Winner): City officials quoted a member of the public $33 million for metadata for every email sent in 2017, but ultimately reduced the fee to $40.

Rochester (Michigan) Community Schools District (2023 Winner): A group of parents critical of the district's remote-learning plan requested records to see if the district was spying on their social media. One parent was told they would have to cough up $18,641,345 for the records, because the district would have to sift through every email. 

Willacy County (Texas) Sheriff's Office (2016 Winner): When the Houston Chronicle asked for crime data, the sheriff sent them an itemized invoice that included $98.40 worth of Wite-Out–the equivalent of 55 bottles–to redact 1,016 pages of records.

The Most Ridiculous Redaction: Federal Bureau of Investigation (2015 Winner)

Ain't no party like a REDACTED FBI party!

Brad Heath, who in 2014 was a reporter at USA Today, got a tip that a shady figure had possibly attended an FBI retirement party. So he filed a request for the guest list and pictures taken at the event. In response, the FBI sent a series of surreal photos of the attendees, hugging, toasting, and posing awkwardly, but all with polygonal redactions covering their faces like some sort of mutant, Minecraft family reunion.

Runner-up:

U.S. Southern Command (2023 Winner): Investigative journalist Jason Leopold obtained scans of paintings by detainees at Guantanamo Bay, which were heavily redacted under the claim that the art would disclose law enforcement information that could "reasonably be expected to risk circumvention of the law."

The Most Reprehensible Reprisal Against a Requester: White Castle, Louisiana (2017 Winner)

WBRZ Reporter Chris Nakamoto was cuffed for trying to obtain records in White Castle, Louisiana. Credit: WBRZ-TV

Chris Nakamoto, at the time a reporter for WBRZ, filed a public records request to probe the White Castle mayor's salary. But when he went down to check on some of the missing records, he was handcuffed, placed in a holding cell, and charged with the crime of "remaining after being forbidden.” He was summoned to appear before the "Mayor's Court" in a judicial proceeding presided over by none other than the same mayor he was investigating. The charges were dropped two months later. 

Runners-up:

Jack White (2015 Winner): One of the rare non-government Foilies winners, the White Stripes guitarist verbally abused University of Oklahoma student journalists and announced he wouldn't play at the school anymore. The reason? The student newspaper, OU Daily, obtained and published White's contract for a campus performance, which included his no-longer-secret guacamole recipe, a bowl of which was demanded in his rider.

Richlands, Virginia (2024 Winner): Resident Laura Mollo used public records laws to investigate problems with the 911 system and, in response, experienced intense harassment from the city and its contractors, including the police pulling her over and the city appointing a special prosecutor to investigate her. On separate occasions, Mollo even says she found her mailbox filled with spaghetti and manure.

Worst Federal Agency of the Decade: Federal Bureau of Investigation 

Bashing the FBI has come back into vogue among certain partisan circles in recent years, but we've been slamming the feds long before it was trendy.

The agency received eight Foilies over the last decade, more than any other entity, but the FBI's hostility towards FOIA goes back much further. In 2021, the Cato Institute uncovered records showing that, since at least 1989, the FBI had been spying on the National Security Archive, a non-profit watchdog that keeps an eye on the intelligence community. The FBI’s methods included both physical and electronic surveillance, and the records show the FBI specifically cited the organization's "tenacity" in using FOIA.

Cato's Patrick G. Eddington reported it took 11 months for the FBI to produce those records, but that's actually relatively fast for the agency. We highlighted a 2009 FOIA request that the FBI took 12 years to fulfill: Bruce Alpert of the Times-Picayune had asked for records regarding the corruption case of U.S. Rep. William Jefferson, but by the time he received the 84 pages in 2021, the reporter had retired. Similarly, when George Washington University professor and documentary filmmaker Nina Seavey asked the FBI for records related to surveillance of antiwar and civil rights activists, the FBI told her it would take 17 years to provide the documents. When the agency launched an online system for accepting FOIA requests, it somehow made the process even more difficult.

The FBI was at its worst when it was attempting to use non-disclosure agreements to keep local law enforcement agencies from responding to public records requests regarding the use of cell phone surveillance technologies called cell-site simulators, or "stingrays." The agency even went so far as to threaten agencies that release technical information to media organizations with up to 20 years in prison and a $1 million fine, claiming it would be a violation of the Arms Export Control Act.

But you don't have to take our word for it: Even Micky Dolenz of The Monkees had to sue the FBI to get records on how agents collected intelligence on the 1960s band.

Worst Local Jurisdiction of the Decade: Chicago, Illinois

Some agencies, like the city of Chicago, treat FOIA requests like a plague.

Over the last decade, The Foilies have called out officials at all levels of government and in every part of the country (and even in several other countries), but time and time again, one city keeps demonstrating special antagonism to the idea of freedom of information: the Windy City.

In fact, the most ridiculous justification for ignoring transparency obligations we ever encountered was proudly championed by now-former Mayor Lori Lightfoot during the COVID-19 lockdown in April 2020. She offered a bogus choice to Chicagoans: the city could either process public records requests or provide pandemic response, falsely claiming that answering these requests would pull epidemiologists off the job. According to the Chicago Tribune, she implied that responding to FOIA requests would result in people having to "bury another grandmother." She even invoked the story of Passover, claiming that the "angel of death is right here in our midst every single day" as a reason to suspend FOIA deadlines.

If we drill down on Chicago, there's one department that seems to take particular pleasure in screwing the public: the Chicago Police Department (CPD). In 2021, CPD was nominated so many times (for withholding records of search warrants, a list of names of police officers, and body-worn camera footage from a botched raid) that we just threw up our hands and named them "The Hardest Department to FOIA" of the year.

In one particularly nasty case, CPD had mistakenly raided the home of an innocent woman and handcuffed her while she was naked and did not allow her to dress. Later, the woman filed a FOIA request for the body-worn camera footage and had to sue to get it. But CPD didn't leave it there: the city's lawyers tried to block a TV station from airing the video and then sought sanctions against the woman's attorney. 

If you thought these were some doozies, check out The Foilies 2025 to read the beginning of a new decade's worth of FOIA horror stories.

Dave Maass

Right to Repair: A Prime Example of Grassroots Advocacy

6 days 6 hours ago

Good old-fashioned grassroots advocacy is one of the best tools we have right now for making a positive change for our civil liberties online. When we unite toward a shared goal, anything is possible, and the right to repair movement is a prime example of this.

In July of last year, EFF and many other organizations celebrated Repair Independence Day to commemorate both California and Minnesota enacting strong right to repair laws. And, very recently, it was reported that all 50 states have introduced right to repair legislation. Now, not every state has passed laws yet, but this signals an important milestone for the movement—we want to fix the stuff we own!

And this movement has had an impact beyond specific right to repair legislation. In a similar vein, just a few months ago, the U.S. Copyright Office ruled that users can legally repair commercial food preparation equipment without breaking copyright law. Device manufacturers themselves are also starting to feel the pressure and are creating repair-friendly programs.

Years of hard work have made it possible for us to celebrate the right-to-repair movement time and time again. It's a group effort—folks like iFixit, who provide repair guides and repairability scores; the Repair Association, who’ve helped lead the movement in state legislatures; and of course, people like you who contact local representatives, are the reason this movement has gained so much momentum.

Fix Copyright! Also available in kids' sizes.

But there's still work that can be done. If you’re itching to fix your devices, you can read up on what your state’s repair laws mean for you. You can educate your friends, family, and colleagues when they’re frustrated at how expensive device repair is. And, of course, you can show your support for the right to repair movement with EFF’s latest member t-shirt. 

We live in a very tumultuous time, so it’s important to celebrate the victories, and it’s equally important to remember that your voice and support can bring about positive change that you want to see.  

Christian Romero

EFF Sends Letter to the Senate Judiciary Committee Opposing the STOP CSAM Act

6 days 7 hours ago

On Monday, March 10, EFF sent a letter to the Senate Judiciary Committee opposing the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment Act (STOP CSAM Act) ahead of a committee hearing on the bill. 

EFF opposed the original and amended versions of this bill in the previous Congress, and we are concerned to see the Committee moving to consider the same flawed ideas in the current Congress. 

At its core, STOP CSAM endangers encrypted messages – jeopardizing the privacy, security, and free speech of every American and fundamentally altering our online communications. In the digital world, end-to-end encryption is our best chance to maintain both individual and national security. Particularly in the wake of the major breach of telecom systems in October 2024 from Salt Typhoon, a sophisticated Chinese-government backed hacking group, legislators should focus on bolstering encryption, not weakening it. In fact, in response to this breach, a top U.S. cybersecurity chief said “encryption is your friend.”  

Given its significant problems and potential vast impact on internet users, we urge the Committee to reject this bill.

Maddie Daly

RightsCon Community Calls for Urgent Release of Alaa Abd El-Fattah

1 week ago

Last month saw digital rights organizations and social justice groups head to Taiwan for this year's RightsCon conference on human rights in the digital age. During the conference, one prominent message was spoken loud and clear: Alaa Abd El-Fattah must be immediately released from illegal detention in Egypt.


During the RightsCon opening ceremony, Access Now’s Executive Director, Alejandro Mayoral Baños, affirmed the urgency of Alaa’s situation in detention and called for Alaa’s freedom. The RightsCon community was also addressed by Alaa’s mother, mathematician Laila Soueif, who has been on hunger strike in London for 158 days. In a video highlighting Alaa’s work with digital rights and his role in this community, she stated: “As Alaa’s mother, I thank you for your solidarity and ask you not to give up until Alaa is out of prison.” Laila was admitted to hospital the next day with dangerously low blood sugar, blood pressure and sodium levels.

RightsCon participants gather in solidarity with the #FreeAlaa campaign

The calls to #FreeAlaa and save Laila were again reaffirmed during the closing ceremony in a keynote by Sara Alsherif, Migrant Digital Justice Programme Manager at Open Rights Group and close friend of Alaa. Referencing Alaa’s early work as a digital activist, Alsherif said: “He understood that the fight for digital rights is at the core of the struggle for human rights and democracy.” She closed by reminding the hundreds-strong audience that “Alaa could be any one of us … Please do for him what you would want us to do for you if you were in his position.”

During RightsCon, with Laila still in hospital, calls for UK Prime Minister Starmer to get on the phone with Egyptian President Sisi reached a fever pitch, and on 28 February, one day after the closing ceremony, the UK government issued a press release affirming that Alaa’s case had been discussed, with Starmer pressing for Alaa’s freedom. 

Alaa should have been released on September 29, after serving a five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to recognise Alaa’s British citizenship.

Laila Soueif has been on hunger strike for more than five months while she and the rest of Alaa’s family have worked in concert with various advocacy groups to engage the British government in securing Alaa’s release. On December 12, she also began protesting daily outside the Foreign Office and has since been joined by numerous MPs and public figures. Laila remains in hospital, but following Starmer’s call with Sisi she agreed to take glucose, and she says she is ready to end her hunger strike if progress is made.

Laila Soueif and family meeting with UK Prime Minister Keir Starmer

As of March 6, Laila has moved to a partial hunger strike of 300 calories per day citing “hope that Alaa’s case might move.” However, the family has learned that Alaa himself began a hunger strike on March 1 in prison after hearing that his mother had been hospitalized. Laila has said that without fast movement on Alaa’s case she will return to a total hunger strike. Alaa’s sister Sanaa, who was previously jailed by the regime on bogus charges, visited Alaa on March 8.

If you’re based in the UK, we encourage you to write to your MP to urgently advocate for Alaa’s release (external link): https://freealaa.net/message-mp 

Supporters everywhere can share Alaa’s plight and Laila’s story on social media using the hashtags #FreeAlaa and #SaveLaila. Additionally, the campaign’s website (external link) offers additional actions, including purchasing Alaa’s book, and participating in a one-day solidarity hunger strike. You can also sign up for campaign updates by e-mail.

Every second counts, and time is running out. Keir Starmer and the British government must do everything they can to ensure Alaa’s immediate and unconditional release.

Jillian C. York

First Porn, Now Skin Cream? ‘Age Verification’ Bills Are Out of Control

1 week 3 days ago

I’m old enough to remember when age verification bills were pitched as a way to ‘save the kids from porn’ and shield them from other vague dangers lurking in the digital world (like…“the transgender”). We have long cautioned about the dangers of these laws, and pointed out why they are likely to fail. While they may be well-intentioned, the growing proliferation of age verification schemes poses serious risks to all of our digital freedoms.

Fast forward a few years, and these laws have morphed into something else entirely—unfortunately, something we expected. What started as a misguided attempt to protect minors from "explicit" content online has spiraled into a tangled mess of privacy-invasive surveillance schemes affecting skincare products, dating apps, and even diet pills, threatening everyone’s right to privacy.

Age Verification Laws: A Backdoor to Surveillance

Age verification laws do far more than ‘protect children online’—they require the  creation of a system that collects vast amounts of personal information from everyone. Instead of making the internet safer for children, these laws force all users—regardless of age—to verify their identity just to access basic content or products. This isn't a mistake; it's a deliberate strategy. As one sponsor of age verification bills in Alabama admitted, "I knew the tough nut to crack that social media would be, so I said, ‘Take first one bite at it through pornography, and the next session, once that got passed, then go and work on the social media issue.’” In other words, they recognized that targeting porn would be an easier way to introduce these age verification systems, knowing it would be more emotionally charged and easier to pass. This is just the beginning of a broader surveillance system disguised as a safety measure.

This alarming trend is already clear, with the growing creep of age verification bills filed in the first month of the 2025-2026 state legislative session. Consider these three bills: 

  1. Skincare: AB-728 in California
    Age verification just hit the skincare aisle! California’s AB-728 mandates age verification for anyone purchasing skin care products or cosmetics that contain certain chemicals like Vitamin A or alpha hydroxy acids. On the surface, this may seem harmless—who doesn't want to ensure that minors are safe from harmful chemicals? But the real issue lies in the invasive surveillance it mandates. A person simply trying to buy face cream could be forced to submit sensitive personal data through “an age verification system,” creating a system of constant tracking and data collection for a product that should be innocuous.
  2. Dating Apps: A3323 in New York
    Match made in heaven? Not without your government-issued ID. New York’s A3323 bill mandates that online dating services verify users’ age, identity, and location before allowing access to their platforms. The bill's sweeping requirements introduce serious privacy concerns for all users. By forcing users to provide sensitive personal information—such as government-issued IDs and location data—the bill creates significant risks that this data could be misused, sold, or exposed through data breaches. 
  3. Dieting products: SB 5622 in Washington State
    Shed your privacy before you shed those pounds! Washington State’s SB 5622 takes aim at diet pills and dietary supplements by restricting their sale to anyone under 18. While the bill’s intention is to protect young people from potentially harmful dieting products, it misses the mark by overlooking the massive privacy risks associated with the age verification process for everyone else. To enforce this restriction, the bill requires intrusive personal data collection for purchasing diet pills in person or online, opening the door for sensitive information to be exploited.
The Problem with Age Verification: No Solution Is Safe

Let’s be clear: no method of age verification is both privacy-protective and entirely accurate. The methods also don’t fall on a neat spectrum of “more safe” to “less safe.” Instead, every form of age verification is better described as “dangerous in one way” or “dangerous in a different way.” These systems are inherently flawed, and none come without trade-offs. Additionally, they continue to burden adults who just want to browse the internet or buy everyday items without being subjected to mass data collection.

For example, when an age verification system requires users to submit government-issued identification or a scan of their face, it collects a staggering amount of sensitive, often immutable, biometric or other personal data—jeopardizing internet users’ privacy and security. Systems that rely on credit card information, phone numbers, or other third-party material  similarly amass troves of personal data. This data is just as susceptible to being misused as any other data, creating vulnerabilities for identity theft and data breaches. These issues are not just theoretical: age verification companies can be—and already have been—hacked. These are real, ongoing concerns for anyone who values their privacy. 

We must push back against age verification bills that create surveillance systems and undermine our civil liberties, and we must be clear-eyed about the dangers posed by these expanding age verification laws. While the intent to protect children makes sense, the unintended consequence is a massive erosion of privacy, security, and free expression online for everyone. Rather than focusing on restrictive age verification systems, lawmakers should explore better, less invasive ways to protect everyone online—methods that don’t place the entire burden of risk on individuals or threaten their fundamental rights. 

EFF will continue to advocate for digital privacy, security, and free expression. We urge legislators to prioritize solutions that uphold these essential values, ensuring that the internet remains a space for learning, connecting, and creating—without the constant threat of surveillance or censorship. Whether you’re buying a face cream, swiping on a dating app, or browsing for a bottle of diet pills, age verification laws undermine that vision, and we must do better.

Rindala Alajaji

Simple Phish Bait: EFF Is Not Investigating Your Albion Online Forum Account

1 week 4 days ago

We recently learned that users of the Albion Online gaming forum have received direct messages purporting to be from us. That message, which leverages the fear of an account ban, is a phishing attempt.

If you’re an Albion Online forum user and receive a message that claims to be from “the EFF team,” don’t click the link, and be sure to use the in-forum reporting tool to report the message and the user who sent it to the moderators.

A screenshot of the message shared by a user of the forums.

The message itself has some of the usual hallmarks of a phishing attempt, including tactics like creating a sense of fear that your account may be suspended, leveraging the name of a reputable group, and further raising your heart rate with claims that the message needs a quick response. The goal appears to be to get users to download a PDF file designed to deliver malware. That PDF even uses our branding and typefaces (mostly) correctly.

The Hunt team published a full walkthrough of this malware and what it does. The PDF is a trojan (malware disguised as a non-malicious file or program) with an embedded script that calls out to an attacker server. The attacker server then sends a “stage 2” payload that installs itself onto the user’s device. The attack structure was identified as the Pyramid C2 framework, and in this case the malware targets the Windows operating system. It takes a variety of actions, such as writing and modifying files on the victim’s drive. Most worrisome, it appears to connect the user’s device to a malicious botnet and may be able to access the “VaultSvc” service, which securely stores user credentials such as usernames and passwords.

File-based IoCs:
act-7wbq8j3peso0qc1.pages[.]dev/819768.pdf
Hash: 4674dec0a36530544d79aa9815f2ce6545781466ac21ae3563e77755307e0020
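
If you think you may have already downloaded a file like this, one low-risk check is to compare its SHA-256 hash against the hash above without ever opening it. The short Python sketch below is illustrative only and is not part of the Hunt team's analysis; the file path is whatever you pass on the command line, and a non-matching hash only rules out this specific sample, not malware in general.

import hashlib
import sys

# SHA-256 of the malicious PDF listed in the IoCs above.
MALICIOUS_SHA256 = "4674dec0a36530544d79aa9815f2ce6545781466ac21ae3563e77755307e0020"

def sha256_of(path):
    # Hash the file in chunks so large files don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    if sha256_of(sys.argv[1]) == MALICIOUS_SHA256:
        print("Match: this is the known-malicious PDF. Delete it and do not open it.")
    else:
        print("No match for this IoC (which alone does not prove the file is safe).")

The same hash can also be looked up on a service like VirusTotal without running anything locally.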

This incident is a good reminder that often, the best ways to avoid malware and phishing attempts are the same: avoid clicking strange links in unsolicited emails, keep your computer’s software updated, and always scrutinize messages claiming to come from computer support or fraud detection. If a message seems suspect, try to verify its authenticity through other channels—in this case, poking around on the forum and asking other users before clicking on anything. If you ever absolutely must open a file, do so in an online document reader, like Google Drive, or try sending the link through a tool like VirusTotal, but try to avoid opening suspicious files whenever possible.

For more information to help protect yourself, check out our guides for protecting yourself from malware and avoiding phishing attacks.

Alexis Hancock

Trump Calls On Congress To Pass The “Take It Down” Act—So He Can Censor His Critics

1 week 5 days ago

We've opposed the Take It Down Act because it could be easily manipulated to take down lawful content that powerful people simply don't like. Last night, President Trump demonstrated he has a similar view on the bill. He wants to sign the bill into law, then use it to remove content about — him. And he won't be the only powerful person to do so. 

Here’s what Trump said to a joint session of Congress:    

The Senate just passed the Take It Down Act…. Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody. 



Video courtesy C-SPAN.

The Take It Down Act is an overbroad, poorly drafted bill that would create a powerful system to pressure removal of internet posts, with essentially no safeguards. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike. There are no penalties for applying very broad, or even farcical, definitions of what constitutes NCII and then demanding that it be removed.

take action

TELL CONGRESS: "Take It Down" Has No Real Safeguards

This Bill Will Punish Critics, and The President Wants It Passed Right Now 

Congress should believe Trump when he says he would use the Take It Down Act simply because he's "treated badly," despite the fact that this is not the intention of the bill. There is nothing in the law, as written, to stop anyone—especially those with significant resources—from misusing the notice-and-takedown system to remove speech that criticizes them or that they disagree with.  

Trump has frequently targeted platforms and speakers carrying entirely legal speech that is critical of him, both as an elected official and as a private citizen. He has filed frivolous lawsuits against media defendants that threaten to silence critics and draw scarce resources away from important reporting work.

Now that Trump has issued a call to action for the bill in his remarks, House Republicans may fast-track the bill into a spending package as soon as next week. Non-consensual intimate imagery is a serious problem that deserves serious consideration, not a hastily drafted, overbroad bill that sweeps in legal, protected speech.

How The Take It Down Act Could Silence People 

A few weeks ago, a "deepfake" video of President Trump and Elon Musk was displayed across various monitors in the Housing and Urban Development office. The video was subsequently shared on various platforms. While most people wouldn't consider this video, which displayed faked footage of Trump kissing Elon Musk's feet, "nonconsensual intimate imagery," the takedown provision of the bill applies to an “identifiable individual” engaged in “sexually explicit conduct.” This definition leaves much room for interpretation, and nudity or graphic displays are not necessarily required.  

Moreover, there are no penalties whatsoever to dissuade a requester from simply insisting that content is NCII. Apps and websites only have 48 hours to remove content once they receive a request, which means they won’t be able to verify claims. Especially if the requester is an elected official with the power to start an investigation or prosecution, what website would stand up to such a request?  

The House Must Not Pass This Dangerous Bill 

Congress should focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime that is bound to be abused. Take It Down would likely lead to the use of often-inaccurate automated filters that are infamous for flagging legal content, from fair-use commentary to news reporting. It will threaten encrypted services, which may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces.   

Protecting victims of NCII is a legitimate goal. But good intentions alone are not enough to make good policy. Tell your Member of Congress to oppose censorship and to oppose H.R.633. 

take action

Tell the House to stop "Take It Down"

Jason Kelley

Meet Rayhunter: A New Open Source Tool from EFF to Detect Cellular Spying

1 week 6 days ago

At EFF we spend a lot of time thinking about Street Level Surveillance technologies—the technologies used by police and other authorities to spy on you while you are going about your everyday life—such as automated license plate readers, facial recognition, surveillance camera networks, and cell-site simulators (CSS). Rayhunter is a new open source tool we’ve created that runs off an affordable mobile hotspot that we hope empowers everyone, regardless of technical skill, to help search out CSS around the world. 

CSS (also known as Stingrays or IMSI catchers) are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower.

CSS operate by conducting a general search of all cell phones within the device’s radius. Law enforcement uses CSS to pinpoint the location of phones, often with greater accuracy than other techniques such as cell-site location information (CSLI), and without needing to involve the phone company at all. CSS can also log the International Mobile Subscriber Identifiers (IMSI numbers) unique to each SIM card, or the hardware serial numbers (IMEIs), of all of the mobile devices within a given area. Some CSS may have advanced features allowing law enforcement to intercept communications in some circumstances.

What makes CSS especially interesting, as compared to other street-level surveillance, is that so little is known about how commercial CSS work. We don’t fully know what capabilities they have or what exploits in the phone network they take advantage of to ensnare and spy on our phones, though we have some ideas.

We also know very little about how cell-site simulators are deployed in the US and around the world. There is no strong evidence either way about whether CSS are commonly being used in the US to spy on First Amendment-protected activities such as protests, communication between journalists and sources, or religious gatherings. There is some evidence—much of it circumstantial—that CSS have been used in the US to spy on protests. There is also evidence that CSS are used somewhat extensively by US law enforcement, spyware operators, and scammers. We know even less about how CSS are being used in other countries, though it's a safe bet that law enforcement there uses them as well.

Many of these gaps in our knowledge are due to a lack of solid, empirical evidence about the function and usage of these devices. Police departments are resistant to releasing logs of their use, even when such logs are kept. The companies that manufacture CSS are unwilling to divulge details of how they work.

Until now, to detect the presence of CSS, researchers and users have had to rely either on Android apps on rooted phones or on sophisticated and expensive software-defined radio rigs. Previous solutions have also focused on attacks on the legacy 2G cellular network, which is almost entirely shut down in the U.S. Seeking to learn from and improve on previous techniques for CSS detection, we have developed a better, cheaper alternative that works natively on the modern 4G network.

Introducing Rayhunter

To fill these gaps in our knowledge, we have created an open source project called Rayhunter.1 It is built to run on an Orbic mobile hotspot (Amazon, Ebay), which is available for $20 or less at the time of this writing. We have tried to make Rayhunter as easy as possible to install and use, regardless of your level of technical knowledge. We hope that activists, journalists, and others will run these devices all over the world and help us collect data about the usage and capabilities of cell-site simulators (please see our legal disclaimer).

Rayhunter works by intercepting, storing, and analyzing the control traffic (but not user traffic, such as web requests) between the mobile hotspot Rayhunter runs on and the cell tower to which it’s connected. Rayhunter analyzes the traffic in real time and looks for suspicious events, which could include unusual requests like the base station (cell tower) trying to downgrade your connection to 2G, which is vulnerable to further attacks, or the base station requesting your IMSI under suspicious circumstances.
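
To make that concrete, here is a rough sketch, in Python, of the kind of heuristic described above. It is illustrative only: the message fields and event labels are hypothetical stand-ins, not Rayhunter's actual data structures or detection code, which operate on real 4G control-plane traffic.

from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified view of one decoded control-plane message.
@dataclass
class ControlMessage:
    offered_rat: str         # radio technology the network steers the device toward, e.g. "LTE" or "GSM"
    requests_imsi: bool      # does the network ask the device to reveal its permanent IMSI?
    is_initial_attach: bool  # an identity request can be legitimate on a first attach

def suspicious_event(msg: ControlMessage) -> Optional[str]:
    # Heuristic 1: a downgrade to 2G, which lacks modern protections and is
    # a common prelude to further attacks.
    if msg.offered_rat == "GSM":
        return "connection downgraded to 2G"
    # Heuristic 2: the network demands the permanent IMSI outside the narrow
    # cases where that is expected.
    if msg.requests_imsi and not msg.is_initial_attach:
        return "IMSI requested under suspicious circumstances"
    return None

In the real tool, checks along these lines run continuously against live traffic, and a flagged event is what triggers the user-facing alert described below.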

Rayhunter notifies the user when something suspicious happens and makes it easy to access those logs for further review, allowing users to take appropriate action to protect themselves, such as turning off their phone and advising other people in the area to do the same. The user can also download the logs (in PCAP format) to send to an expert for further review. 

The default Rayhunter user interface is very simple: a green (or blue in colorblind mode) line at the top of the screen lets the user know that Rayhunter is running and nothing suspicious has occurred. If that line turns red, it means that Rayhunter has logged a suspicious event. When that happens the user can connect to the device's WiFi access point and check a web interface to find out more information or download the logs. 


Rayhunter in action

Installing Rayhunter is relatively simple. After buying the necessary hardware, you’ll need to download the latest release package, unzip the file, plug the device into your computer, and then run an install script for either Mac or Linux (we do not support Windows as an installation platform at this time).

We have a few different goals with this project. An overarching goal is to determine conclusively if CSS are used to surveil free expression such as protests or religious gatherings, and if so, how often it’s occurring. We’d like to collect empirical data (through network traffic captures, i.e. PCAPs) about what exploits CSS are actually using in the wild so the community of cellular security researchers can build better defenses. We also hope to get a clearer picture of the extent of CSS usage outside of the U.S., especially in countries that do not have legally enshrined free speech protections.

Once we have gathered this data, we hope we can help folks more accurately engage in threat modeling about the risks of cell-site simulators, and avoid the fear, uncertainty, and doubt that comes from a lack of knowledge. We hope that any data we do find will be useful to those who are fighting through legal process or legislative policy to rein in CSS use where they live. 

If you’re interested in running Rayhunter for yourself, pick up an Orbic hotspot (Amazon, Ebay), install Rayhunter, check out the project's Frequently Asked Questions, and help us collect data about how IMSI catchers operate! Together we can find out how cell site simulators are being used, and protect ourselves and our communities from this form of surveillance.

Legal disclaimer: Use Rayhunter at your own risk. We believe running this program does not currently violate any laws or regulations in the United States. However, we are not responsible for civil or criminal liability resulting from the use of this software. If you are located outside of the US, please consult with an attorney in your country to help you assess the legal risks of running this program.

  • 1. A note on the name: Rayhunter is so named because Stingray is a brand name for cell-site simulators that has become a common term for the technology. One of the only natural predators of the stingray in the wild is the orca, some of which hunt stingrays for pleasure using a technique called wavehunting. Because we like orcas, because we don’t like stingray technology (though the animals themselves are great!), and because it was the only name not already trademarked, we chose Rayhunter.
Cooper Quintin