Victory! End-to-End Encrypted RCS Comes to Apple and Android Chats

3 hours 4 minutes ago

This week, Apple released iOS 26.5, an update that supports end-to-end encryption for Rich Communication Services (RCS), meaning conversations between Android and iPhone will soon be encrypted in the default chat apps. This has been a long time coming, and is a welcome delivery on a promise both Google and Apple made.

With this update, conversations that take place between Apple’s Messages app and Google Messages on Android will be end-to-end encrypted by default, as long as the carrier supports both RCS and encrypted messages (you can find a list of carriers here). RCS messages are a replacement for SMS, and in 2024 Apple started supporting the standard, making for a marked improvement in the quality of images and other media shared between Android phones and iPhones.

Now, those conversations can also benefit from the increased privacy and security that end-to-end encryption offers, making it so neither Google, Apple, nor the cellular carriers have access to the contents of messages. This feature comes courtesy of both Apple and Google supporting the GSMA RCS Universal Profile 3.0, which implements the Messaging Layer Security protocol for encryption. Metadata will likely still be collected and stored for these conversations, making alternatives like Signal still a better option for many conversations. Likewise, if you back up those conversations to the cloud, they may be stored unencrypted unless you enable Advanced Data Protection on iOS (Google Messages end-to-end encrypts the text of messages in backups, but not the media, so we’d like to see a similar offering as ADP on Android). Still, this is a significant step forward for the privacy of millions of conversations worldwide.
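The practical effect of end-to-end encryption is easy to demonstrate: the relay in the middle only ever handles ciphertext. The sketch below is a toy illustration of that principle (a hash-based stream cipher with a made-up shared key), not the actual MLS protocol, which additionally handles group key agreement, forward secrecy, and much more:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream derived by hashing the key with a counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the message with the keystream; decryption is the same operation."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt

# The endpoints share a key; the carrier in the middle only sees ciphertext.
shared_key = secrets.token_bytes(32)
message = b"meet at 6pm"
ciphertext = encrypt(shared_key, message)          # what the network relays and stores
assert ciphertext != message                       # unreadable without the key
assert decrypt(shared_key, ciphertext) == message  # the recipient recovers it
```

The point the toy makes is the same one MLS guarantees at scale: whoever carries or stores the ciphertext learns nothing about the message contents without the key held only by the endpoints.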

End-to-end encrypted RCS messaging is still marked as beta on Apple devices, likely because the rollout depends on carrier support as well as on the Android phone running the most recent version of Google Messages.

It might take some time before you get this feature in your chats, and until you do, remember that those conversations are not protected with end-to-end encryption. But once everyone in the conversation is on the right software version and carrier support is implemented, you will see a lock icon and the word “Encrypted” at the top of any conversation you have over RCS.

We applaud Apple and Google for getting this across the finish line and Encrypting It Already! More companies should take these sorts of difficult but necessary steps to protect the privacy of our conversations and our data.

Thorin Klosowski

EFF Launches New Offline Campaign for Saudi Wikipedian Osama Khalid

3 hours 12 minutes ago

Osama Khalid was just twelve years old when he began contributing to Wikipedia Arabic. In the height of the blogging era, he became a prolific blogger, publishing writings on his home country of Saudi Arabia, meetups he attended, and his opinions and observations about open source technology and freedom of expression. He advocated for internet freedom, contributed time and translations to various projects—including EFF’s HTTPS Everywhere—and was a thoughtful presence at the conferences he attended around the world…all while training to become a pediatrician.

In July of 2020, he was detained amid a wave of arbitrary arrests carried out by the Saudi authorities during the Covid-19 lockdown and initially given a five-year prison sentence. That sentence was later increased on appeal to 32 years, then reduced in 2023 to 25 years, and again to 14 years this past September. In a joint letter that we signed on to in April, the Saudi human rights organization ALQST, which has been leading the campaign for Osama’s release, wrote: “The huge discrepancy between sentences handed down at different stages in the case underscores the arbitrary manner in which sentencing is carried out in the Saudi judicial system.”

So, what was his “crime”? Sharing information online that conflicted with official narratives. Osama’s Wikipedia contributions included pages on critical human rights issues in Saudi Arabia, including the treatment of women’s rights activist Loujain al-Hathloul (herself an EFF client) and Saudi Arabia’s infamous al-Ha’ir prison. His blog, which has since been taken offline, included articles such as one criticizing government plans for the surveillance of encrypted platforms.

Over the years, we’ve campaigned for the release of a number of individuals imprisoned for their speech. Our contributions to the campaigns of Ola Bini, the Swedish software developer who has been targeted by the government of Ecuador for the past seven years, and Alaa Abd El Fattah, have had real impact. These cases are reminders that attacks on free expression are rarely confined to borders: governments around the world continue to use vague cybercrime laws, national security claims, and politically motivated prosecutions to silence critics, technologists, journalists, and activists.

Supporting these two—and others we’ve highlighted in our Offline project—has never been about defending only individuals. It has also been about defending the principle that writing code, sharing ideas, criticizing governments, and organizing online should not be treated as crimes. Public pressure, international solidarity, legal advocacy, and sustained campaigning can shift the political cost of repression—and, in some cases, help secure meaningful protections for those targeted.

That’s why we’re highlighting Osama’s case and will continue to work with partners including ALQST to advocate for his release. Osama Khalid, like so many human rights defenders, journalists, and internet users detained by the Saudi government, deserves to be free.

Jillian C. York

A Hacker’s Guide to Circumventing Internet Shutdowns

4 hours 8 minutes ago

Internet shutdowns are devastating for human rights. When people are disconnected from the internet and digital services, it impacts every aspect of their lives—from accessing essential information to seeking medical care and communicating with loved ones, both inside the country and abroad. On January 8, 2026, for example, the government of Iran shut down internet communications for the entire country as a rebellion threatened to topple the authoritarian government. The government then proceeded to execute at least 656 dissidents over the next three months, though the actual number could be much higher. That’s part of the point: shutdowns often precede government acts of violence.

Iran’s shutdown was hardly an isolated incident. Earlier this month, the U.S. military invaded Venezuela and kidnapped the Venezuelan president shortly after U.S. cyber forces shut down all internet access and power grids for the capital city of Caracas. India routinely shuts off internet access in the Kashmir region, and Syria has shut down internet communications as many as 73 times, most recently in 2025. Even the UK recently had a localized temporary internet shutdown. At the time of this writing there are 14 ongoing internet shutdowns worldwide.

Government shutdowns aren’t the only reason an entire region or country might lose internet access. Hurricanes, earthquakes, and wildfires can take out internet connections in many regions of the world, and such disasters will only become more frequent as climate change ramps up. They can completely disable the communications infrastructure relied upon by victims, their families, first responders, and disaster relief efforts. Having an alternate way to communicate in such times can save lives.


One way to limit the impact of such shutdowns is to prepare in advance by setting up systems and structures for circumvention and resiliency.

To keep people connected during internet shutdowns and blackouts, communication networks must be operational before and after the disaster or shutdown. To be effective, they must be widespread so that people can get access to them reliably, and they must be usable by a majority of the community. And any viable solution must be accessible and sustainable on a community level, not just to people with vast financial resources or technical knowledge. You shouldn’t have to be a tech wizard to be able to communicate with your neighbors!

Radios

There are many ways for a community to build its own disaster-resilient communications. Radios, for example, are cheap, decentralized, and resilient. Many people with moderate technical skill have set up Meshtastic repeaters. Meshtastic uses a common unlicensed radio spectrum and a technology called LoRa to provide peer-to-peer decentralized communications with people in your neighborhood or city. When you buy a Meshtastic device (cheap ones cost around $20), you can link it to your phone and send text messages to people in your area without ever touching the telephone network or the internet. Messages are delivered from person to person over public radio waves.
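The mesh idea behind this can be sketched as flooding with a hop limit: each node rebroadcasts a message the first time it hears it, so texts can hop through intermediate devices to reach people out of direct radio range. The simulation below is a toy illustration of that routing idea (the node names and three-hop default are made up), not the real Meshtastic protocol:

```python
from collections import deque

def flood(links: dict[str, set[str]], source: str, hop_limit: int = 3) -> set[str]:
    """Return the set of nodes a message reaches via naive flooding.

    Each node rebroadcasts a message the first time it hears it,
    decrementing the hop count, until the hop limit is exhausted.
    """
    reached = {source}
    queue = deque([(source, hop_limit)])
    while queue:
        node, hops = queue.popleft()
        if hops == 0:
            continue
        for neighbor in links[node]:
            if neighbor not in reached:   # drop duplicates, as real nodes do
                reached.add(neighbor)
                queue.append((neighbor, hops - 1))
    return reached

# A small neighborhood: ana can't hear caro directly, but ben relays for her.
links = {
    "ana":  {"ben"},
    "ben":  {"ana", "caro"},
    "caro": {"ben", "dev"},
    "dev":  {"caro"},
}
print(sorted(flood(links, "ana")))  # every node is reachable within three hops
```

The hop limit is the key tradeoff: it keeps a dense mesh from drowning in rebroadcasts, at the cost of capping how far a message can travel.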

There is also amateur radio, also known as ham radio, which has been used in disaster communications for decades. Ham radio requires a license, but allows you to communicate much farther than Meshtastic, using repeaters or even bouncing signals off the ionosphere to talk to people on the other side of the planet, or even on the International Space Station. It is even possible to access the internet over ham radio.

Peer-to-peer messaging apps 

Another option for communication during an internet shutdown is peer-to-peer messaging apps. One such project, called Briar, uses the Bluetooth functionality on phones to route messages from device to device until they reach their destination, even when there is no internet. However, Briar faces the same problems many mesh projects do: almost nobody has the app installed, and it’s difficult to use. If a mesh chat app isn’t already widely installed before an internet shutdown, it’s going to be even harder to get people to install it en masse once the shutdown starts.

A similar effort called bitchat has recently gained some attention. Bitchat is a peer-to-peer chat system that routes over Nostr, Tor, and Bluetooth. It is unfortunately tainted in many people’s eyes by being a project of former Twitter CEO Jack Dorsey, but it is open source and runs on both Android and iOS. It was used with some success in Iran during the latest internet shutdown.

Another option is Delta Chat, which uses PGP for encryption and email for routing, while still being much simpler to use than either technology. Delta Chat is highly regarded in Iran for its ability to route a message through even the tiniest sliver of email access.
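The routing trick is easy to picture: the client encrypts the chat message, wraps it in an ordinary email, and hands it to whatever mail server is still reachable. Here is a minimal sketch of the wrapping step using Python’s standard library; the addresses, subject line, and hex-encoded ciphertext are placeholders, and real Delta Chat uses OpenPGP rather than this toy encoding:

```python
from email.message import EmailMessage

def wrap_chat_message(sender: str, recipient: str, ciphertext: bytes) -> EmailMessage:
    """Package an already-encrypted chat message as an ordinary email,
    so that any surviving email route can carry it."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "chat message"    # placeholder; the contents stay in the body
    msg.set_content(ciphertext.hex())  # placeholder encoding, not real PGP/MIME
    return msg

# Hypothetical addresses; msg.as_string() is what an SMTP server would relay.
msg = wrap_chat_message("alice@example.org", "bob@example.net", b"\x01\x02")
```

Because the transport is plain email, the message benefits from email’s decades of store-and-forward resilience: any server along the way can hold it until a route opens up.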

Satellite internet 

Satellite internet services such as Starlink reach the internet through a dish that communicates with satellites in orbit. With no wires and no physical connection to ground infrastructure, satellite internet is harder for a government to shut down unilaterally, and it has been used in many cases to circumvent internet shutdowns, with people sharing bandwidth with their neighbors. Unfortunately, when the satellites are owned by tech oligarchs, as Starlink is by Elon Musk, or by allied governments, the owners may willingly shut down the network anyway.

Dreaming of a better future

Ultimately, an app that is already widely used would be the best option for shutdown-resistant communication. Imagine if WhatsApp or Signal could fall back to mesh networking over Bluetooth or Wi-Fi. Even better, imagine if our phones all had LoRa built in so we could have more effective mesh networks! What if our phones all had a connection to a satellite constellation run by an international coalition of hackers? We can dream of a better world, and we can build it.

We can’t rely on tech oligarchs to save us, especially when these same companies and governments are the ones to sever our access to the internet and telecommunications. This is why it's important to set up communication mechanisms before a disaster happens. 

As hackers, it's important for us to build the tools and infrastructure of decentralized communication, to help people learn how to use them, and to set up networks before disaster strikes. Get together with others in your city and start setting up resilient off-grid networks and building community now.

Before you download or use any of the tools mentioned in this guide, check with a lawyer in your jurisdiction and make sure you understand what legal risks you might be taking on.

A previous version of this article appeared in the Spring 2026 issue of 2600 magazine.

Cooper Quintin

Canada’s Bill C-22 Is a Repackaged Version of Last Year’s Surveillance Nightmare

23 hours 34 minutes ago

Last year, the Canadian government pushed Bill C-2, which would erode Canadian digital rights in the name of “border security.” The bill was so bad it didn’t even make it to committee because of the backlash from the privacy community. Now, the spring’s worst sequel, Bill C-22, aka the Lawful Access Act, is trying again.

As with most sequels, Bill C-22 makes some tweaks to problematic elements, but largely retains the same problems. The bill forces digital services, which could include telecoms, messaging apps, and more, to record and retain metadata for a full year, and expands information sharing with foreign governments, including the United States. Metadata can reveal a lot about who you communicate with, where you go, and when you do so. Expanding the collection of metadata would require companies to store even more information about their users than they already do, creating a tempting target for bad actors.

Worst of all, Bill C-22 erodes the privacy of millions by providing a mechanism for the Minister of Public Safety to demand companies create a backdoor to their services to provide law enforcement access to data, as long as these mandates don’t introduce a “systemic vulnerability.” These widespread surveillance backdoors would likely facilitate even more data breaches than we see already. The bill also bans companies from even revealing the existence of these orders publicly.

The definitions of both “systemic vulnerabilities” and “encryption” are not clear enough in C-22, leaving wiggle room for the government to demand that companies circumvent encryption. And the overbroad definitions in the bill can include apps as well as operating systems. Canadian officials have made it clear they believe it’s possible to add surveillance without introducing systemic vulnerabilities, which is just not true. Surveillance of encrypted communications is fundamentally a systemic vulnerability.

This resembles what happened in the UK last year, when the government demanded that Apple implement this type of backdoor into its optional Advanced Data Protection feature, which then forced Apple to revoke the feature for its UK users instead of complying with the request. To this day, UK users still do not have access to this powerful, privacy-protective feature that provides stronger protections for data stored in iCloud. Both Meta and Apple are concerned that C-22 would give the Canadian government similar powers, and both companies have come out against the bill. The U.S. House Judiciary and Foreign Affairs committees also sent a joint letter to Canada’s Minister of Public Safety highlighting the concern around backdoors into encrypted systems.

The dangers of these sorts of backdoors are not theoretical. In 2024, the Salt Typhoon hack took advantage of a system built by Internet Service Providers to give law enforcement access to user data. When you build these systems, hackers will come.

Canadians deserve strong privacy protections, transparency into how companies handle user data, and clear safeguards around encrypted data. Bill C-22 provides none of that, instead reaching further into the digital pockets of tech companies to build broad lawful access mechanisms.

Thorin Klosowski

EFF to Fourth Circuit: Electronic Device Searches at the Border Require a Warrant

23 hours 40 minutes ago

EFF, along with the national ACLU, the ACLU affiliates in Maryland, North Carolina, South Carolina, and Virginia, and the National Association of Criminal Defense Lawyers (NACDL) filed an amicus brief in the U.S. Court of Appeals for the Fourth Circuit urging the court to require a warrant for border searches of electronic devices under the Fourth Amendment, an argument EFF has been making in the courts and Congress for nearly a decade. The Fourth Circuit heard oral arguments on May 8. The Knight Institute at Columbia University and Reporters Committee for Freedom of the Press also filed a helpful brief focusing on the First Amendment implications of border searches of electronic devices.

The case, U.S. v. Belmonte Cardozo, involves a U.S. citizen whose cell phone was manually searched after he arrived at Dulles airport near Washington, D.C., following a trip to Bolivia. He had been on the government’s radar prior to his international trip and had been flagged for secondary inspection. Border officers found child sexual abuse material (CSAM) on his phone, and he was later arrested and criminally charged.

The district court denied the defendant’s motion to suppress the images and other data obtained from the warrantless search of his cell phone. He was ultimately convicted on child pornography and sexual exploitation charges because he had used social media to entice minors to send him sexually explicit photos of themselves.

The number of warrantless device searches at the border is only increasing, along with the significant invasion of privacy they represent. In Fiscal Year 2025, U.S. Customs and Border Protection (CBP) conducted 55,318 device searches, both manual (“basic”) and forensic (“advanced”).

A manual search involves a border officer tapping or mousing around a device. A forensic search involves connecting another device to the traveler’s device and using software to extract and analyze the data, creating a detailed report of the device owner’s activities and communications. However, both search methods are highly privacy-invasive, as border officers can access the same data, which can reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In our amicus brief, we argued that the Fourth Circuit should adopt the same legal standard for both manual and forensic searches, and that standard should be a warrant supported by probable cause and issued by a neutral judge. The highly personal nature of the information found on electronic devices is why there should not be different legal standards for different methods of search, and why a judge should determine whether the government has provided credible preliminary evidence that there’s a likelihood that further evidence will be found on the device indicating wrongdoing by the specific traveler.

Moreover, we argued that “the process of getting a warrant is not unduly burdensome,” and that “getting a warrant would not impede the efficient processing of travelers. If border officers have probable cause to search a device, they may retain it and let the traveler continue on their way, then get a search warrant. Or, where there is truly no time to go to a judge, the exigent circumstances exception may apply on a case-by-case basis.”

The Fourth Circuit in prior cases only considered forensic device searches at the border. In U.S. v. Kolsuz (2018), the court held that the forensic search of the defendant’s cell phone at the border “must be considered a nonroutine border search, requiring some measure of individualized suspicion” of a transnational offense, but the court declined to decide whether the standard is only reasonable suspicion or instead a probable cause warrant. Then in U.S. v. Aigbekaen (2019), the court held that a forensic device search at the border in support of a purely domestic law enforcement investigation requires a warrant. The court also reiterated the general Kolsuz rule for a forensic border-related device search: the “Government must have individualized suspicion of an offense that bears some nexus to the border search exception's purposes of protecting national security, collecting duties, blocking the entry of unwanted persons, or disrupting efforts to export or import contraband.” Now, manual searches are before the court.

In urging the Fourth Circuit to adopt a warrant standard for both manual and forensic device searches at the border, we argued that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest, against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

The U.S. Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border. The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country.

But a traveler’s privacy interests in their suitcase and its contents are minimal compared to those in all the personal data on the person’s cell phone or laptop. And a traveler’s privacy interests in their electronic devices are at least the same as those considered in Riley. Modern devices, over a decade later, contain even more data that can reveal even more intimate details about our lives.

We hope that the Fourth Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

Sophia Cope

EFF Stands in Solidarity With RightsCon and the Global Digital Rights Community

1 day 2 hours ago

When governments shut down spaces for dialogue, dissent, and collective organizing, the damage extends far beyond a single event. The abrupt cancellation of RightsCon 2026—the world’s largest annual global digital rights conference—is not just a logistical disruption for thousands of researchers, journalists, technologists, and activists—it is part of a growing global pattern of shrinking civic space and increasing hostility toward free expression and independent civil society.

Just days before the conference was set to begin and as participants had begun to arrive in Lusaka, organizers announced that RightsCon would no longer proceed in Zambia or online after mounting political pressure and demands that would have excluded vulnerable communities and constrained discussion. The U.N.’s World Press Freedom Day, which was set to take place just prior to the conference, was scaled down in light of the events, and its press freedom prize ceremony postponed to a later date.

RightsCon has long served as one of the few truly global convenings where civil society groups, grassroots organizers, technologists, and policymakers can meet on equal footing to confront some of the most urgent human rights challenges of the digital age—from censorship and surveillance to internet shutdowns, platform accountability, and the safety of marginalized communities online. EFF has had a presence at RightsCon since its inception in 2011, and had planned to meet with and learn from international partners and present our work during several sessions in Lusaka.

The cancellation is especially devastating because of what RightsCon represents. For many advocates—particularly those from the global majority—it is not merely another conference. It is a rare opportunity to build solidarity across borders, form lasting partnerships, learn from other regions’ experiences, secure funding and support for local work, and ensure that the people most impacted by digital repression have a seat at the table. Holding the event in southern Africa carried particular significance, promising to elevate regional voices and strengthen local digital rights networks.

What happened in Zambia sends a chilling message. According to organizers and multiple reports, the pressure surrounding the event included Chinese government demands to exclude Taiwanese participants and moderate discussions around politically sensitive topics. At a moment when governments around the world are increasingly restricting protest, targeting journalists, cutting funds for human rights work, banning young people from online communities, censoring speech, and criminalizing civil society activity, the cancellation of RightsCon reflects the broader erosion of democratic space online and offline.

Organizations from the digital rights community have spoken out forcefully against the government’s cancellation of the conference, making clear that these attacks on civic participation will not pass unnoticed. Access Now described the decision as evidence of “the far reach of transnational repression targeting civil society.” Index on Censorship’s response warned that the move represents a dangerous escalation in attempts to suppress open dialogue, while IFEX rightly described the cancellation as a blow not just to one conference, but to freedom of expression and assembly everywhere.

We are also heartened to see statements from members of the international community—including Tabani Moyo, who spoke about the impact on the southern African community, and Taiwanese participant Shin Yang, who emphasized the importance of preserving spaces where marginalized communities can safely organize and speak—underscoring that attempts to silence civil society only reinforce the importance of defending open, global spaces for organizing and debate.

Even as this cancellation represents a serious setback, it is important to remember that the digital rights community has always adapted under pressure. Around the world, advocates continue to organize in increasingly difficult environments, finding new ways to connect, collaborate, and resist censorship and repression. Upcoming events like the Global Gathering and FIFAfrica—both of which EFF plans to attend—will bring together members of the community to tackle tough issues. And in the meantime, groups from all over the world are working together to incorporate global perspectives into platform regulations, oppose age verification laws, protect against surveillance, and fight internet shutdowns, among many other efforts.

RightsCon itself emerged from a recognition that defending human rights in the digital age requires international solidarity—and that need has not disappeared.

The conversations that were supposed to happen in Lusaka will continue elsewhere: in community spaces, online gatherings, encrypted chats, and future convenings yet to come. Governments may close venues, restrict participation, or attempt to narrow the boundaries of acceptable speech, but they cannot erase the global movement working to defend a free and open internet.

RightsCon will not go on in Zambia, but we remain heartened and inspired by the strength of the global digital rights community, stand with them in solidarity, and look forward to seeing our allies at the next RightsCon and other upcoming events.

Electronic Frontier Foundation

Congress Narrowed the GUARD Act, But Serious Problems Remain

3 days 20 hours ago

Following criticism, lawmakers have narrowed the GUARD Act, a bill aimed at restricting minors’ access to certain AI systems. The earlier version could have applied broadly to nearly every AI-powered chatbot or search tool. The amended bill focuses more narrowly on so-called “AI companions”—conversational systems designed to simulate emotional or interpersonal interactions with users. 

That change does address some of the broadest concerns raised about the original proposal, though some questions about the bill’s reach remain. Bottom line: the revised bill still creates serious problems for privacy, online speech, and parental choice.

TAKE ACTION

Tell Congress: Oppose the GUARD Act

The new GUARD Act still requires companies offering AI companions to implement burdensome age-verification systems tied to users’ real-world identities. Even parents who specifically want their teenagers to use these systems would still face significant hurdles. A family might decide that a conversational AI tool helps an isolated teenager practice social interaction, or engage in harmless creative roleplay. A parent deployed in the military might set up a persistent AI storyteller for a younger child. Under the revised bill, those users could still face mandatory age checks tied to sensitive personal or financial information before they or their children can use these services.

The revised bill also leaves important definitions unclear while sharply increasing penalties for developers and companies that get those judgments wrong. Congress narrowed the GUARD Act. But it is still trying to solve a complicated social problem with vague legal standards, heavy liability, and privacy-invasive verification systems.

Intrusive Age-Verification Remains In The Bill

The revised GUARD Act still requires companies offering AI companions to verify that users are adults through a “reasonable age verification” system. The bill allows a broader set of verification methods than the earlier version, but they are still tied to a user’s real-world identity—such as financial records, or age-verified accounts for a mobile operating system or app store. 

That approach still raises serious privacy and access concerns. Millions of Americans do not have current government ID, accounts at major banks, or stable access to the kinds of digital identity systems the bill contemplates. Even for those who do, requiring identity-linked verification to access online speech tools creates real risks for privacy, anonymity, and data security. Many people are rightly creeped out by age-verification systems, and may simply forgo using these services rather than compromise their privacy and security.

The revised definition of “AI companion” is also narrower than before, but it’s unclear at the margins. The bill now focuses on systems that “engage in interactions involving emotional disclosures” from the user, or present a “persistent identity, persona or character.” 

EFF appreciates that the authors recognized that the prior definition could reach a variety of AI systems that are not chatbots, including internet search engines. But the narrowed definition could be read to also apply to a variety of chat tools that are not AI companions. For example, many modern online conversational systems increasingly recognize and respond to users’ emotions. Customer service systems, including completely human-powered ones that existed long before AI chatbots, have long been designed to recognize frustration and respond empathetically. As conversational AI becomes more emotionally responsive, a customer service chatbot’s efforts to empathize may sweep it within the bill’s definition. 

Bigger Penalties, Bigger Incentives To Restrict Access

The revised bill also sharply increases penalties. Instead of $100,000 per violation, companies—including small developers—can face fines of up to $250,000 per violation, enforced by both federal and state officials.

That kind of liability creates incentives to over-restrict access, especially for minors. Smaller developers, in particular, may decide it is safer to block younger users entirely, disable conversational features, or avoid developing certain tools at all, rather than risk severe penalties under vague standards.

The concerns driving this bill are real. Some AI systems have engaged in troubling interactions with vulnerable users, including minors. But the right answer to that is targeted enforcement against bad actors, and privacy laws that protect us all. The revised GUARD Act instead responds with a privacy-invasive system that burdens the right to speak, read, and interact online.

Congress did improve this bill, but EFF’s core speech, privacy, and security issues remain.

TAKE ACTION

Tell Congress: Oppose the GUARD Act

Joe Mullin

Free Signal Guide

4 days 2 hours ago

EFF friend Guy Kawasaki* has written a book: Everybody Has Something to Hide: Why and How to Use Signal to Preserve Your Privacy, Security, and Well-Being. This guide is now available in Spanish and English as an ebook in the EPUB format that you can download here. Take a look and consider sharing it with anyone you know who uses (or should use) Signal. 

And don't forget: EFF has two short guides on using Signal on our Surveillance Self-Defense site: an intro How to Use Signal guide, and a guide on Managing Signal Groups.

Everybody Has Something to Hide: Why and How to Use Signal to Preserve Your Privacy, Security, and Well-Being courtesy of Guy Kawasaki. 

*Guy Kawasaki is an EFF donor.

Allison Morris

Milestone 1.0.0 Release of APK Downloader `apkeep` Powers Research on Android Apps

5 days 21 hours ago

Last week, we released apkeep version 1.0.0, the latest edition of our command-line Android package downloading software. Rather than indicating major changes for the project, this milestone instead signifies arriving at a relatively stable and mature place after more than four years of gradual iteration.

What’s New in 1.0.0

We have packed a few fresh features into this latest release, though—all focused on the Google Play Store: 

  • You can now download a dex metadata file associated with an app containing a Cloud Profile, which provides information on app performance based on real usage. 
  • You can now provide a token generated by the Aurora Store’s dispenser to log in anonymously for app downloads. 
  • You can now specify your own device profile when downloading apps from Google Play, which the store uses to deliver the app variant that matches your particular device specifications. 
  • We’ve also fixed an authentication bug introduced by the Play Store API.

In addition to the various Linux, Windows, and Android environments we support, we’re also happy to announce that since the last release in October we’ve been included in Homebrew for macOS users!
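For readers new to the tool, basic usage looks roughly like the following sketch. The first two commands reflect apkeep's long-standing interface; the package name `com.example.app` is a placeholder, and the `-o` options shown for Google Play are illustrative assumptions rather than confirmed flag names, so check `apkeep --help` for the exact options your version supports.

```shell
# Install via Homebrew on macOS (apkeep is also packaged for Linux, Windows, and Android).
brew install apkeep

# Download an app by package ID from the default source into the current directory.
apkeep -a com.example.app .

# Download from the Google Play Store instead. The values passed via -o
# (device profile, Aurora dispenser token) are illustrative assumptions;
# run `apkeep --help` to confirm the option names in your version.
apkeep -a com.example.app -d google-play -o device=px_3a,aurora-token=YOUR_TOKEN .
```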

How Researchers Use apkeep to Understand the Android App Landscape

Researchers and users contributed most of the features of this release, including downloading dex metadata containing Google’s Cloud Profiles. This feature supports their research into how these Android compilation profiles can be a vital source of information for evaluating dynamic testing. Numerous other projects have cited apkeep in their own workflows. For example, Exodus Privacy uses it to power the εxodus tool’s downloads when monitoring the privacy properties of apps. Various research teams have noted their use of the tool in whitepapers, including one team that used it to download 21,154 apps in a widespread study of Android evasive malware. We are proud to provide a reliable tool in the toolbox they use to power their work.

What’s in Store for apkeep?

Our goals with apkeep have remained constant: provide a reliable, fast, and safe way to download apps from multiple app providers, not just the Google Play Store. While we’ve focused on it as the major Android app provider of choice across much of the world, we’ve expanded support to other stores as well, such as F-Droid for downloading open source apps. We’d like to continue broadening apkeep’s list of supported providers, to make it easy to do comparative analysis of apps provided in different contexts. For this, we’d love your contributions.

How You Can Help

If you’re using apkeep as part of your own toolbox (whether using it to do malware analysis, auditing apps, or simply using it as an app archiving tool), let us know! And if you like what we do, please consider donating to EFF to support our work.

Bill Budington

👎 California's Terrible, No Good, Very Bad Social Media Ban | EFFector 38.9

6 days 3 hours ago

We'd all like the internet to be a better place—for kids and adults alike. But in the name of online safety, governments around the world are racing to impose a dangerous new system of control. Are age gates the silver bullet to the internet's problems they're being promoted as? Or are we being sold a bill of goods? We're answering this question and more in our latest EFFector newsletter.

JOIN OUR NEWSLETTER

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue covers an attack on VPNs in Utah, a livestream on how to disenshittify the internet, and California's proposed social media ban that could set a dangerous new precedent for online censorship.

Prefer to listen in? EFFector is now available on all major podcast platforms. This time, we're having a conversation with EFF Legislative Analyst Molly Buckley on why social media bans can't sidestep the U.S. Constitution. You can find the episode and subscribe on your podcast platform of choice:


Want to help push back on these misguided regulations? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight for privacy and free speech online when you support EFF today!

Christian Romero

The SECURE Data Act is Not a Serious Piece of Privacy Legislation

6 days 5 hours ago

The federal SECURE Data Act is not a serious consumer privacy bill, and its provisions—if enacted—would be a retreat from already insufficient state protections.

Republicans on the House Energy and Commerce Committee released a draft of the bill late last month without bipartisan support. The bill is weaker than congressional proposals in prior years, as well as most of the 21 state consumer privacy laws already on the books.

The bill could wipe out hundreds of state privacy protections.

Most troubling for EFF: the bill would preempt dozens, if not hundreds, of state laws that regulate related topics, and it would not allow consumers to sue to protect their own rights (commonly called a private right of action). And it comes nowhere close to banning online behavioral advertising—a practice that fuels technology companies’ always increasing hunt for personal data.

The bill also suffers from many other flaws including weak opt-out defaults, inadequate data minimization requirements, and large definitional loopholes for companies.

Key Provisions

The bill would give consumers some rights to take action to control their personal data—like access, correction, deletion, and limited portability. These rights have become standard in all data privacy proposals in recent years.

The bill would also require companies to obtain your consent before processing your sensitive data, or using any of your personal data for a previously undisclosed purpose. Absent your consent, a company couldn’t do these things.

Further, the bill would allow you to opt out of (1) targeted third-party advertising, (2) the sale of your personal data, and (3) profiling of you that has a legal, healthcare, housing, or employment effect. Unfortunately, a company could keep doing these invasive things to you, unless you opted out.

The bill would also require data brokers that make at least 50 percent of their profits from the sale of personal data to register in a public database maintained by the Federal Trade Commission (FTC).

Preemption of Too Many State Laws

Federal privacy laws should allow states to build ever stronger rights on top of the federal floor. Many federal privacy laws allow this, including the Health Insurance Portability and Accountability Act, the Video Privacy Protection Act, and the Electronic Communications Privacy Act.

The SECURE Data Act would not do that. Instead, it would wipe out dozens, if not hundreds, of existing state privacy protections. Section 15 of the bill would preempt any “law, rule, regulation, requirement, standard, or other provision [that] relates to the provisions of this Act.” This would kill the 21 state consumer privacy laws passed in the past few years. These state laws aren’t strong enough, but they are still better than this federal proposal. For example, California maintains a data broker deletion tool and requires companies to comply with automatic opt-out signals—including one that is built into EFF’s Privacy Badger.

Because the SECURE Data Act has provisions that relate to data privacy and security, it could preempt all 50 state data breach laws and many others. It could also preempt state laws related to specific pieces of sensitive data, like bans on the sale of biometric or location information. Some states like California have constitutional provisions that protect an individual’s right to privacy, which can be enforced against companies. That constitutional provision, as well as state privacy torts, could also be in danger if this bill passed.

No Private Enforcement, A New Cure Period, and Vague Security Powers

Strong consumer privacy laws should allow consumers to take companies to court to defend their own rights. This is essential because regulators do not have the resources to catch every violation, and federal consumer enforcement agencies have been gutted during the current administration.

The SECURE Data Act does not have a private right of action. The FTC, along with state attorneys general, has primary enforcement authority. The bill also gives companies 45 days to “cure” any violation with no penalty after they are caught.

Moreover, Section 8 of the bill creates a vaguely defined self-regulatory scheme in which companies can apply to be audited by an “independent organization” that will apply a “code of conduct.” Following this code of conduct would give companies a presumption that they are complying with the law. This provision is an implicit acknowledgement that the bill does not provide regulators with any new resources to enforce new protections.

Section 9 of the bill would give the Secretary of Commerce broad power to “take any action necessary and appropriate to support the international flow of personal data,” including assessing “security interests of the United States.” The scope of this amorphous provision is unclear, but it likely does not belong in a consumer protection bill.

Weak Privacy Defaults

Your online privacy should not depend on whether you have the time, patience, and knowledge to navigate a website and turn off invasive tracking. Good privacy laws build in data minimization requirements—meaning there should be a default standard that prevents companies from processing your data for purposes that are not needed to provide you with the service you asked for.

The SECURE Data Act puts the burden on you to opt out of invasive company practices, like targeted third-party advertising, the sale of your personal data, and profiling. The bill at least requires companies to obtain your consent before processing your sensitive data (like selling your precise location). These consent requirements, however, are often an invitation for companies to trick you into clicking a button to give away your rights in hard-to-read policies. Indeed, few people would knowingly agree to let a company sell their personal data to a broker who turns around and sells it to the government.

Section 3 of the bill uses the term “data minimization,” but only in name. The provision does not limit a company’s processing of data to only what is necessary to provide the customer with the good or service they asked for. Instead, it limits processing to what a company “disclosed to the customer”—meaning if it is in the confusing privacy policy that nobody reads, it is okay.

And the bill would not even allow you to restrict certain uses of your data. As companies seek more data for AI systems, many internet users do not want their private personal data to be used to train those models. However, the bill makes clear that “nothing in this Act may be construed to restrict” a company from collecting, using, or retaining your data to “develop” or “improve” a new technology.

Other Flawed Definitions and Loopholes

The bill has numerous loopholes that technology companies would exploit if the bill were to become law. Below is just a sampling:

  • Government contractors: Under Section 13(b)(2), government contractors are exempt from the bill, which could be wrongly interpreted to exempt certain data brokers from sale restrictions when those sales are made to the government. This type of exemption could benefit surveillance companies like Clearview AI, which previously argued it was exempt from Illinois’ strict biometric law using a similar contractor exception. This is likely not the authors’ intention, since the definition of sale includes those made “to a government entity.”
  • Sale definition: The definition in Section 16(28) is too narrow. A sale should mean any exchange for monetary “or other valuable” consideration, as in some other privacy laws.
  • Biometric information definition: The definition in Section 16(4) excludes data generated from a photo or video, and the definition excludes face scans not meant to “identify a specific individual.” This could be wrongly interpreted to allow biometric identification from security camera footage, or biometric use for sentiment or demographic analysis.
  • Personal data definition: The definition in Section 16(21) exempts “de-identified data” from the definition of personal data, which could allow companies to do anything with de-identified data because that data is not protected by the law. The problem is that “de-identified” data often is not truly de-identified, and can be re-identified.
  • Deletion requests: With regard to data that a company obtained from a third party, Section 2(d)(5) would treat a consumer’s deletion request merely as an opt-out request. And even if a customer requested deletion, a company might be able to retain the data for research purposes under Section 11(a)(9)(A).
  • Profiling definition: Under the definition in Section 16(25), companies could profile so long as the profiling is not “solely automated.” The flimsiest human review would exempt highly automated profiling.

Congress is long overdue to enact a strong comprehensive consumer data privacy law, and we have sketched what it should look like. But the SECURE Data Act is woefully inadequate. In fact, it would cause even more corporate surveillance of our personal information, by wiping out state laws that are more protective than this federal bill. Even worse, this bill would block state legislatures from protecting their residents from the privacy threats of tomorrow that are unforeseeable today. 

Mario Trujillo

EFF and 18 Organizations Urge UK Policymakers to Prioritize Addressing the Roots of Online Harm

1 week ago

EFF joins 18 organizations in writing a letter to UK policymakers urging them to address the root causes of online harm—rather than undermining the open web through blunt restrictions.

The coalition, which includes Mozilla, Tor Project, and Open Rights Group, warns that proposed measures following the passage of the Children’s Wellbeing and Schools Bill risk fundamentally reshaping the internet in harmful ways. Chief among these proposals are sweeping age-gating requirements and access restrictions that would apply not only to young people, but effectively to all users.

While framed as efforts to protect children online, these policies rely heavily on age assurance technologies that are either inaccurate, privacy-invasive, or both. As the letter notes, mandating such systems across a wide range of services—from social media and video games to VPNs and even basic websites—would force users to verify their identity simply to access the web. This creates serious risks, including expanded surveillance, data breaches, and the erosion of anonymity.

Beyond privacy concerns, the signatories argue that these measures threaten the core architecture of the open internet. Age-gating at scale could fragment the web into a patchwork of restricted jurisdictions, limit access to information, and entrench the dominance of powerful gatekeepers like app stores and platform ecosystems. In doing so, policymakers risk weakening the very qualities—interoperability, accessibility, and openness—that have made the internet a global public resource.

The letter also emphasizes what’s missing from the current policy approach: meaningful efforts to address the underlying drivers of online harm. Many digital platforms are designed to maximize engagement and profit through pervasive data collection and targeted advertising, often at the expense of user safety and autonomy. Rather than imposing access bans, the coalition calls on UK policymakers to hold companies accountable for these systemic practices and to prioritize user rights by design.

Importantly, the signatories highlight that the internet remains a vital space for young people: offering access to information, support networks, and opportunities for expression that may not exist offline. Policies that restrict access risk cutting off these lifelines without meaningfully reducing harm.

The message is clear: protecting users online requires more than heavy-handed restrictions. It demands thoughtful, rights-respecting policies that tackle the business models and design choices driving harm, while preserving the open, global nature of the web.

Jillian C. York

Shut Down Turnkey Totalitarianism

1 week ago

William Binney, the NSA surveillance architect-turned-whistleblower, called it the "turnkey totalitarian state." Whoever sits in power gains access to a boundless surveillance empire that scorns privacy and crushes dissent. Politicians will come and go, but you can help us claw the tools of oppression out of government hands.

JOIN EFF

Become a Monthly Sustaining Donor

We must stand strong to uphold your privacy and free expression as democratic principles. With members around the world, EFF is empowered to use its trusted voice and formidable advocacy to protect your rights online. Whether giving monthly or one-time donations, members have helped EFF:

  • Sue to stop warrantless searches of Automated License Plate Reader (ALPR) records, which reveal millions of drivers’ private habits, movements, and associations.

  • Launch Rayhunter, an open source tool that empowers you to help search out cell-site simulators capable of tracking the movements of protestors, journalists, and more.

  • Help journalists see through the spin of "copaganda" with our Selling Safety report, which breaks down how policing technology companies often market their tools with misleading claims.

Right now, the U.S. Congress is on the verge of renewing the international mass spying program known as Section 702, affecting millions. EFF is rallying to cut through the politics and give ordinary people a chance to stop this oppressive surveillance. It’s only possible with help from supporters like you, so join EFF today.

The New EFF Member Gear

Get this year’s new member t-shirt when you join EFF. Aptly titled "Claw Back," the design features an orange boy swatting at the street-level surveillance equipment multiplying in our communities. You might empathize with him, but there’s a better way. Let’s end the law enforcement contracts, harmful practices, and twisted logic that enable mass spying in the first place.

You can also get a brand-new set of eleven soft and supple polyglot puffy stickers as a token of thanks. Whether you're a kid or a kid at heart, these nostalgic stickers are perfect for digital devices, lunchboxes, and notebooks alike. Our little Ghostie protects privacy in six languages: Arabic, English, Japanese, Persian, Russian, and Spanish.

And for a limited time, get a Privacy Badger Crewneck sweater to help you browse the web with confidence. The embroidered Privacy Badger mascot appears above the Traditional Chinese for “privacy” because human rights are universal. Millions of people around the world use Privacy Badger, EFF's free browser extension that blocks hidden trackers that twist your web browsing into a commodity for Big Tech, advertisers, scammers, and data brokers.

Privacy is a human right because it gives you a fundamental measure of security and freedom. We owe it to ourselves to fight the mass surveillance used to control and intimidate people. Let’s do this. Join EFF today with a monthly donation or one-time donation and help claw back your privacy.

____________________

EFF is a member-supported U.S. 501(c)(3) organization. We've received top ratings from the nonprofit watchdog Charity Navigator since 2013! Your donation is tax-deductible as allowed by law.

Aaron Jue

EFF Submission to UK Consultation on Digital ID

1 week 1 day ago

Last September, the United Kingdom’s Prime Minister Keir Starmer announced plans to introduce a new digital ID scheme in the country. The scheme aims to make it easier for people to prove their identities by creating a virtual ID on personal devices with information like names, date of birth, nationality or residency status, and a photo to verify their right to live and work in the country. 

Since then, EFF has joined UK-based civil society organizations in urging the government to reconsider this proposal. In one joint letter from December, ahead of Parliament’s debate around a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s proposal.

Nevertheless, politicians have continued to explore ways to build out a digital ID system in the country, often fluctuating between different ideas and conceptualisations for such a scheme. In their search for clarity, the government launched a consultation, ‘Making public services work for you with your digital identity,’ seeking views on a proposed national digital ID system in the UK. 

EFF submitted comments to this consultation, focusing on six interconnected issues:

  1. Mission creep
  2. Infringements on privacy rights 
  3. Serious security risks
  4. Reliance on inaccurate and unproven technologies
  5. Discrimination and exclusion
  6. The deepening of entrenched power imbalances between the state and the public.

Even the strongest recommended safeguards cannot resolve these issues, nor the core problem: a mandatory digital ID scheme shifts power dramatically away from individuals and toward the state. Such schemes are pursued as a technological solution to offline problems, but instead allow the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences. 

No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. It is essential that the UK government listen to people in the country and say no to digital ID. 

Read our submission in full here.

Paige Collings

Getting Digital Fairness Right: EFF's Recommendations for the EU's Digital Fairness Act

1 week 1 day ago
Digital Fairness in the EU

The next few years will be decisive for EU digital policymaking. With major laws like the Digital Services Act, the Digital Markets Act, and the AI Act now in place, the EU is entering an enforcement era that will show whether these rules are rights-respecting or drift toward overreach and corporate control. With the EU’s proposed Digital Fairness Act (DFA), the Commission is now turning to increasingly visible risks for users, such as dark patterns and exploitative personalization. Its “Digital Fairness Fitness Check” makes clear that existing consumer rules need updating to reflect how digital markets operate today.

But not all proposed solutions point in the right direction. Regulators are already flirting with measures that rely on expanded surveillance, such as age verification mandates—surface-level fixes that risk undermining fundamental rights while offering little more than a false sense of protection.

For EFF, digital fairness means addressing the root causes of harm, not requiring platforms to exert more control over their users. It means safeguarding privacy, freedom of expression, and the rights of users and developers.

If the DFA is to make a real difference, it must tackle structural imbalances. Lawmakers should focus on two interlocking principles. First, prioritize privacy. Reforms should address harms driven by surveillance-based business models, alongside deceptive design practices that impair informed choices. Second, strengthen user sovereignty, which is also a necessary precondition for European digital sovereignty more broadly. Strengthening user sovereignty means taking measures that address user lock-in, coercive contract terms, and manipulative defaults that limit users’ ability to freely choose how they use digital products and services.

Together, these principles would support the EU’s objectives of consistent consumer protection, fair markets, and a more coherent legal framework. If implemented properly, the EU could address power imbalances and build trust in Europe’s digital economy.

Ban Dark Patterns

Dark patterns are practices that impair users’ ability to make informed and autonomous decisions. Many companies deploy these tactics through interface design to steer choices and influence behavior. Their impact goes beyond poor consumer decisions. Dark patterns push users to share personal data they would not otherwise disclose and undermine autonomy by making alternatives harder to access.

The DFA should address this by clearly prohibiting misleading interfaces that distort user choice in commercial contexts. While the Digital Services Act introduced a definition, it only partially bans such practices and leaves gaps across existing consumer law rules. The DFA should close these gaps by, at the very least, introducing explicit prohibitions and clearer enforcement rules, without resorting to design mandates.

Tackle Commercial Surveillance

At the core of digital unfairness lies the pervasive collection and use of personal data. Surveillance and profiling drive many of the harms regulators are trying to address, from dark patterns to exploitative personalization. The DFA should tackle these incentives directly by reducing reliance on surveillance-based business models. These practices are fundamentally incompatible with privacy and fairness, and they distort digital markets by rewarding data exploitation rather than quality of service. At a minimum, the DFA should address unfair profiling and surveillance advertising by strengthening privacy rights and banning pay-for-privacy schemes. Users should not have to trade their data or pay extra to avoid being tracked. Accordingly, the DFA should support the recognition of automated privacy signals by web browsers and mobile operating systems, which give users a better way to reject tracking and exercise their rights. Practices that override such signals through banners or interface design should be considered unfair.

Addressing surveillance and profiling also protects children, since many online harms are tied to the collection and exploitation of their data. Systems that serve ads or curate content often rely on intrusive profiling practices, raising concerns about privacy and fairness, particularly when applied to minors. Rather than turning to invasive age verification, the focus should be on limiting data use by default.

Strengthen User Sovereignty

There is a major gap in how EU law addresses user autonomy in digital markets: many digital products and services still restrict what people can do with what they pay for through opaque or one-sided licensing terms, technical protection measures, and remote controls. These mechanisms increasingly limit lawful use, modification, or access after purchase, allowing providers to revoke access, disable functionalities, or degrade performance over time. In practice, this turns ownership into a conditional rental.

Consumers must be able to use and resell digital goods without hidden limitations and with clear licensing terms. Too often, technical and contractual lock-ins, including remote lockouts and unilateral restrictions on functionality, erode that control. Recent legal reforms show that progress is possible. Rules such as those under the Digital Markets Act have begun to curb technical and contractual barriers and promote user choice. However, many restrictions persist.

The DFA must address these practices by targeting unfair post-sale restrictions and strengthening users’ ability to control and switch services. This means setting clear limits on unfair terms and misleading practices, alongside robust transparency on how digital services function over time. It should also strengthen interoperability and support user control, allowing people to access third-party applications and to let trusted applications act on their behalf, reducing lock-in and expanding meaningful choice in how users interact with digital services.

Christoph Schmon

A Bridge to Somewhere: How to Link Your Mastodon, Bluesky, or Other Federated Accounts

1 week 4 days ago

One of the central promises of open social media services is interoperability—the idea that wherever you personally decide to post doesn’t require others to be there just to follow what you have to say. Think of it like a radio broadcast: you want to reach people and don't care where they are or what device they're using. For example, in theory, a Bluesky user can follow someone on Mastodon or Threads without having to create a Mastodon or Threads account. But these systems are still a work in progress, and you might need to tweak a few things to get it working correctly.

Right now, broadcasting your message across social platforms can be a funky experience at best, deliberately broken up by oligopolists. The idea of the open web was baked into the internet via protocols like HTML and RSS that made it easy for anyone to visit a website or follow most blogs. The fact social media isn’t similarly open reflects an intentional choice to privatize the internet. 

Bridging and managing your posts so they’re viewable outside a singular source is part of the broader philosophy of POSSE, short for Post Own Site Syndicate Elsewhere (sometimes it’s Post Own Site, Share Everywhere). Instead of managing several accounts across different services, you post once to one primary site (which might be your personal website, or just one social media account), then set it up so it automatically publishes everywhere else. This way, it doesn’t matter where you or your audience is, and they're not walled off by account registration requirements. 

We’ll come back around to POSSE at the end of this post, but for now, let’s assume you just want your current main open social media account to actually have a chance to reach the most people it can. 

Why Post to the Open Social Web

Because the Fediverse and ATmosphere use different protocols, we need to use a third-party tool so accounts can communicate with each other. For that, we’ll need a bridge. As the name suggests, a bridge can connect one social media account to another, so you can post once and spread your message across several places. This isn’t just some niche concept: major blogging platforms like WordPress and Ghost integrate posting to the Fediverse.

Bridging is an important facet of POSSE, but also something more people should consider, even if they don’t run their own websites. For example, if you don’t want to create a Threads account just to interact with your one friend who uses that platform, you shouldn’t have to. The good news is, you don’t. There are several bridging services, like Fedisky, RSS Parrot, and pinhole, but Bridgy Fed is currently the simplest to use, so we’ll focus on that. 

How to Post to Bluesky from Mastodon

From your Mastodon account (or other Fediverse account; for simplicity’s sake we’ll stick to Mastodon throughout), search for the username @bsky.brid.gy@bsky.brid.gy and follow that account. Once you do, the account will follow you back, you’ll be bridged, and people can find you from their Bluesky accounts. You should also get a DM with your bridged username. If you don’t see the @bsky.brid.gy@bsky.brid.gy user when you search, your Mastodon instance may be blocking the bridging tool. 

Threads users who have enabled Fediverse sharing will be able to find you with your standard Mastodon username (i.e., @your_user_name@mastodon.social), but if they haven’t enabled sharing, they will not be able to see your account. While this search is still a beta feature, you might find it easier to share the full URL, which would look like this: https://www.threads.net/fediverse_profile/@your_user_name@mastodon.social

People on Bluesky can find you by: searching for your Mastodon username or, if that doesn’t work, for @your_user_name.instance.ap.brid.gy. For example, if your username is @eff@mastodon.social, it would appear as @eff.mastodon.social.ap.brid.gy.
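The bridged-handle pattern is mechanical enough to sketch in a few lines. This hypothetical helper (not part of Bridgy Fed itself) just illustrates the naming convention described above: the @ separators become dots, and the .ap.brid.gy suffix is appended.

```python
def fediverse_to_bluesky_handle(handle: str) -> str:
    """Illustrate how a Fediverse handle appears on Bluesky via Bridgy Fed:
    @user@instance becomes user.instance.ap.brid.gy."""
    user, instance = handle.lstrip("@").split("@")
    return f"{user}.{instance}.ap.brid.gy"

# The EFF example from above:
print(fediverse_to_bluesky_handle("@eff@mastodon.social"))
# eff.mastodon.social.ap.brid.gy
```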

An example of a Mastodon username from the Bluesky web client.

How to Post to Mastodon and Bluesky from Threads

Yes, Threads is technically on the Fediverse, and you can bridge your Threads account to Mastodon or Bluesky (unless you’re in Europe, where the feature is disabled), but it’s a different process than on Bluesky and Mastodon.

  • Open Settings > Account > Fediverse Sharing and set the option to “On.” This will make your posts visible to Mastodon (or other Fediverse) users, and vice versa. 
  • Once Fediverse sharing is enabled, you’ll likely need to wait a week, then you can bridge to Bluesky. Search for and follow the @bsky.brid.gy@bsky.brid.gy account (it may take some digging to find it, but if that doesn’t work you can try visiting the profile page directly).

People on Mastodon (or other Fediverse accounts) and Bluesky can find you by: Mastodon users can find you at @your_threads_username@threads.net, while Bluesky users will find you at @your_threads_username.threads.net.ap.brid.gy (seriously, that will be the username). Note that some Mastodon instances may block Threads users entirely.

An example of a Threads username from the Mastodon web client.

An example of a Threads username from the Bluesky web client.

How to Post to Mastodon and Threads from Bluesky

From your Bluesky (or other ATProto) account, search for the username “@ap.brid.gy” and follow that account. Once you do, the account will follow you back and you’ll be bridged, so people can follow you from Mastodon or other Fediverse accounts. You should also get a DM with your bridged username.

People on Mastodon (or other Fediverse accounts) and Threads can find you by: Your username will appear as @your_bluesky_username@bsky.brid.gy. For example, if your Bluesky username is @eff.bsky.social, it would appear as @eff.bsky.social@bsky.brid.gy.
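The convention in this direction is just as regular: the handle's leading @ is dropped and @bsky.brid.gy is appended. A hypothetical one-liner, for illustration only:

```python
def bluesky_to_fediverse_handle(handle: str) -> str:
    """Illustrate how a Bluesky handle appears on the Fediverse via Bridgy
    Fed: eff.bsky.social becomes @eff.bsky.social@bsky.brid.gy."""
    return f"@{handle.lstrip('@')}@bsky.brid.gy"

print(bluesky_to_fediverse_handle("@eff.bsky.social"))
# @eff.bsky.social@bsky.brid.gy
```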

An example of a Bluesky username from the Mastodon web client.

How to Post Everywhere from Your Own Website

You can bridge more than social media accounts. If you have your own website, you can bridge that too (as long as it supports microformats and Webmention, or an Atom or RSS feed; if you have a blog, there’s a good chance you’re already good to go). When you do, the bridged account will either post the full text (or image) of whatever you post to your personal site, or a link to that content, depending on how your website is set up. You’ll also probably want to log into your Bridgy Fed user page so you can manage the account. 

Where people can find your bridged account: Usually, a user can just search for your website’s URL on their decentralized social network of choice, or enter it on the Bridgy Fed page. But if that doesn’t work, they can try @yourdomain.com@web.brid.gy from Mastodon or @yourdomain.com.web.brid.gy from Bluesky.

An example of a bridged website username in the Mastodon web client.

How Your Account Username Looks on Each Platform

  • Mastodon account (e.g., @eff@mastodon.social): appears on Threads as @eff@mastodon.social; appears on Bluesky as @eff.mastodon.social.ap.brid.gy
  • Threads account (e.g., @eff@threads.net): appears on Mastodon as @eff@threads.net; appears on Bluesky as @eff.threads.net.ap.brid.gy
  • Bluesky account (e.g., @eff.bsky.social): appears on Mastodon and Threads as @eff.bsky.social@bsky.brid.gy
  • Website (e.g., yourdomain.com): appears on Mastodon and Threads as @yourdomain.com@web.brid.gy; appears on Bluesky as @yourdomain.com.web.brid.gy

You’re Bound to Run Into Some Quirks
  • Sometimes messages take a little while to cross over between networks, and sometimes they don't cross over at all.
  • You can’t log into a bridged account like a regular account, but Bridgy Fed does provide some tools to see incoming notifications and recent activity in case they’re not coming through properly.
  • ActivityPub and ATProto don’t have the same feature set, so you will have certain capabilities on one platform that you might not have on another. For example, you can edit posts on Mastodon, but not on Bluesky. If you edit a post that’s bridged from Mastodon to Bluesky, the Bluesky post will not be updated. 
  • Replies can sometimes get lost, especially if the person (or people) replying to you doesn’t have sharing turned on.
  • Ownership of accounts can get weird. For example, if you post to your own website and use a tool like Wordpress or Ghost for federation (more info below), you don’t necessarily get access to a “normal” social media account, with a standard login and password.
  • And more! This is still a work in progress that has some technical quirks, but it’s improving all the time, and it’s best to keep telling yourself that troubleshooting is part of the fun.
Other Cool Stuff You Can Do

As mentioned up top, there’s a lot more you can do, and an increasing number of tools are making this process simpler. Bridgy Fed is one way to post to more places from a single account, but it’s far from the only way to do so. Here are just a few examples.

  • Micro.blog is a paid service where you can blog from your own domain name, then post automatically to Mastodon, Bluesky, Threads, Tumblr, Nostr, LinkedIn, Medium, Pixelfed, and Flickr.
  • Ghost is a blogging and newsletter platform that offers direct integration with the Fediverse, as well as support for Bluesky. WordPress offers the option to join the Fediverse through a community plugin. Other newsletter platforms, like Buttondown, also have plans for federation. 
  • Surf.social is a landing page and social media utility where you can show off all your various accounts (federated or not). From the reader’s point of view, you can follow one publication’s numerous types of posts in one place. For example, 404 Media’s Surf.social feed includes its YouTube feed, podcast feed, and its journalists’ social media posts.
  • If you think these new handles are a bit ugly, you can use a custom domain from your own website for your Bluesky or Fediverse account. 

Of course, there are plenty of other tools, blogging platforms, and other utilities out there to help facilitate posting and bridging accounts, with new ones coming along every day. 

With proper support, time, and effort, eventually we will all be able to seamlessly interact across platforms, take our follows and followers to other services when a platform no longer suits our needs, and interact with a variety of web content regardless of what platform hosts it. Until then, we still need to do some DIY work, support the services we want to succeed, and push for more platforms and services to support federated protocols.

Correction: an earlier version of this blog was missing the full Bluesky username in the account username chart.

Thorin Klosowski

Utah’s New Law Targeting VPNs Goes Into Effect May 6th

1 week 4 days ago

Update, May 11, 2026: Utah has agreed not to enforce the VPN law until Sept. 3, 2026, after Aylo, the parent company of Pornhub.com, challenged the law in court.

For the last couple of years, we’ve watched the same predictable cycle play out across the globe: a state (or country) passes a clunky age-verification mandate, and, without fail, Virtual Private Network (VPN) usage surges as residents scramble to maintain their privacy and anonymity. We've seen this everywhere—from states like Florida, Missouri, Texas, and Utah, to countries like the United Kingdom, Australia, and Indonesia.

Instead of realizing that mass surveillance and age gates aren't exactly crowd favorites, Utah lawmakers have decided that VPNs themselves are the real issue.

On May 6, 2026, Utah will become, to EFF’s knowledge, the first state in the nation to target the use of VPNs to avoid legally mandated age-verification gates. While advocates in states like Wisconsin successfully forced the removal of similar provisions due to constitutional and technical concerns, Utah is proceeding with a mandate that threatens to significantly undermine digital privacy rights. 

What the Bill Does

Formally known as the “Online Age Verification Amendments,” Senate Bill 73 (SB 73) was signed by Governor Spencer Cox on March 19, 2026. While the majority of the bill consists of provisions related to a 2% tax on revenues from online adult content that is set to take effect in October, one of the more immediate concerns for EFF is the section regulating VPN access, which goes into effect this coming Wednesday.

The VPN Provisions

The new law explicitly addresses VPN use in Section 14, which amends Section 78B-3-1002 of existing Utah statutes in two primary ways:

  1. Regulation based on physical location: Under the law, an individual is considered to be accessing a website from Utah if they are physically located there, regardless of whether they use a VPN, proxy server, or other means to disguise their geographic location.
  2. Ban on sharing VPN instructions: Commercial entities that host "a substantial portion of material harmful to minors" are now prohibited from facilitating or encouraging the use of a VPN to bypass age checks. This includes providing instructions on how to use a VPN or providing the means to circumvent geofencing.

By holding companies liable for verifying the age of anyone physically in Utah, even those using a VPN, the law creates a massive "liability trap." Just like we argued in the case of the Wisconsin bill, if a website cannot reliably detect a VPN user's true location and the law requires it to do so for all users in a particular state, then the legal risk could push the site to either ban all known VPN IPs, or to mandate age verification for every visitor globally. This would subject millions of users to invasive identity checks or blocks to their VPN use, regardless of where they actually live. 

JOIN EFF

HELP US STOP THESE VPN BILLS ACROSS THE COUNTRY

"Don't Ask, Don't Tell"

In practice, SB 73 is different from the Wisconsin proposal in that it stops short of a total VPN ban. Instead, it discourages using VPNs by imposing the liability described above and by muzzling the websites themselves from sharing information about VPNs. This raises significant First Amendment concerns, as it prevents platforms from providing basic, truthful information about a lawful privacy tool to their users. 

Unlike previous drafts seen in other states, SB 73 doesn't explicitly ban the use of a VPN. Under a "don't ask, don't tell" style of enforcement, websites likely only have an obligation to ask for proof of age if they actually learn that a user is physically in Utah and using a VPN. If a site doesn’t know a user is in Utah, their broader obligation to police VPNs remains murky. So, while SB 73 isn’t as extreme as the discarded Wisconsin proposal, it remains a dangerous precedent.

Technical Feasibility

Then there is also the question of technical feasibility: Blocking all known VPN and proxy IP addresses is a technical whack-a-mole that likely no company can win. Providers add new IP addresses constantly, and no comprehensive blocklist exists. Complying with Utah’s requirements would require impossible technical feats.

The internet is built to, and will always, route around censorship. If Utah successfully hampers commercial VPN providers, motivated users will transition to non-commercial proxies, private tunnels through cloud services like AWS, or residential proxies that are virtually indistinguishable from standard home traffic. These workarounds will emerge within hours of the law taking effect. Meanwhile, the collateral damage will fall on businesses, journalists, and survivors of abuse who rely on commercial VPNs for essential data security.

These provisions won't stop a tech-savvy teenager, but they certainly will impact the privacy of every regular Utah resident who just wants to keep their data out of the hands of brokers or malicious actors.

Uncharted Territory

Lawmakers have watched age-verification mandates fail and, instead of reconsidering the approach, have decided to wage war on privacy itself. As the Cato Institute states: 

“The point is that when an internet policy can be avoided by a relatively common technology that often provides significant privacy and security benefits, maybe the policy is the problem. Age verification regimes do plenty of damage to online speech and privacy, but attacking VPNs to try to keep them from being circumvented is doubling down on this damaging approach.”

Attacks on VPNs are, at their core, attacks on the tools that enable digital privacy. Utah is setting a precedent that prioritizes government control over the fundamental architecture of a private and secure internet, and it won’t stop at the state’s borders. Regulators in countries outside the U.S. are still eyeing VPN restrictions, with the UK Children’s Commissioner calling VPNs a “loophole that needs closing” and the French Minister Delegate for Artificial Intelligence and Digital Affairs saying VPNs are “the next topic on my list” after the country enacted a ban on social media for kids under 15.

As this law goes into effect, we are entering uncharted territory. Lawmakers who can’t distinguish between a security tool and a "loophole" are now writing the rules for one of the most complex infrastructures on Earth. And we can assure you that the result won't be a safer internet, only an increasingly less private one.

SUPPORT EFF  

BECOME AN EFF MEMBER TODAY

Rindala Alajaji

Open Records Laws Reveal ALPRs’ Sprawling Surveillance. Now States Want to Block What the Public Sees.

1 week 5 days ago

Reporters, community advocates, EFF, and others have used public records laws to reveal and counteract abuse, misuse, and fraudulent narratives around how law enforcement agencies across the country use and share data collected by automated license plate readers (ALPRs). EFF is alarmed by recent laws in several states that have blocked public access to data collected by ALPRs, including, in some cases, information derived from ALPR data. We do not support pending bills in Arizona and Connecticut that would block the public oversight that access to ALPR information makes possible.

Every state has laws granting members of the public the right to obtain records from state and local governments. These are often called “freedom of information acts” (FOIAs) or “public records acts” (PRAs). They are a powerful check by the people on their government, and EFF frequently advocates for robust public access and uses the laws to scrutinize government surveillance.

But lawmakers across the country, often in response to public scrutiny of police ALPRs, are introducing or enacting measures aimed at excluding broad swaths of ALPR information from disclosure under these public records laws. This could include whole categories of important information: general information about the extent of law enforcement use; details on ALPR sharing across policing agencies; data on the number of license plate scans conducted, where they happened, and how many “hits” for license plates of interest actually occur; analyses on how many false matches or other errors occur; and images taken of individuals’ own vehicles. 

No thanks. Public records and public scrutiny of ALPR programs have shown that people are harmed by these systems and that retained ALPR data violates people’s privacy. In this moment, lawmakers should not be completely cutting off access to public records that document the abuses perpetrated by ALPRs. 

Transparency with privacy

To be sure, there are legitimate concerns about wholesale public disclosure of raw ALPR data. After all, many of the harms people experience from these systems are based on the government’s collection, retention, and use of this information. Public transparency rights should not exacerbate the privacy harms suffered by people subjected to ALPR surveillance. But many current proposals do not address legitimate privacy concerns in a measured way, much less seek to harmonize people’s privacy with the public’s right to know.

There is a better path to balancing privacy and transparency rights than outright bans or total disclosure. 

Any legislative proposal concerning public access to ALPR data must start with this reality: ALPR data is deeply revealing about where a person goes, and thus about what they are doing and who they are doing it with. That’s a reason why EFF opposes ALPRs. It is dangerous that the police have so much of our ALPR information. Even worse for our privacy would be for police to disclose our ALPR information to our bosses, political opponents, and ex-friends. Or to surveillance-oriented corporations that would use our ALPR information to send us targeted ads, or monetize it by selling it to the highest bidder.

On the other hand, EFF’s firsthand experience using public records from ALPR systems demonstrates the strong accountability value of public access to many kinds of ALPR data, including information like data-sharing reports and network audits. For example, in our “Data Driven” series, we used ALPR data-sharing and hit ratio reports to investigate the extent of ALPR data sharing between police departments and to analyze the number of ALPR scans that are ultimately associated with a crime-related vehicle. We have also identified racist uses of ALPR systems, ALPR surveillance of protestors, and ALPR tracking of a person who sought an abortion. Across the country, municipalities have been shutting down their contracts for ALPR use, often citing concerns with data sharing with federal and immigration agents. 

These records are not just informational—they are leverage. Communities, journalists, and local officials have used ALPR disclosures to block new deployments, refuse contract renewals, and terminate existing agreements with surveillance vendors whose practices proved too dangerous to continue. Without this evidentiary record, it is far harder for cities to exercise their procurement power to say no.

It is not always easy to harmonize transparency and privacy when one person wishes to use a public records law to obtain government records that reveal people’s personal information. The best approach is for public records laws to contain a privacy exemption that requires balancing, on a case-by-case basis, of the transparency benefits versus the privacy costs of disclosure. Many do. These provisions of public records laws already accommodate similar concerns about disclosing personal information of private individuals whose information the government may have collected, government employees’ private data, and other personal information. 

The balancing provisions in these laws are often flexible and allow for nuance. For example, if a government record contains a mix of information that does not reveal people’s private information and some that does, agencies and courts can disclose the non-private information while withholding the truly private information. This is often accomplished with blacking out, or redacting, the private information.

Applying this privacy-and-transparency balancing to ALPR records, it will often be appropriate for the government to disclose some information and withhold other information. Everybody should generally have access to records showing their own movements and other information captured by ALPRs, but the privacy protections in public records laws should foreclose a single person’s ability to get a copy of similar records about everyone else. And even with accessing your own data, there are complications with shared vehicles that should be considered when balancing privacy and transparency.

An example of where it may be appropriate to release unredacted data and images would be vehicles engaged in non-sensitive government business. For example, a member of the public might use ALPR scans of garbage trucks to identify gaps in service, which would not reveal private information. On the other hand, it would be inappropriate to release the scans of a government social worker visiting their clients. 

Public records laws should allow a requester to obtain some ALPR information about government surveillance of everyone else, in a manner that accommodates the public transparency interest in disclosure and people’s privacy interests. For example, the best public records laws would disclose the times and places that plate data was collected, but not plate data itself. This can be done, for example, by an agency or court finding that disclosing aggregated and/or deidentified ALPR data protects the privacy or other interests of individuals captured within the data. The best laws recognize that aggregation or de-identification of databases are redactions in service of individual privacy (which responding agencies must do), and are not creating new public records (which responding agencies sometimes need not do). 
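The aggregation-as-redaction idea can be made concrete. This hypothetical sketch (the record format and field names are invented for illustration) releases counts of scans per camera location and hour while withholding the plate numbers themselves:

```python
from collections import Counter

def aggregate_alpr_scans(scans):
    """De-identify ALPR records for public release: count scans per
    (camera location, hour) bucket and drop the plate numbers entirely."""
    return Counter((s["location"], s["timestamp"][:13]) for s in scans)

# Hypothetical raw records; only the aggregate would be disclosed.
scans = [
    {"plate": "ABC123", "location": "Main & 1st", "timestamp": "2026-05-06T14:22"},
    {"plate": "XYZ789", "location": "Main & 1st", "timestamp": "2026-05-06T14:48"},
    {"plate": "ABC123", "location": "Elm & 9th", "timestamp": "2026-05-06T15:02"},
]
print(aggregate_alpr_scans(scans))
# Counter({('Main & 1st', '2026-05-06T14'): 2, ('Elm & 9th', '2026-05-06T15'): 1})
```

The released counts still support the oversight uses described above (how much is collected, and where) without letting a requester reconstruct any individual vehicle's movements.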

Likewise, in a government audit log of police searches of stored ALPR data, it will often be appropriate to disclose an officer’s investigative purposes to conduct a search, and the officer’s search terms – but not the search term if it is a license plate number. Many people do not want the world to know that they are under police investigation, and many public records laws generally limit the disclosure of such sensitive facts because of the reputational and privacy harm inherent in that disclosure.

Aggregate ALPR information about, for example, the amount of data collected and error rates can have important transparency value and impact government policy. Requiring the public release of that kind of data contributes to informed public discussion of how our policing agencies do their jobs. This kind of information has been used to study, critique, and provide oversight of ALPR use.

Thus, the wholesale exemption of ALPR information from disclosure under state public records laws would stymie the public’s ability to monitor how their government is using powerful and controversial surveillance technology. EFF cannot support such laws.

Blocking transparency

In Connecticut, SB 4 is a pending bill that would exclude, from that state’s public records law, information “gathered by” an ALPR or “created through an analysis of the information gathered by” an ALPR. This could ultimately harm individual civilians, who would have less ability to protect themselves from law enforcement agencies that indiscriminately collect vehicle information. Other provisions of this bill would limit government use of ALPRs, and regulate data brokers.

In Arizona, SB 1111 would restrict public access to ALPR data “collected by” an ALPR. The bill would even make it a felony to access or use data from an ALPR (or disseminate it) in violation of this article, which apparently might apply to a member of the public who obtained ALPR data with a public records request. The bill’s author claims it adds “guardrails” for ALPR use.

Earlier this year, Washington state enacted a law that will exempt data “collected by” ALPRs from the state’s public records law. While “bona fide research” will still be a way for some people to obtain ALPR data, this may not include journalists and activists who analyze aggregate data to identify policy flaws. Notably, Washington courts found last year that information generated by ALPR, including images of an individual’s own vehicle, are public records; this new legislation will override that decision, blocking the ability for people to see what photos police have taken of their own vehicles. Other provisions of this new law will limit government use of ALPRs.

A year ago, Illinois’ HB 3339 ended use of that state’s public records law to obtain ALPR information used and collected by the Illinois State Police (ISP), including both information “gathered by an ALPR” and information “created from the analysis of data generated by an ALPR.” This Illinois language for just the ISP is very similar to what is now being considered in Connecticut for all state and local agencies. 

Sadly, the list goes on. Georgia exempted ALPR data (both “captured by or derived from” ALPRs) of any government agency from its open records law. Adding insult to injury, Georgia also made it a misdemeanor to knowingly request, use, or obtain law enforcement’s plate data for any purpose other than law enforcement. Maryland exempted “information gathered by” an ALPR from its public information act. Oklahoma exempted from its open records act the ALPR data “collected, retained or shared” by District Attorneys under that state’s Uninsured Vehicle Enforcement Program.

These laws and bills in seven states are an unwelcome national trend.

Next steps

We urge legislators to reject efforts to amend state public records laws to wholly exempt ALPR information. This would diminish meaningful oversight over these controversial technologies. Public disclosure of some ALPR information is important. 

There is a better approach for states that want to harmonize privacy and transparency in the context of ALPR data: 

  1. Open records laws should cover, and not exclude, information collected by ALPRs, and also any public records derived from that information.
  2. Open records laws should have a privacy exemption that applies to all records, including information collected or derived from ALPRs. That exemption should require a case-by-case balancing of the transparency benefits and privacy costs of disclosure. These provisions work best when agencies and courts can analyze the context of the particular records, the weight of the privacy interests and public interests at stake, and other specific facts to fashion the best balance between these competing values. 
  3. When a document contains both exempt and non-exempt information, open records laws should require disclosure of the latter and withholding of the former. The best public records laws allow agencies to black out, or redact, specific private information while disclosing non-private information in the same records, threading the privacy and transparency needle.
  4. Finally, in the context of a law enforcement ALPR database (including both data collected by ALPRs and audit logs of police searches of stored ALPR data), the law should permit agencies to disclose aggregated and/or deidentified data, while withholding personally identifiable data. Importantly, the law should recognize that the steps an agency takes to protect individual privacy in ALPR databases should not be construed as creating a new public record. 

FOIA balancing standards are one layer in a larger governance stack, and work best alongside strong guardrails on whether and how governments procure ALPR systems in the first place: public debate over vendor contracts, binding surveillance ordinances, strict data‑retention limits, and clear pathways to end ALPR programs entirely where the risks prove too great.

Beryl Lipton

Digital Hopes, Real Power: From Connection to Collective Action

1 week 5 days ago

This is the fifth and final installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the rest of the series here.

If the Arab Spring was defined by optimism about what the internet could do, the years since have been marked by a more sober understanding of what it takes to defend it. 

Back in 2011, the term “digital rights” was still fairly new. While in the decades prior, open source and hacker communities—as well as a handful of organizations including EFF—had advocated for digital freedoms, it was through the merging of disparate communities from around the world in the 2000s that digital rights came to be more clearly understood as an extension of fundamental human rights.

In 2011, we observed that there were only a few organizations focused on digital rights in the region. Groups like Nawaat, which emerged from the Tunisian diaspora under the Ben Ali regime; the Arab Digital Expression Foundation, formed to promote the creative use of technology; and SMEX, which was initially created to teach journalists and others about social media but has grown to become a powerful force in the region, led the way. Since that time, dozens of organizations have emerged throughout the region to promote freedom of expression, innovation, privacy, and digital security.

Understanding how the digital rights movement evolved in the Middle East and North Africa requires a closer look at the communities that shaped it, and the organizations that are carrying on the fight today. Perspectives from people and organizations that were key to these efforts offer critical insight into how the movement has grown and what challenges lie ahead.

Reem Almasri, a senior researcher and digital sovereignty consultant, says that:

‘Digital rights’ emerged as a term around the Arab Spring, when the internet was still a fairly unregulated space, we were still trying to figure out the tech companies’ policies, and force governments to look at the internet as a fundamental right like water and electricity.

But then the need to converge digital rights to everyday rights—economic, political, social rights—and to connect it to geopolitics has started to be thought about, and to be in discussion as well. And to not look at digital rights as a separate field from everything else that’s affecting it, from the geopolitical context.

Mohamad Najem, who co-founded SMEX in 2008 and has led it to become the largest organization in the region, told me that, at the time, “Nobody gave [social media] a lot of attention in our region.” Their work was “a positive approach to social media, how we can democratize sharing information, how we can share more from civil society, change people’s minds, et cetera.”

“After that phase,” he continues, “we can think about 2012-2013—after the Arab Spring, as an organization we started looking at the infrastructure of the internet, and how freedom of expression and privacy are affected. That’s when we started looking more at what we call digital rights.”

Towards Tech Accountability

In the aftermath of the Arab Spring, social media companies moved from a largely hands-off approach to governance toward more formalized—and often opaque—content moderation systems. Platforms expanded their trust and safety teams and began working more closely with civil society through trusted partnerships in the region and globally. But, Mohamad Najem says:

After the expansion of tech accountability itself and the adaptation of tech companies, we’ve noticed that it’s not taking us anywhere. Gradually we’ve come to a new phase where it feels like tech accountability is an economy by itself that is not leading to real results. So the next phase for us at least and maybe for others in global majority communities is how we can focus on digital public good, how we can push more governments, private and public institutions to adopt more open source software, to look at the ecosystem and understand the US threats happening now, et cetera.

Another group that has played a key role in the fight for digital rights and tech accountability in the region is 7amleh, a Palestinian organization that was founded in 2013. At the time, says Jalal Abukhater:

[I]t was unique and interesting in Palestinian society to have a human rights organization dedicated fully to the topic of digital rights, you know, human rights in a digital format. However, with the years, we saw various milestones, we saw progress of policy decisions and movements through the Israeli government to influence content moderation in Big Tech companies. We saw problems there as an organization.

7amleh took a leading stance in fighting to preserve the digital rights of Palestinians during a period of very strong influence by the Israeli government. The organization produced important reporting on the state of online content moderation at a time when the topic was rarely discussed, even though major government influence and political suppression were clearly underway.

An Ever-Expanding Ecosystem

While in the early days, the digital rights movement attracted specialists, today, people from other fields have recognized how digital rights intersect with their work, and the digital rights community has embraced them.

Almasri says:

Because the digital rights movement has been decentralizing and has stopped being a speciality, it stopped being an exclusive thing for digital rights specialists, since of course the internet not only in the Arab region but all over the world has become a fundamental infrastructure for running any kind of sensitive operations, or operations in general…all types of organizations, and companies, and initiatives are thinking about their digital security, about how internet laws are affecting the use of the internet, or putting them at risk, and how surveillance technologies are affecting their operations.

Abukhater credits the collaborative work that emerged within the region over the years in building the movement’s strength:

[Today], civil society and digital civil society have many forums, many coalitions and networks, but it’s always important to remember that this is work that builds over many years of experience, and relationships, and networks—that it’s different parties coming to support each other at different phases to ensure that this kind of work succeeds and that this ecosystem is sustained globally with support from partner organizations which were very crucial in ensuring that this ecosystem is sustained, especially in Palestine.

Growing Collaborations

Conferences like Bread & Net, first held in Beirut in 2018, and the Palestine Digital Activism Forum (PDAF), first held in Ramallah in 2017, bring activists, academics, journalists, and other practitioners together to network and learn about each other’s work. The pandemic, conflict, and other barriers haven’t stopped either conference from carrying on: PDAF has become an annual virtual event that draws big-name speakers, while Bread & Net has spaced out its meetings but continues to draw bigger crowds each time.

Almasri credits these meetings with expanding the movement beyond the traditional techies and activists who first got involved. “You see a wide spectrum of different fields. You see artists, archivists, journalists joining these conversations, which is definitely on the brighter side of things when it comes to this field, or this scene.”

She also credits the emergence of alliances such as the Middle East Alliance for Digital Rights (MADR, of which EFF is a member), founded in 2020 by individuals and organizations who had been working together for many years to formalize those collaborations.

“Other than the collaborations at the advocacy level, [MADR] creates a sort of pressure point on Big Tech, on content moderation policies, allows for certain coordination at the level of the UN, et cetera, which I see as really positive because it brings some of the redundant efforts together and helps decide on priorities.”

Looking Forward

In thinking about the future of the movement, Almasri and Najem agree that digital rights are no longer a niche. In Najem’s words, “It’s about everything else…it’s about everything.” 

Almasri adds:

[W]hen it comes to priorities, things that this scene has been working on, I feel that October 7 [2023] was a big turning point in the way that digital rights activists, researchers, and academics—this field—look at digital rights in general. Of course, there is the major question of the need to revise tactics to fight Israel’s tech-enabled genocide, which is also empowered by the global economy, Big Tech, and governments of the world. What alliances should we start building on a regional and global level?

She sees ‘digital sovereignty,’ the ability of people and communities to choose, control, and use technology that serves their needs and values, as one of the next big topics for the movement to tackle, as debates over who owns and hosts our data have sharpened amid revelations that U.S. companies have played a role in regional conflicts.

There have been pockets of debates on how to achieve digital sovereignty, especially from human rights organizations documenting war crimes … There’s an awareness of how the dependence on US-based providers, cloud storage, even hosting infrastructure is a risk, especially after how using these services has been weaponized against the digital existence of certain organizations in the region that have been deplatformed or had their content removed on platforms like Meta and YouTube because their content doesn’t align with the foreign policy of the United States…so it raises a big question about how we look at digital independence, what is the spectrum of independence that civil society in the region can achieve, and in relation to what’s available as well.

Almasri also points to the role of researchers in the region:

There has been a lot more research on the political economy of surveillance technologies, so not only looking at how governments are using them, but their supply chain, who’s investing in these technologies, and how geopolitical networks empowered their proliferation in the hands of governments.

This is where studies looking at the political economy of AI and the military become important, trying to understand how this field of weapons, the military, and AI grew together as part of this global capitalist system rather than looking at these technologies in silos; that is, looking at the proliferation of these technologies from a geopolitical point of view, looking at the bigger ecosystem rather than zooming in to the specifics of it. I think this has been a big development in the way that we look at digital rights, and the way that digital rights have converged with and been integrated into the geopolitical scene.

As the global digital rights community continues to expand, it’s clear that the questions at its core are no longer just about access or expression, but about power—who holds it, how it is exercised, and who is left out of its protections. What began as a fight to keep the internet open has become a broader effort to reimagine it—an effort that is grappling with questions of infrastructure, ownership, and the global inequalities embedded in both.

And yet, despite the scale of these challenges, the movement’s strength lies in the solidarity, the ecosystems, and the networks it has spent more than a decade building. From the early days of the blogging and techie communities to the increasingly powerful digital rights community, advocates in the region have gone up against dictators, endured war and repression, yet remain determined to push forward.

Jillian C. York

EFF Submission to UN Report on the Role of Media in the Context of Israel’s Policies Toward Palestinians

1 week 5 days ago

The UN Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 recently announced a study addressing the killings and attacks against Palestinian journalists and media workers, the destruction of media infrastructure in Gaza, and the production and dissemination of narratives that may enable, justify, or incite international crimes. 

As part of this consultation, EFF contributed a submission that identifies a significant deterioration of press freedom and free expression in the period since October 2023, including an increase in censorship and a wave of killings of journalists, adding to an already pervasive censorship and surveillance regime for Palestinians.

In particular, concerns raised in our submission relate to:

  1. Government takedown requests 
  2. Disinformation and content moderation
  3. Attacks on internet infrastructure

Concerns about censorship in Palestine are ever increasing and have been raised in multiple international forums. Ending the deliberate digital isolation of the Palestinian people is critical to protecting fundamental human rights.

Read the briefing in full here.

Paige Collings