🌜 A voice cries out under the crescent moon...

2 days 23 hours ago

EFF needs your help to defend privacy and free speech online. Learn why you're crucial to the fight in this edition of campfire tales from our friends, The Encryptids. These cunning critters have come out of hiding to help us celebrate EFF’s summer membership drive for internet freedom.

Through EFF's 34th birthday on July 10, you can be a member for just $20 and receive 2 rare gifts (including a Bigfoot enamel pin!), and, as a bonus, new recurring monthly or annual donations get a free match! Join us today.

Today’s post comes from international vocal icon Banshee. She may not be a beast like many cryptids, but she is a *BEAST* when it comes to free speech and local activism...

-Aaron Jue
EFF Membership Team



What’s that saying about being well behaved and making history? Most people picture me shrieking across the Irish countryside. It's a living, but my voice has real power: it can help me speak truth to power, and it can lend support to the people in my communities.

Free expression is a human right, full stop. And it’s tough to get it right on the internet. Just look at messy content moderation from social media giants. Or the way politicians, celebrities, and companies abuse copyright and trademark law to knock their critics offline. And don’t get me started on repressive governments cutting the internet during protests. Censorship hits disempowered groups the hardest. That’s why I raise my voice to prop up the people around me, and why EFF is such an important ally in the fight to protect speech in the modern world.


The things you create, say, and share can change the world, and there’s never been a better megaphone than the internet. A free web carries your voice whether your cause is the environment, workers’ rights, gender equality, or your local parent-teacher group. For all the sewage that people spew online, we must fight back with better ideas and a brighter vision for the future.

EFF’s lawyers, policy analysts, tech experts, and activists know free speech, creativity, and privacy online better than anyone. Hell, EFF even helped establish computer code as legally protected speech back in the 90s. I hope you’ll use your compassion to protect our freedom online with even a small donation to EFF (or even start a monthly donation!).


So the next time someone tells you that you’re being shrill, remind them to STFU because you have something to say. And be grateful that people around the world support EFF to protect our rights online.

Down for the Cause,



Banshee

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

For The Bragging Rights: EFF’s 16th Annual Cyberlaw Trivia Night

2 days 23 hours ago

This post was authored by the mysterious Raul Duke.

The weather was unusually cool for a summer night. Just the right amount of bitterness in the air for attorneys from all walks of life to gather in San Francisco’s Mission District for EFF’s 16th annual Cyberlaw Trivia Night.

Inside Public Works, attorneys filled their plates with chicken and waffles, grabbed a fresh tech-inspired cocktail, and found their tables—ready to compete against their colleagues in obscure tech law trivia. The evening started promptly six minutes late, at 7:06 PM PT, with Aaron Jue, EFF's Director of Member Engagement, introducing this year’s trivia tournament.

A lone Quizmaster, Kurt Opsahl, took the stage, noting that his walk-in was missing a key component, until The Blues Brothers started playing, filling the quizmaster with the valor to thank EFF’s intern fund supporters Fenwick and Morrison Foerster. The judges begrudgingly took the stage as the quizmaster reminded them that they had jobs to do at this event.

One of the judges, EFF’s Civil Liberties Director David Greene, gave some fiduciary advice to the several former EFF interns who were in the crowd. It was anyone’s guess as to whether they had gleaned any inside knowledge about the trivia.

I asked around as to what the attorneys had to gain by participating in this trivia night. I learned that not only were bragging rights on the table, but teams also had a chance to win champion steins.

The prizes: EFF steins!

With formalities out of the way, the first round of trivia, “General,” started with a possibly rousing question about the right to repair. Round one ended with the eighth question, which included a major typo calling the “Fourth Amendment is Not for Sale Act” the “First Amendment...” The proofreaders responsible for this mistake have been dealt with.

I was particularly struck by the names of each team: “Run DMCA,” “Ineffective Altruists,” “Subpoena Colada,” “JDs not LLMs,” “The little VLOP that could,” and “As a language model, I can't answer that question.” Who knew attorneys could come up with such creative names?

I asked one of the lawyers if he could give me legal advice on a personal matter (I won’t get into the details here, but it concerns both maritime law and equine law). The lawyer gazed at me with the same look one gives a child who has just proudly thrown their food all over the floor. I decided to drop the matter.

Back to the event. It was a close game until the sixth and final round, though we wouldn’t hear the final winners until after the tiebreaker questions.

After several minutes, the tiebreaker was announced. The prompt: which team could get closest to pi without going over? This sent your intrepid reporter into an existential crisis. Could one really get to the end of pi? I’m told you could get to Pluto with just the first four digits, and I saw no reason to go further than that. During my descent into madness, it was revealed that team “JDs not LLMs” knew 22 digits of pi.

After that shocking revelation, the final results were read, with the winning trivia masterminds being:

1st Place: JDs not LLMs

2nd Place: The Little VLOP That Could

3rd Place: As A Language Model, I Can't Answer That Question

EFF Membership Advocate Christian Romero taking over for Raul Duke.

EFF hosts Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for tech users. Among the many firms that dedicate their time, talent, and resources to the cause, we would especially like to thank Fenwick and Morrison Foerster for supporting EFF’s Intern Fund!

If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist.

Are you interested in attending or sponsoring an upcoming EFF Trivia Night? Please reach out to tierney@eff.org for more information.

Be sure to check EFF’s events page and mark your calendar for next year’s 17th annual Cyberlaw Trivia Night.

Christian Romero

Opposing a Global Surveillance Disaster | EFFector 36.8

3 days 23 hours ago

Join EFF on a road trip through the information superhighway! As you choose the perfect playlist for the trip, we'll share our findings about the latest generation of cell-site simulators; offer security tips for protesters at college campuses; and rant about the surveillance abuses that could come from the latest UN Cybercrime Convention draft.

As we reach the end of our road trip, know that you can stay up-to-date on these issues with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:


EFFECTOR 36.8 - Opposing A Global Surveillance Disaster

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Police Are Using Drones More and Spending More for Them

4 days 1 hour ago

Police in Minnesota are buying and flying more drones than ever before, according to an annual report recently released by the state’s Bureau of Criminal Apprehension (BCA). Minnesotan law enforcement flew their drones without a warrant 4,326 times in 2023, racking up a statewide expense of over $1 million. That marks a 41 percent increase from 2022, when departments across the state used drones 3,076 times and spent $646,531.24 on them. The data show that more was spent on drones last year than in the previous two years combined. The Minneapolis Police Department, the state’s largest, implemented a new drone program at the end of 2022 and reported that its 63 warrantless flights in 2023 cost nearly $100,000.

Since 2020, the state of Minnesota has been obligated to put out a yearly report documenting every time and reason law enforcement agencies in the state — local, county, or statewide — used unmanned aerial vehicles (UAVs), more commonly known as drones, without a warrant. This is partly because Minnesota law requires a warrant for law enforcement drone use except in specific situations listed in the statute. The State Court Administrator is also required to publicly report the number of warrants issued for the use of UAVs and the data gathered by them. These regular reports give us a glimpse into how, and how often, police are actually using these devices. As more and more police departments around the country adopt drones or experiment with drones as first responders, Minnesota offers an example of how transparency around drone adoption can be done.

You can read our blog about the 2021 Minnesota report here.

According to EFF’s Atlas of Surveillance, 130 of Minnesota’s 408 law enforcement agencies have drones. Of the Minnesota agencies known to have drones prior to this month’s report, 29 of them did not provide the BCA with 2023 use and cost data.

One of the more revealing aspects of drone deployment provided by the report is the purpose for which police are using them. The vast majority of uses, almost three-quarters of all flights, were related either to obtaining an aerial view of incidents involving injuries or death, like car accidents, or to police training and public relations purposes.

Are drones really just a $1 million training tool? We’ve argued many times that tools deployed by police for very specific purposes often find punitive uses that reach far beyond their original, possibly more innocuous, intention. In the case of Minnesota’s drone usage, that can be seen in the other exceptions to the warrant requirement, such as surveilling a public event where there’s a “heightened risk” to participant security. The warrant requirement is meant to prevent aerial surveillance from violating civil liberties, but these exceptions open the door to surveillance of First Amendment-protected gatherings and demonstrations.

Matthew Guariglia

New ALPR Vulnerabilities Prove Mass Surveillance Is a Public Safety Threat

5 days 20 hours ago

Government officials across the U.S. frequently promote the supposed, and often anecdotal, public safety benefits of automated license plate readers (ALPRs), but rarely do they examine how this very same technology poses risks to public safety that may outweigh the harms of the crimes it is meant to address in the first place. When law enforcement uses ALPRs to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats.

The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials.

To give a sense of the scale of the data collected with ALPRs, EFF found that just 80 California agencies using primarily Vigilant technology collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates. An EFF analysis from 2021 found that 99.9% of this data is unrelated to any public safety interest when it's collected. If accessed by malicious parties, the information could be used to harass, stalk, or even extort innocent people.

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems. And while a person can turn off their phone if they are engaging in a sensitive activity, such as visiting a reproductive health facility or attending a protest, tampering with your license plate is a crime in many jurisdictions. Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology.

It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote in Police Chief magazine this month that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial."

That partially explains why more than 125 law enforcement agencies reported data breaches or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023.

Yet the temptation to "collect it all" continues to overshadow the responsibility to "protect it all." What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs.

In 2015, building off the previous work of University of Arizona researchers, EFF published an investigation that found more than 100 ALPR cameras in Louisiana, California, and Florida were connected to the internet unsecured, many with publicly accessible websites that anyone could use to manipulate the controls of the cameras or siphon off data. Just by visiting a URL, a malicious actor, without any specialized knowledge, could view live feeds of the cameras, including one that could be used to spy on college students at the University of Southern California. Some of the agencies involved fixed the problem after being alerted to it. However, 3M, which had recently bought the ALPR manufacturer PIPS Technology (which has since been sold to Neology), claimed zero responsibility for the problem, saying instead that it was the agencies' responsibility to manage the devices' cybersecurity. "The security features are clearly explained in our packaging," they wrote. Four years later, TechCrunch found that the problem still persisted.

In 2019, Customs & Border Protection's vendor providing ALPR technology for Border Patrol checkpoints was breached, with hackers gaining access to 105,000 license plate images, as well as more than 184,000 images of travelers from a face recognition pilot program. Some of those images made it onto the dark web, according to reporting by journalist Joseph Cox.

If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage. The initial discovery was made by the Michigan State Police's Michigan Cyber Command Center, which passed the information on to CISA, which then worked with Motorola Solutions to address the problems.

The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices: two of medium severity and five of high severity.

One of the most severe vulnerabilities (given a score of 8.6 out of 10) was that every camera sold by Motorola shipped with a wifi network turned on by default, using the same hardcoded password as every other camera. That means if someone found the password to one camera, they could connect to any other camera they were near.

Someone with physical access to the camera could also easily install a backdoor, which would allow them access to the camera even if the wifi was turned off. An attacker could even log into the system locally using a default username and password. Once connected to a camera, they would be able to see live video and control the camera, even disable it. Or they could view historic recordings of license plate data stored without any kind of encryption. They would also see logs containing authentication information, which could be used to connect to a back-end server where more information is stored. Motorola claims that it has mitigated all of these vulnerabilities.

When vulnerabilities are found, it's not enough for them to be patched: They must be used as stark warnings for policymakers and the courts. Following EFF's report in 2015, Louisiana Gov. Bobby Jindal spiked a statewide ALPR program, writing in his veto message:

Camera programs such as these that make private information readily available beyond the scope of law enforcement, pose a fundamental risk to personal privacy and create large pools of information belonging to law abiding citizens that unfortunately can be extremely vulnerable to theft or misuse.

In May, a Norfolk Circuit Court Judge reached the same conclusion, writing in an order suppressing the data collected by ALPRs in a criminal case:

The Court cannot ignore the possibility of a potential hacking incident either. For example, a team of computer scientists at the University of Arizona was able to find vulnerable ALPR cameras in Washington, California, Texas, Oklahoma, Louisiana, Mississippi, Alabama, Florida, Virginia, Ohio, and Pennsylvania. (Italics added for emphasis.) … The citizens of Norfolk may be concerned to learn the extent to which the Norfolk Police Department is tracking and maintaining a database of their every movement for 30 days. The Defendant argues “what we have is a dragnet over the entire city” retained for a month and the Court agrees.

But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife. Meanwhile, recently the Orrville (Ohio) Police Department released a driver's raw ALPR scans to a total stranger in response to a public records request, 404 Media reported.

Public safety agencies must resist the allure of marketing materials promising surveillance omniscience, and instead collect only the data they need for actual criminal investigations. They must never store more data than they can adequately protect within their limited resources, or they must keep the public safe from data breaches by not collecting the data at all.

Dave Maass

California Lawmakers Should Reject Mandatory Internet ID Checks

6 days ago

California lawmakers are debating an ill-advised bill that would require internet users to show their ID in order to look at sexually explicit content. EFF has sent a letter to California legislators encouraging them to oppose Assembly Bill 3080, which would have the result of censoring the internet for all users. 

If you care about a free and open internet for all, and are a California resident, now would be a good time to contact your California Assemblymember and Senator and tell them you oppose A.B. 3080. 

Adults Have The Right To Free And Anonymous Internet Browsing

If A.B. 3080 passes, it would make it illegal to show websites with one-third or more “sexually explicit content” to minors. These “explicit” websites would join a list of products or services that can’t be legally sold to minors in California, including things like firearms, ammunition, tobacco, and e-cigarettes. 

But these things are not the same, and should not be treated the same under state or federal law. Adults have a First Amendment right to look for information online, including sexual content. One of the reasons EFF has opposed mandatory age verification is because there’s no way to check ID online just for minors without drastically harming the rights of adults to read, get information, and speak and browse online anonymously.

As EFF explained in a recent amicus brief on the issue, collecting ID online is fundamentally different—and more dangerous—than in-person ID checks in the physical world. Online ID checks are not just a momentary display—they require adults “to upload data-rich, government-issued identifying documents to either the website or a third-party verifier” and create a “potentially lasting record” of their visit to the establishment. 

The more information a website collects about visitors, the more chances there are for such data to get into the hands of a criminal or other bad actor, a marketing company, or someone who has filed a subpoena for it. So-called “anonymized” data can be reassembled, especially when it consists of data-rich government ID together with browsing data like IP addresses. 

Data breaches are a fact of life. Once governments insist on creating these ID logs for visiting websites with sexual content, those data breaches will become more dangerous. 

This Bill Mandates ID Checks For A Wide Range Of Content 

The bar is set low in this bill. It’s far from clear which websites prosecutors will consider to have one-third content that’s not appropriate for minors, as that can vary widely by community and even family standards. The bill will surely rope in general-use websites that allow some explicit content. A sex education website for high-school seniors, for instance, could be considered “offensive” and lacking in educational value for young minors.

Social media sites, online message forums, and even email lists may have some portion of content that isn’t appropriate for younger minors, but also a large amount of general-interest content. A bill like California’s, which requires ID checks for any site with one-third content that prosecutors deem explicit, is similar to having Netflix require ID checks at login, whether a user wants to watch a G-rated movie or an R-rated movie.

Adults’ Right To View Websites Of Their Choice Is Settled Law 

U.S. courts have already weighed in numerous times on government efforts to age-gate content, including sexual content. In Reno v. ACLU, the Supreme Court struck down the anti-indecency provisions of the Communications Decency Act, a 1996 law that was intended to keep “obscene or indecent” material away from minors.

The high court again considered the issue in 2004 in Ashcroft v. ACLU, when it found that a federal law of that era, which sought to impose age-verification requirements on sexual online content, was likely unconstitutional.

Other States Will Follow 

In the past year, several other state legislatures have passed similar unwise and unconstitutional “online ID check” laws. Those laws now face legal challenges working their way through the courts, including a Texas age verification law that EFF has asked the Supreme Court to review.

Elected officials in many other states, however, wisely refused to enact mandatory online ID laws, including Minnesota, Illinois, and Wisconsin. In April, Arizona’s governor vetoed a mandatory ID-check bill that was passed along partisan lines in her state, stating that the bill “goes against settled case law” and insisting any future proposal must be bipartisan and also “work within the bounds of the First Amendment.” 

California is not only the largest state, it is the home of many of the nation’s largest creative industries. It has also been a leader in online privacy law. If California passes A.B. 3080, it will be a green light to other states to pass online ID-checking laws that are even worse. 

Tennessee, for instance, recently passed a mandatory ID bill that includes felony penalties for anyone who “publishes or distributes” a website with one-third adult content. Tennessee’s fiscal review committee estimated that the state will incarcerate one person per year under this law, and has budgeted accordingly. 

California lawmakers have a chance to restore some sanity to our national conversation about how to protect minors online. Mandatory ID checks, and fines or incarceration for those who fail to use them, are not the answer. 


Joe Mullin

How to Clean Up Your Bluesky Feed

6 days 1 hour ago

In our recent comparison of Mastodon, Bluesky, and Threads, we detail a few of the ways the similar-at-a-glance microblogging social networks differ, and one of the main distinctions is how much control you have over what you see as a user. We’ve detailed how to get your Mastodon feed into shape before, and now it’s time to clean up your Bluesky feed. We’ll do this mostly through its moderation tools.

Currently, Bluesky is mostly a single experience that operates on one set of flagship services run by the Bluesky corporation. As the AT Protocol expands and decentralizes, so will the variety of moderation and custom algorithmic feed options. But for the time being, we have Bluesky.

Bluesky’s current moderation filters operate on two levels: the default options built into the Bluesky app, and community-created filters called “labelers”. The company’s default labelers hide the sorts of things we’re all used to having restricted on social networks, like spam or adult content. They also hide other categories by default, like engagement farming and certain extremist views. Community options use Bluesky’s own moderation tool, Ozone, and are built on exactly the same system as the company’s default ones; the only difference is which ones are built into the app. All this choice ends up being both powerful and overwhelming. So let’s walk through how to use it to make your Bluesky experience as good as possible.

Familiarize Yourself with Bluesky’s Moderation Tools

Bluesky offers several ways to control what appears in your feed: labeling and curation tools to hide (or warn about) the content of a post, and tools to block accounts from your feed entirely. Let’s start with customizing the content you see.

Get to Know Bluesky’s Built-In Settings

By default, Bluesky offers a basic moderation tool that allows you to show, hide, or warn about a range of content related to everything from topics like self-harm, extremist views, or intolerance, to more traditional content moderation like security concerns, scams, or inauthentic accounts.

This build-your-own filter approach is different from other social networks, which tend to control moderation on a platform level, leaving little up to the end user. It gives you control over what you see in your feed, but it can also be overwhelming to wrap your head around. We suggest popping into the moderation screen to see how it’s set up, and tweaking any options you’d like:

Tap > Settings > Moderation > Bluesky Moderation Service to get to the settings. You can choose from three display options for each type of post: off (you’ll see it), warn (you’ll get a warning before you can view the post), or hide (you won’t see the post at all).

There’s no way currently to entirely opt out of Bluesky’s defaults, though the company does note that any separate client app (i.e., not the official Bluesky app) can set up its own rules. However, you can subscribe to custom label sets to layer on top of the Bluesky defaults. These labels are similar to the Block Together tool formerly supported by Twitter, and allow individual users or communities to create their own moderation filters. As with the default moderation options, you can choose to have anything that gets labeled hidden, or to see a warning when it’s flagged. These custom services can include all sorts of highly specific labels, like whether an image is suspected to be made with AI, or includes content that may trigger phobias (like spiders), and more. There’s currently no way to easily search for these labeling services, but Bluesky notes a few here, and there’s a broad list here.

To enable one of these, search for the account name of a labeler, like “@xblock.aendra.dev” and then subscribe to it. Once you subscribe, you can toggle any labeling filters the account offers. If you decide you no longer want to use the service or you want to change the settings, you can do so on the same moderation page noted above.


Build Your Own Mute and Block Lists (or Subscribe to Others)

Custom moderation and labels don’t replace one of the most common tools in all of social media: the ability to block accounts entirely. Here, Bluesky offers something new with the old, though. Not only can you block and mute users, you can also subscribe to block lists published by other users, similar to tools like Block Party.

To mute or block someone, tap their user profile picture to get to their profile, then the three-dot icon, then choose “Mute Account” (they won’t appear in your feed, but they can still see yours) or “Block Account” (they won’t appear in your feed and they can’t view yours). Note that your list of muted accounts is private, while your list of blocked accounts is public: anyone can see who you’ve blocked, but not who you’ve muted.

You can also use built-in algorithmic tools like muting specific words or phrases. Tap > Settings > Moderation and then tap “Mute words & tags.” Type in any word or phrase you want to mute, select whether to mute it when it appears in “text & tags” or in “tags only,” and then it’ll be hidden from your feed.
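Conceptually, a mute-word filter is just a predicate applied to each post before it's rendered. The sketch below is purely illustrative (the function names and data model are hypothetical, not Bluesky's actual implementation), but it shows the practical difference between the "text & tags" and "tags only" scopes: the latter only hides a post when the muted word appears as a hashtag.

```python
import re


def extract_tags(text):
    # Collect hashtags like "#spoilers" from the post body (lowercased).
    return {t.lower() for t in re.findall(r"#(\w+)", text)}


def should_hide(post_text, muted_word, scope="text & tags"):
    """Return True if a post should be hidden for this muted word.

    scope is "text & tags" (match anywhere in the post) or
    "tags only" (match only against the post's hashtags).
    Hypothetical helper for illustration only.
    """
    word = muted_word.lower()
    if scope == "tags only":
        return word in extract_tags(post_text)
    # "text & tags": match the word anywhere in the post body.
    return word in post_text.lower()


# A post that mentions a word only inside a hashtag:
post = "No plot details here, I promise #spoilers"
print(should_hide(post, "spoilers", scope="tags only"))    # True
print(should_hide(post, "election", scope="text & tags"))  # False
```

The upshot for users: "tags only" is the gentler setting, since it won't hide ordinary conversation that merely mentions the word.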

Users can also experiment with more elaborate algorithmic curation options, such as using tools like Blacksky to completely reshape your feed.

If all this manual work makes you tired, then mute lists might be the answer. These are curated lists made by other Bluesky users that mass mute accounts. These mute lists, unlike muted accounts, are public, though, so keep that in mind before you create or sign up for one.

As with community-run moderation services, there’s not currently a great way to search for these lists. To sign up for a mute list, you’ll need to know the username of someone who has created a block or mute list that you want to use. Search for their profile, tap the “Lists” option from their profile page, tap the list you’re interested in, then “Subscribe.” Confusingly, from this screen, a “List” can be either a feed of posts you want to see (like a list of “people who work at EFF”) or a block or mute list. If it’s referred to as a “user list” and has the option to “Pin to home,” it’s a feed you can follow; otherwise it’s a mute or block list.


Clean Up Your Timeline

Does some strange design decision in the app make you question why you use it? Perhaps you hate seeing reposts? Bluesky offers several ways to choose how information is displayed, which can make the app much easier to use. These are essentially custom algorithms, which Bluesky calls “Feeds,” that filter and focus your content however you want.

Subscribe to (or Build Your Own) Custom Feeds


Unlike most social networks, Bluesky gives you control over the algorithm that displays content. By default you get a chronological feed, but you can pick and choose from other options using custom feeds. These let you tinker with your feed, create entirely new ones, and more. A custom feed can show very specific types of posts, like only mutuals (people who follow you back), quiet posters (people who don’t post much), news organizations, or just photos of cats. Here, unlike with some of the other custom tools, Bluesky does at least provide a way to search for feeds to use.

Tap > Settings > Feeds. You’ll find a list of your current feeds here, and if you scroll down you’ll find a search bar to look for new ones. These range from as broad as “Posters in Japan” to as focused as “Posts about Taylor Swift.” Once you pick a few, these custom feeds will appear at the top of your main timeline. If you ever want to rearrange the order they appear in, head back to the Feeds page, then tap the gear icon in the top-right to reach a screen where you can reorder them. If you’re still struggling to find useful feeds, this search engine might help.

Customize How Replies Work, and Other Little Things in Your Feed


Bluesky has one last trick that makes it a little nicer to use than other social networks: the amount of control you get over your main “Following” feed. From your feed, tap the controls icon in the top right to reach the “Following Feed Preferences” page.

Here, you can do everything from hiding replies to controlling which replies you do see (like only replies to posts from people you follow, or only replies to posts with more than two replies). You can also hide reposts and quote posts, and even allow posts from some of your custom feeds to be injected into your main feed. For example, if you enable the “Show Posts from My Feeds” option and have subscribed to “Quiet Posters,” you’ll occasionally see a post from someone you follow outside of strictly chronological order.

Final bonus tip: enable two-factor authentication. Bluesky rolled out email-based two-factor authentication well after many people signed up, so if you’ve never looked at your settings, you probably never noticed it was offered. We suggest you turn it on to better secure your account. Head to > Settings, then scroll down to “Require email code to log into your account,” and enable it.

Phew. If that all felt a little overwhelming, that’s because it is. Sure, many people can sign up for Bluesky and never touch any of this, but for those who want a safe, customizable experience, the whole thing feels a bit too crunchy in its current state. And while this sort of user empowerment, with so many levers to control the content, is great, it’s also a lot. The good news is that Bluesky’s defaults are currently good enough to get started. But one of the benefits of community-based moderation, as we see on Mastodon or certain subreddits, is that volunteers do a lot of this heavy lifting for everyone. The AT Protocol is still new, however, and perhaps as more developers shape its future through new tools and services, these difficulties will be eased.

Thorin Klosowski

What’s the Difference Between Mastodon, Bluesky, and Threads?

6 days 1 hour ago

The ongoing Twitter exodus sparked life into a new way of doing social media. Instead of a handful of platforms trying to control your life online, people are reclaiming control by building more open and empowering approaches to social media. You may have heard of some of these: Mastodon, Bluesky, and Threads. Each is distinct, but their differences can be hard to understand because they’re rooted in different technical approaches.

The mainstream social web arguably became “five websites, each consisting of screenshots of text from the other four,” but in just the last few years, radical and controversial changes to major platforms were a wake-up call for many, driving people to seek alternatives to these billionaire-driven monocultures.

Two major ecosystems have emerged in the wake, both encouraging the variety and experimentation of the earlier web. The first, built on the ActivityPub protocol, is called the Fediverse. While it includes many different kinds of websites, Mastodon and Threads have taken off as Twitter alternatives that use this protocol. The other ecosystem is built on the AT Protocol, which powers the Twitter alternative Bluesky.

These protocols, a shared language between computer systems, allow websites to exchange information. It’s a simple concept you’re benefiting from right now, as protocols enable you to read this post in your choice of app or browser. Opening this freedom to social media has a huge impact, letting everyone send and receive posts their own preferred way. Even better, these systems are open to experiment and can cater to every niche, while still connecting to everyone in the wider network. You can leave the dead malls of platform capitalism, and find the services which cater to you.

To save you some trial and error, we have outlined some differences between these options and what that might mean for them down the road.

ActivityPub and AT Protocols

ActivityPub

The Fediverse goes a bit further back, but ActivityPub’s development by the World Wide Web Consortium (W3C) started in 2014. The W3C is a public-interest nonprofit organization that has played a vital role in developing the open international standards that define the internet, like HTML and CSS (for better or worse). Its commitment to ActivityPub gives some assurance the protocol will be developed in a stable and ostensibly consensus-driven process.

This protocol requires a host website (often called an “instance”) to maintain an “inbox” and “outbox” of content for all of its users, and selectively share this with other host websites on behalf of the users. In this federation model users are accountable to their instance, and instances are accountable to each other. Misbehaving users are banned from instances, and misbehaving instances are cut off from others through “defederation.” This creates some stakes for maintaining good behavior, for users and moderators alike.
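The inbox/outbox federation model described above can be sketched in a few lines of Python. This is a toy illustration of the idea only, not a spec-complete ActivityPub implementation; the class, method, and instance names here are invented for the example:

```python
# Toy model of ActivityPub-style federation: each instance keeps
# per-user inboxes and forwards activities to follower hosts.
class Instance:
    def __init__(self, domain):
        self.domain = domain
        self.inboxes = {}      # user -> list of received activities
        self.blocked = set()   # defederated domains

    def deliver(self, activity, from_domain):
        # A misbehaving instance that has been defederated is ignored.
        if from_domain in self.blocked:
            return False
        self.inboxes.setdefault(activity["to"], []).append(activity)
        return True

def publish(author, text, home, followers):
    """Send a post from the author's outbox to each follower's host."""
    activity = {"type": "Note", "actor": author, "content": text}
    for user, host in followers:
        host.deliver({**activity, "to": user}, from_domain=home.domain)

a = Instance("mastodon.social")
b = Instance("example.club")
b.blocked.add("badactor.example")  # defederation in action
publish("alice", "Hello, Fediverse!", a, [("bob", b)])
print(b.inboxes["bob"][0]["content"])  # Hello, Fediverse!
```

The point of the sketch is the accountability structure: delivery always passes through the recipient's host, which is where both user bans and instance-level defederation take effect.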

ActivityPub handles a wide variety of uses, but the application most associated with the protocol is Mastodon. ActivityPub is also integral to Meta’s own Twitter alternative, Threads, which is taking small steps to connect with the Fediverse. Threads is a totally different application, solely hosted by Meta, and is ten times bigger than the Fediverse and Bluesky networks combined—making it the 500-pound gorilla in the room. Meta’s poor reputation on privacy, moderation, and censorship has driven many Fediverse instances to vow to defederate from Threads. Other instances may still connect with Threads to help users find a broader audience, and perhaps help sway Threads users to try Mastodon instead.

AT Protocol

The Authenticated Transfer (AT) Protocol is newer, sparked by Twitter co-founder Jack Dorsey in 2019. Like ActivityPub, it is an open source protocol. However, it is developed unilaterally by a private for-profit corporation, Bluesky PBLLC, though it may be handed to a web standards body in the future. Bluesky remains mostly centralized: while it has recently opened up to small hosts, some restrictions still prevent major alternatives from participating. As developers further loosen control, we will likely see rapid changes in how people use the network.

The AT Protocol network design doesn’t put the same emphasis on individual hosts as the Fediverse does, and it breaks hosting, distribution, and curation into distinct services. It’s easiest to understand in comparison to traditional web hosting. Your information, like posts and profiles, is held in Personal Data Servers (PDSes)—analogous to the hosting of a personal website. This content is then fetched by relay servers, which, like web crawlers, aggregate a “firehose” of everyone’s content without much alteration. To sort and filter this on behalf of the user, like a “search engine,” AT has Appview services, which give users control over what they see. When accessing an Appview through a client app or website, the user has many options to further filter, sort, and curate their feed, as well as “subscribe” to filters and labels someone else made.
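That pipeline, hosting (PDS) to aggregation (relay) to curation (Appview), can be modeled roughly as three small functions. The names and structures below are illustrative assumptions, not actual AT Protocol APIs:

```python
# Rough model of AT Protocol's split between hosting, distribution,
# and curation: each service does exactly one job.
class PDS:
    """Holds a single user's own records, like a personal web host."""
    def __init__(self, user, posts):
        self.user, self.posts = user, posts

def relay_firehose(pdses):
    """Relays crawl every PDS and emit one combined 'firehose'."""
    return [(p.user, post) for p in pdses for post in p.posts]

def appview(firehose, muted_words):
    """Appviews sort and filter the firehose on the user's behalf."""
    return [(user, post) for user, post in firehose
            if not any(w in post.lower() for w in muted_words)]

pdses = [PDS("alice", ["cat photos!"]), PDS("bob", ["hot sports takes"])]
feed = appview(relay_firehose(pdses), muted_words={"sports"})
print(feed)  # [('alice', 'cat photos!')]
```

Note that filtering happens at the Appview stage, after aggregation: the relay carries everything, and each user's view is shaped at the end of the pipeline.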

The result is a decentralized system that can be highly tailored while still offering global reach. However, this atomized design may lack the community accountability encouraged by the host-centered system, leaving users ultimately responsible for their own experience and moderation. Much will depend on how the network opens up to major hosts other than the Bluesky corporation.

User Experience

Mastodon, Threads, and Bluesky have a number of differences, not essential to their underlying protocols, that affect users looking to get involved today. Mastodon and Bluesky are very customizable, so these differences just address the prevalent trends.

Timeline Algorithm

Mastodon and most other ActivityPub sites prefer a straightforward timeline of content from accounts you follow. Threads has a Meta-controlled algorithm, like Instagram. Bluesky defaults to a chronological feed, but opens algorithmic curation and filtering up to apps and users.

User Design

All three services present a default appearance that will be familiar to anyone who has used Twitter. Both Mastodon and Bluesky support alternative clients, with the only limit being a developer’s imagination. In fact, thanks to their open nature, projects like SkyBridge let users of one network use apps built for the other (in this case, Bluesky users using Mastodon apps). Threads has no alternative clients and requires a developer API, which is still in beta.


Onboarding

Threads has the greatest advantage in getting people to sign up: it has only one site, and it accepts an Instagram account as a login. Bluesky also has only one major option for signing up, but has some inherent flexibility for moving your account later on. That said, diving into a few extra setup steps can improve the experience. Finally, one could easily join Mastodon by joining the flagship instance, mastodon.social. However, given the importance of choosing the right instance, you may miss out on some of the benefits of the Fediverse and want to move your account later on.


Culture

Threads has a reputation for being more brand-focused, with more commercial accounts and celebrities, and Meta has made no secret of its decision to deemphasize political posts on the platform. Bluesky is often compared to early Twitter, with a casual tone and a focus on engaging with friends. Mastodon draws more people looking for community online, especially around shared interests, and each instance has its own distinct norms.

Privacy Considerations

Neither ActivityPub nor the AT Protocol currently supports private, end-to-end encrypted messages, so neither should be used for sensitive information. For all the services here, the majority of content on your profile will be accessible from the public web. That said, Mastodon, Threads, and Bluesky differ in how they handle user data.


Mastodon

Everything you do as a user is entrusted to the instance host, including posts, interactions, DMs, settings, and more. This means the owner of your instance can access this information and is responsible for defending it against attackers and law enforcement. Tech-savvy people may choose to self-host, but users generally need to find an instance run by someone they trust.

The Fediverse muffles content sharing through a myriad of permissions set by users and instances. If your instance blocks a poorly moderated instance, for example, the people on that other site will no longer appear in your timeline nor be able to follow your posts. You can also limit how messages are shared to further reduce the intended audience. While this can create a sense of community and closeness, remember that it is still public, and instance hosts are always part of the equation. Direct messages, for example, will be accessible to your host and the recipient’s host.

If content needs to be changed or deleted after being shared, your instance can request these changes, and this is often honored. That said, once something is shared to the network, it may be difficult to “undo.”


Threads

All user content is entrusted to one host, in this case Meta, with a privacy policy similar to Instagram’s. Meta determines when information is shared with law enforcement, how it is used for advertising, how well it is protected from a breach, and so on.

Sharing with instances works differently for Threads, as Meta has more restricted interoperability. Currently, content sharing is one-way: Threads users can opt in to sharing their content with the Fediverse, but they won’t see likes or replies. By the end of this year, Meta plans to let Threads users follow accounts on Mastodon.

Federation on Threads may always be restricted, and features like transferring one’s account to Mastodon may never be supported. Limits on sharing should not be confused with enhanced privacy or security, however. Public posts are just that—public—and you are still trusting your host (Meta) with private data like DMs (currently handled by Instagram). Instead, these restrictions, should they persist, should be seen as the minimum level of control over users that Meta deems necessary.


Bluesky

Bluesky, in contrast, is a very “loud” system. Every public message, interaction, follow, and block is hosted by your PDS and freely shared with everyone in the network. Every public post is available to everyone, discovered according to each user’s own app and filter preferences. There are ways to algorithmically imitate smaller spaces with filtering and algorithmic feeds, such as with the Blacksky project, but these are open to everyone, and your posts will not be restricted to that curated space.

Direct messages are limited to the flagship Bluesky app and can be accessed by the Bluesky moderation team. The project plans to eventually incorporate DMs into the protocol and add end-to-end encryption, but neither is currently supported. Deletion on Bluesky simply removes the content from your PDS, but once a message has been shared to Relay and Appview services, it may remain in circulation a while longer, according to their retention settings.

Moderation

Mastodon

Mastodon’s approach to moderation is often compared to subreddits, where the administrators of an instance are responsible for creating a set of rules and empowering a team of moderators to keep the community healthy. The result is a lot more variety in moderation experience, with the only boundary being an instance’s reputation in the broader Fediverse. Instances coordinating and “defederating” from problematic hosts has already been effective in the Fediverse. One former instance, Gab, was successfully cut off from the Fediverse for hosting extreme right-wing hate. The threat of defederation sets a baseline of behavior across the Fediverse, and from there users can choose instances based on reputation and on how aligned the hosts are with their own moderation preferences.

At their best, instances prioritize things other than growth. New members are welcomed and onboarded carefully as community members, and hosts only grow the community if their moderation team can support it. Some instances even set a permanent cap on participation at a few thousand members to ensure a quality, intimate experience. Current members, too, can vote with their feet, and if needed, split off into their own new instance without disconnecting entirely.

While Mastodon has a lot going for it by giving users a choice, avoiding automation, and avoiding unsustainable growth, there are other evergreen moderation issues at play. Decisions can be arbitrary, inconsistent, and come with little recourse. These aren’t just decisions affecting individual users; when it comes to defederation, they affect large swaths of them.


Threads

Threads, as alluded to in the privacy discussion above, aims for a moderation approach more aligned with pre-2022 Twitter and Meta’s other current platforms like Instagram. That is, the impossible task of scaling moderation alongside an endless growth of users.

As the largest of these services, however, Meta is in a position to set norms around moderation as it enters the Fediverse. A challenge for decentralized projects will be ensuring that Meta’s size doesn’t make it the ultimate authority on moderation decisions, a pattern of re-centralization we’ve seen happen with email. Spam detection tools have created an environment where email, though an open standard, is in practice dominated by Microsoft and Google, as smaller services are frequently marked as spammers. A similar dynamic could play out on the federated social web, where Meta has the capacity to exclude smaller instances with little recourse. Other instances may copy these decisions, or fear not doing so lest they also be excluded.


Bluesky

While in beta, Bluesky received a lot of praise and criticism for its moderation. Up until recently, however, all moderation was handled by the centralized Bluesky company, not throughout the distributed AT network. The true nature of the network’s moderation structure is only now being tested.

The AT Protocol relies on labeling services, aka “labelers,” for moderation. These special accounts use Bluesky’s Ozone tool to label posts with small pieces of metadata. You can also filter accounts with block lists published by other users, much like the Block Together tool formerly available on Twitter. The Appview aggregating your feed uses these labels and block lists to filter content. Arbitrary and irreconcilable moderation decisions are still a problem, as are some of the risks of automated moderation, but the impact is lessened because users are not deplatformed and remain accessible to people with different moderation settings. This also means problematic users don’t go anywhere and can still follow you; they are just less visible.
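The key property of this label-and-filter model is that content is annotated rather than removed: every client sees the same firehose, and each user's own settings decide what gets hidden. A simplified sketch, where the label names and data layout are invented for illustration:

```python
# Simplified labeler model: moderation labels are metadata attached to
# posts; filtering happens per-user at view time, not at the source.
posts = [
    {"author": "carol", "text": "lovely day", "labels": set()},
    {"author": "mallory", "text": "spam spam", "labels": {"spam"}},
]

def render_feed(posts, hidden_labels, blocklist):
    """Hide posts whose labels the user filters, or whose author is blocked."""
    return [p["text"] for p in posts
            if not (p["labels"] & hidden_labels)
            and p["author"] not in blocklist]

# Two users, same data, different moderation settings:
print(render_feed(posts, hidden_labels={"spam"}, blocklist=set()))
print(render_feed(posts, hidden_labels=set(), blocklist={"mallory"}))
# Both print ['lovely day'], but mallory's post still exists in the
# network and stays visible to users with looser settings.
```

This is why a labeled user is never truly deplatformed: nothing is deleted upstream, so a client with no hidden labels and an empty blocklist would still render both posts.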

The AT network is censorship resistant, and, conversely, it is difficult to meaningfully ban users. To be propagated on the network, one only needs a PDS to host their account and at least one Relay to spread that information. Currently, Relays sit out of moderation, scanning only to restrict CSAM. In theory, Relays could act more like a Fediverse instance and more actively curate and moderate users. Even then, as long as one Relay carries a user, they remain part of the network. PDSes, much like web hosts, may also choose to remove controversial users, but even in those cases, PDSes are easy to self-host, even on a low-power computer.

As on the internet generally, removing content relies on the fragility of those targeted; with enough resources and support, a voice will remain online. Without user-driven approaches to limiting or deplatforming content (like defederation), Bluesky services may instead be targeted by censorship at the infrastructure level, such as by ISPs.

Hosting and Censorship

With any internet service, there are legal obligations that come with hosting user-generated content. No matter the size, hosts may need to contend with DMCA takedowns, warrants for user data, cyberattacks, blocking by authoritarian regimes, and other pressures from powerful interests. This decentralized approach to social media also relies on a shared legal protection for all hosts: Section 230. By ensuring they are not held liable for user-generated content, this law provides the legal protection necessary for these platforms to operate and innovate.

Given the differences in the size of hosts and their approach to moderation, it isn’t surprising that each of these platforms will address platform liability and censorship differently.


Mastodon

Instance hosts, even for small communities, need to navigate these legal considerations, as we outlined in our Fediverse legal primer. We have already seen some old patterns reemerge, with these smaller, often hobbyist hosts struggling to defend themselves from legal challenges and security threats. While larger hosts have resources to defend against these threats, an advantage of the decentralized model is that censors must play whack-a-mole across a large network where messages flow freely around the globe. Together, the Fediverse is set up to be quite good at keeping information safe from censorship, but individual users and accounts are very susceptible to targeted censorship efforts and will struggle to rebuild their presence.


Threads

Threads is the easiest to address, as Meta is already several platforms deep into addressing liability and speech concerns, and it has the resources to do so. Unlike Mastodon or Bluesky, it also needs to do so at a much larger scale, with a larger target on its back as the biggest platform, backed by a multi-billion-dollar company. The unique challenge for Threads, however, will be how Meta decides to handle content from the rest of the Fediverse. Threads users will also need to navigate the perks and pitfalls of sticking with a major host that has a spotty track record on censorship and disinformation.


Bluesky

Bluesky is not yet tested beyond the flagship Bluesky services and raises a lot more questions. PDSes, Relays, and even Appviews play some role in hosting and can be used with some redundancy. For example, your account on one PDS may be targeted, but the system is designed to make it easy for users to change hosts, self-host, or use multiple hosts while retaining one identity on the network.

Relays, in contrast, are more computationally demanding and may remain the most “centralized” service, a natural monopoly, since users have some incentive to follow the biggest relays. The result is a potential bottleneck susceptible to influence and censorship. However, if a wide variety of relays with different incentives emerges, it becomes more likely that messages can be shared throughout the network despite censorship attempts.

You Might Not Have to Choose

With this overview, you can start diving into one of these new Twitter alternatives leading the way to a freer social web. Thanks to the open nature of these new systems, where you set up will matter less and less as interoperability improves.

Both ActivityPub and AT Protocol developers are receptive to making the two better at communicating with one another, and independent projects like Bridgy Fed, SkyBridge, RSS Parrot, and Mastofeed are already letting users get the best of both worlds. A growing number of projects now speak both protocols, along with older ones like RSS. Moving between these paths to a decentralized web may become increasingly trivial as they converge, despite some early growing pains. Or the two may be eclipsed by yet another option. But their shared trajectory is moving us toward a more free, more open, and refreshingly weird social web, free of platform gatekeepers.

Rory Mir

Ah, Steamboat Willie. It’s been too long. 🐭

6 days 1 hour ago

Did you know Disney’s Steamboat Willie entered the public domain this year? Since its 1928 debut, U.S. Congress has made multiple changes to copyright law, extending Disney’s ownership of this cultural icon for almost a century. A century.

Creativity should spark more creativity.

That’s not how intellectual property laws are supposed to work. In the United States, these laws were designed to give creators a financial incentive to contribute to science and culture. Then eventually the law makes this expression free for everyone to enjoy and build upon. Disney itself has reaped the abundant benefits of works in the public domain including Hans Christian Andersen’s “The Little Mermaid" and "The Snow Queen." Creativity should spark more creativity.

In that spirit, EFF presents to you this year’s EFF member t-shirt, simply called “Fix Copyright”:

Copyright Creativity is fun for the whole family.

The design references Steamboat Willie, but also tractor owners’ ongoing battle to repair their equipment despite threats from manufacturers like John Deere. These legal maneuvers are based on Section 1201 of the Digital Millennium Copyright Act or DMCA. In a recent appeals court brief, EFF and co-counsel Wilson Sonsini Goodrich & Rosati argued that Section 1201 chills free expression, impedes scientific research, and to top it off, is unenforceable because it’s too broad and violates the First Amendment. Ownership ain’t what it used to be, so let’s make it better.

We need you! Get behind this mission and support EFF's work as a member. Through EFF's 34th anniversary on July 10, you can join for just $20 and receive two rare gifts, and as a bonus, new recurring monthly or annual donations get a free match.

You can help cut through the BS and make the world a little brighter—whether online or off.

Join EFF

Defend Creativity & Innovation Online


EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Aaron Jue

Podcast Episode: AI in Kitopia

6 days 10 hours ago

Artificial intelligence will neither solve all our problems nor likely destroy the world, but it could help make our lives better if it’s both transparent enough for everyone to understand and available for everyone to use in ways that augment us and advance our goals — not for corporations or government to extract something from us and exert power over us. Imagine a future, for example, in which AI is a readily available tool for helping people communicate across language barriers, or for helping vision- or hearing-impaired people connect better with the world.

Privacy info. This embed will serve content from simplecast.com


(You can also find this episode on the Internet Archive and on YouTube.)

This is the future that Kit Walsh, EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects, and EFF Senior Staff Technologist Jacob Hoffman-Andrews, are working to bring about. They join EFF’s Cindy Cohn and Jason Kelley to discuss how AI shouldn’t be a tool to cash in, or to classify people for favor or disfavor, but instead to engage with technology and information in ways that advance us all. 

In this episode you’ll learn about: 

  • The dangers in using AI to determine who law enforcement investigates, who gets housing or mortgages, who gets jobs, and other decisions that affect people’s lives and freedoms. 
  • How “moral crumple zones” in technological systems can divert responsibility and accountability from those deploying the tech. 
  • Why transparency and openness of AI systems — including training AI on consensually obtained, publicly visible data — is so important to ensure systems are developed without bias and to everyone’s benefit. 
  • Why “watermarking” probably isn’t a solution to AI-generated disinformation. 

Kit Walsh is a senior staff attorney at EFF, serving as Director of Artificial Intelligence & Access to Knowledge Legal Projects. She has worked for years on issues of free speech, net neutrality, copyright, coders' rights, and other issues that relate to freedom of expression and access to knowledge, supporting the rights of political protesters, journalists, remix artists, and technologists to agitate for social change and to express themselves through their stories and ideas. Before joining EFF, Kit led the civil liberties and patent practice areas at the Cyberlaw Clinic, part of Harvard University's Berkman Klein Center for Internet and Society; earlier, she worked at the law firm of Wolf, Greenfield & Sacks, litigating patent, trademark, and copyright cases in courts across the country. Kit holds a J.D. from Harvard Law School and a B.S. in neuroscience from MIT, where she studied brain-computer interfaces and designed cyborgs and artificial bacteria. 

Jacob Hoffman-Andrews is a senior staff technologist at EFF, where he is lead developer on Let's Encrypt, the free and automated Certificate Authority; he also works on EFF's Encrypt the Web initiative and helps maintain the HTTPS Everywhere browser extension. Before working at EFF, Jacob was on Twitter's anti-spam and security teams. On the security team, he implemented HTTPS-by-default with forward secrecy, key pinning, HSTS, and CSP; on the anti-spam team, he deployed new machine-learned models to detect and block spam in real-time. Earlier, he worked on Google’s maps, transit, and shopping teams.


What do you think of “How to Fix the Internet?” Share your feedback here


Contrary to some marketing claims, AI is not the solution to all of our problems. So I'm just going to talk about how AI exists in Kitopia. And in particular, the technology is available for everyone to understand. It is available for everyone to use in ways that advance their own values rather than hard coded to advance the values of the people who are providing it to you and trying to extract something from you and as opposed to embodying the values of a powerful organization, public or private, that wants to exert more power over you by virtue of automating its decisions.
So it can make more decisions classifying people, figuring out whom to favor, whom to disfavor. I'm defining Kitopia a little bit in terms of what it's not, but to get back to the positive vision, you have this intellectual commons of research development of data that we haven't really touched on privacy yet, but data that is sourced in a consensual way and when it's, essentially, one of the things that I would love to have is a little AI muse that actually does embody my values and amplifies my ability to engage with technology and information on the Internet in a way that doesn't feel icky or oppressive and I don't have that in the world yet.

That’s Kit Walsh, describing an ideal world she calls “Kitopia”. Kit is a senior staff attorney at the Electronic Frontier Foundation. She works on free speech, net neutrality and copyright and many other issues related to freedom of expression and access to knowledge. In fact, her full title is EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects. So, where is Kitopia, you might ask? Well we can’t get there from here - yet. Because it doesn’t exist. Yet. But here at EFF we like to imagine what a better online world would look like, and how we will get there and today we’re joined by Kit and by EFF’s Senior Staff Technologist Jacob Hoffman-Andrews. In addition to working on AI with us, Jacob is a lead developer on Let's Encrypt, and his work on that project has been instrumental in helping us encrypt the entire web. I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

I think in my ideal world people are more able to communicate with each other across language barriers, you know, automatic translation, transcription of the world for people who are blind or for deaf people to be able to communicate more clearly with hearing people. I think there's a lot of ways in which AI can augment our weak human bodies in ways that are beneficial for people and not simply increasing the control that their governments and their employers have over their lives and their bodies.

We’re talking to Kit and Jacob both, because this is such a big topic that we really need to come at it from multiple angles to make sense of it and to figure out the answer to the really important question which is, How can AI actually make the world we live in, a better place?

So while many other people have been trying to figure out how to cash in on AI, Kit and Jacob have been looking at AI from a public interest and civil liberties perspective on behalf of EFF. And they’ve also been giving a lot of thought to what an ideal AI world looks like.

AI can be more than just another tool that’s controlled by big tech. It really does have the potential to improve lives in a tangible way. And that’s what this discussion is all about. So we’ll start by trying to wade through the hype, and really nail down what AI actually is and how it can and is affecting our daily lives.

The confusion is understandable, because AI is being used quite a bit as a marketing term rather than as a scientific concept.
And the ways that I think about AI, particularly in the decision-making context, which is one of our top priorities in terms of where we think that AI is impacting people's rights, is first I think about what kind of technology are we really talking about because sometimes you have a tool that actually no one is calling AI, but it is nonetheless an example of algorithmic decision-making.
That also sounds very fancy. This can be a fancy computer program to make decisions, or it can be a buggy Excel spreadsheet that litigators discover is actually just omitting important factors when it's used to decide whether people get health care or not in a state health care system.

You're not making those up, Kit. These are real examples.

That’s not a hypothetical. Unfortunately, it’s not a hypothetical, and the people who litigated that case lost some clients because when you're talking about not getting health care that can be life or death. And machine learning can either be a system where you – you, humans, code a reinforcement mechanism. So you have sort of random changes happening to an algorithm, and it gets rewarded when it succeeds according to your measure of success, and rejected otherwise.
It can be training on vast amounts of data, and that's really what we've seen a huge surge in over the past few years, and that training can either be what's called unsupervised, where you just ask your system that you've created to identify what the patterns are in a bunch of raw data, maybe raw images, or it can be supervised in the sense that humans, usually low paid humans, are coding their views on what's reflected in the data.
So I think that this is a picture of a cow, or I think that this picture is adult and racy. So some of these are more objective than others, and then you train your computer system to reproduce those kinds of classifications when it makes new things that people ask for with those keywords, or when it's asked to classify a new thing that it hasn't seen before in its training data.
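Kit's description of supervised training, where humans label examples and the system then reproduces those labels on new inputs, can be illustrated with a toy sketch. This is purely illustrative Python (a tiny nearest-centroid classifier with made-up data), not any system discussed in the episode:

```python
# Toy supervised learning: humans label example points, the "model" is just
# the average of each label's examples, and new points get whichever label's
# average they sit closest to. All data here is invented for illustration.
from collections import defaultdict

def train(labeled_examples):
    """Compute one centroid (average feature vector) per human-assigned label."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in labeled_examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def classify(model, point):
    """Assign the label whose centroid is closest to the new point."""
    px, py = point
    return min(model, key=lambda lbl: (model[lbl][0] - px) ** 2 +
                                      (model[lbl][1] - py) ** 2)

# Imaginary 2-D "image features", hand-labeled by humans.
data = [((1.0, 1.0), "cow"), ((1.2, 0.8), "cow"),
        ((5.0, 5.0), "not-cow"), ((4.8, 5.3), "not-cow")]
model = train(data)
print(classify(model, (1.1, 0.9)))  # a new point near the 'cow' cluster: "cow"
```

The point of the toy: the model has no notion of what a cow is. It can only echo whatever patterns, and biases, its human-supplied labels contain.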
So that's really a very high level oversimplification of the technological distinctions. And then because we're talking about decision-making, it's really important who is using this tool.
Is this the government, which has all of the power of the state behind it and which administers a whole lot of necessary public benefits, using these tools to decide who is worthy and who is not to obtain those benefits? Or who should be investigated? What neighborhoods should be investigated?
We'll talk a little bit more about the use in law enforcement later on, but it's also being used quite a bit in the private sector to determine who's allowed to get housing, whether to employ someone, whether to give people mortgages, and that's something that impacts people's freedoms as well.

So Jacob, two questions I used to distill down on AI decision-making are, who is the decision-making supposed to be serving and who bears the consequences if it gets it wrong? And if we think of those two framing questions, I think we get at a lot of the issues from a civil liberties perspective. That sound right to you?

Yeah, and, you know, talking about who bears the consequences when an AI or technological system gets it wrong, sometimes it's the person that system is acting upon, the person whose healthcare is being decided, and sometimes it can be the operator.
You know, it's, uh, popular to have kind of human in the loop, like, oh, we have this AI decision-making system that's maybe not fully baked. So there's a human who makes the final call. The AI just advises the human and, uh, there's a great paper by Madeleine Clare Elish describing this as a form of moral crumple zones. Uh, so, you may be familiar in a car, modern cars are designed so that in a collision, certain parts of the car will collapse to absorb the force of the impact.
So the car is destroyed but the human is preserved. And, in some human in the loop decision making systems often involving AI, it's kind of the reverse. The human becomes the crumple zone for when the machine screws up. You know, you were supposed to catch the machine screwup. It didn't screw up in over a thousand iterations and then the one time it did, well, that was your job to catch it.
And, you know, these are obviously, you know, a crumple zone in a car is great. A moral crumple zone in a technological system is a really bad idea. And it takes away responsibility from the deployers of that system who ultimately need to bear the responsibility when their system harms people.

So I wanna ask you, what would it look like if we got it right? I mean, I think we do want to have some of these technologies available to help people make decisions.
They can find patterns in giant data probably better than humans can most of the time. And we'd like to be able to do that. So since we're fixing the internet now, I want to stop you for a second and ask you how would we fix the moral crumple zone problem or what were the things we think about to do that?

You know, I think for the specific problem of, you know, holding say a safety driver or like a human decision-maker responsible for when the AI system they're supervising screws up, I think ultimately what we want is that the responsibility can be applied all the way up the chain to the folks who decided that that system should be in use. They need to be responsible for making sure it's actually a safe, fair system that is reliable and suited for purpose.
And you know, when a system is shown to bring harm, for instance, you know, a self-driving car that crashes into pedestrians and kills them, you know, that needs to be pulled out of operation and either fixed or discontinued.

Yeah, it made me think a little bit about, you know, kind of a change that was made, I think, by Toyota years ago, where they let the people on the front line stop the line, right? Um, I think one thing that comes out of that is you need to let the people who are in the loop have the power to stop the system, and I think all too often we don't.
We devolve the responsibility down to that person who's kind of the last fair chance for something but we don't give them any responsibility to raise concerns when they see problems, much less the people impacted by the decisions.

And that’s not an accident; it's part of the appeal of these AI systems. It's true that you can't really hold a machine accountable, but that doesn't deter all of the potential markets for the AI. In fact, it's appealing for some regulators and some private entities to be able to point to the supposed wisdom and impartiality of an algorithm. But if you understand where it comes from, the fact that it's just repeating the patterns or biases that are reflected in how you trained it, you see it's really just a sort of automated discrimination in many cases, and that can work in several ways.
In one instance, it's intentionally adopted in order to avoid the possibility of being held liable. We've heard from a lot of labor rights lawyers that when discriminatory decisions are made, they're having a lot more trouble proving it now because people can point to an algorithm as the source of the decision.
And if you were able to get insight in how that algorithm were developed, then maybe you could make your case. But it's a black box. A lot of these things that are being used are not publicly vetted or understood.
And it's especially pernicious in the context of the government making decisions about you, because we have centuries of law protecting your due process rights to understand and challenge the ways that the government makes determinations about policy and about your specific instance.
And when those decisions and when those decision-making processes are hidden inside an algorithm then the old tools aren't always effective at protecting your due process and protecting the public participation in how rules are made.

It sounds like in your better future, Kit, there's a lot more transparency into these algorithms, into this black box that's sort of hiding them from us. Is that part of what you see as something we need to improve to get things right?

Absolutely. Transparency and openness of AI systems is really important to make sure that as it develops, it develops to the benefit of everyone. It's developed in plain sight. It's developed in collaboration with communities and a wider range of people who are interested and affected by the outcomes, particularly in the government context, though I'll speak to the private context as well. When the government passes a new law, that's not done in secret. When a regulator adopts a new rule, that's also not done in secret. Sure, there are exceptions.

Right, but that’s illegal.

Yeah, that's the idea. Right. You want to get away from that also.

Yeah, if we can live in Kitopia for a moment, where these things are done more justly: within the framework of government rulemaking, if that's occurring in a way that affects people, then there is participation. There's meaningful participation. There's meaningful accountability. And in order to meaningfully have public participation, you have to have transparency.
People have to understand what the new rule is that's going to come into force. And because of a lot of the hype and mystification around these technologies, they're being adopted under what's called a procurement process, which is the process you use to buy a printer.
It's the process you use to buy an appliance, not the process you use to make policy. But these things embody policy. They are the rule. Sometimes when the legislature changes the law, the tool doesn't get updated and it just keeps implementing the old version. And that means that the legislature's will is being overridden by the designers of the tool.

You mentioned predictive policing, I think, earlier, and I wonder if we could talk about that for just a second because it's one way where I think we at EFF have been thinking a lot about how this kind of algorithmic decision-making can just obviously go wrong, and maybe even should never be used in the first place.
What we've seen is that it's sort of, you know, very clearly reproduces the problems with policing, right? But how does AI or this sort of predictive nature of the algorithmic decision-making for policing exacerbate these problems? Why is it so dangerous I guess is the real question.

So one of the fundamental features of AI is that it looks at what you tell it to look at. It looks at what data you offer it, and then it tries to reproduce the patterns that are in it. Um, in the case of policing, as well as related issues around decisions for pretrial release and parole determinations, you are feeding it data about how the police have treated people, because that's what you have data about.
And the police treat people in harmful, racist, biased, discriminatory, and deadly ways that it's really important for us to change, not to reify into a machine that is going to seem impartial and seem like it creates a veneer of justification for those same practices to continue. And sometimes this happens because the machine is making an ultimate decision, but that's not usually what's happening.
Usually the machine is making a recommendation. And one of the reasons we don't think that having a human in the loop is really a cure for the discriminatory harms is that humans are more likely to follow the AI if it gives them cover for a biased decision that they're going to make. And relatedly, some humans, a lot of people, develop trust in the machine and wind up following it quite a bit.
So in these contexts, if you really wanted to make predictions about where a crime was going to occur, well it would send you to Wall Street. And that's not, that's not the result that law enforcement wants.
But, first of all, you would actually need data about where crimes occur, and generally people who don't get caught by the police are not filling out surveys to say, here are the crimes I got away with so that you can program a tool that's going to do better at sort of reflecting some kind of reality that you're trying to capture. You only know how the system has treated people so far and all that you can do with AI technology is reinforce that. So it's really not an appropriate problem to try to solve with this technology.

Yeah, our friends at Human Rights Data Analysis Group who did some of this work said, you know, we call it predictive policing, but it's really predicting the police because we're using what the police already do to train up a model, and of course it's not going to fix the problems with how police have been acting in the past. Sorry to interrupt. Go on.

No, to build on that, by definition, it thinks that the past behavior is ideal, and that's what it should aim for. So, it's not a solution to any kind of problem where you're trying to change a broken system.

And in fact, what they found in the research was that the AI system will not only replicate what the police do, it will double down on the bias: it sees a small trend in the data and it amplifies that trend. I don't remember the exact numbers, but the effect is pretty significant. So it's not just that the AI system will replicate what the police do; in looking at these systems, they found that the AI increases the bias in the underlying data.
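The feedback loop described here can be sketched in a few lines. This is a deliberately oversimplified toy with invented numbers, not HRDAG's actual model or any real deployment: two neighborhoods have the same true crime rate, but the one with slightly more recorded arrests attracts the patrols, and only patrolled places generate new records, so the initial disparity grows instead of correcting itself.

```python
# Toy sketch of the "predicting the police" feedback loop: patrols go
# where past arrests were recorded, and arrests are only recorded where
# patrols go. All numbers are made up for illustration.
def simulate(recorded, true_rate, rounds, patrols=100):
    recorded = list(recorded)
    for _ in range(rounds):
        # The "prediction": patrol the neighborhood with the most past arrests.
        hot = recorded.index(max(recorded))
        # New arrests are recorded only where the police are sent,
        # even though the true crime rate is the same everywhere.
        recorded[hot] += patrols * true_rate
    return recorded

# Equal true crime rates, but neighborhood 0 starts with a few more
# recorded arrests (say, from historically biased enforcement).
final = simulate([60, 40], true_rate=0.1, rounds=20)
print(final)                  # [260.0, 40]: the starting gap has ballooned
print(final[0] / sum(final))  # neighborhood 0 now holds ~87% of the records
```

The model never observes crime, only its own past enforcement, which is why a small initial bias compounds round after round.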
It's really important that we continue to emphasize the ways in which AI and machine learning are already being used, often in ways that people may not see but that dramatically impact them. But right now, what's front of mind for a lot of people is generative AI. And I think many, many more people have started playing around with that. And so I want to start with how we think about generative AI and the issues it brings. And Jacob, I know you have some thoughts about that.

Yeah. To call back to, at the beginning you asked about, how do we define AI? I think one of the really interesting things in the field is that it's changed so much over time. And, you know, when computers first became broadly available, you know, people have been thinking for a very long time, what would it mean for a computer to be intelligent? And for a while we thought, wow, you know, if a computer could play chess and beat a human, we would say that's an intelligent computer.
Um, if a computer could recognize, uh, what's in an image, is this an image of a cat or a cow - that would be intelligence. And of course now they can, and we don't consider it intelligence anymore. And you know, now we might say if a computer could write a term paper, that's intelligence and I don't think we're there yet, but the development of chatbots does make a lot of people feel like we're closer to intelligence because you can have a back and forth and you can ask questions and receive answers.
And some of those answers will be confabulations, but some percentage of the time they'll be right. And it starts to feel like something you're interacting with. And I think, rightly so, people are worried that this will destroy jobs for writers and for artists. And to an earlier question about, you know, what does it look like if we get it right, I think, you know, the future we want is one where people can write beautiful things and create beautiful things and, you know, still make a great living at it and be fulfilled and safe in their daily needs and be recognized for that. And I think that's one of the big challenges we're facing with generative AI.

Let’s pause for just a moment to say thank you to our sponsor. How to Fix the Internet is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our discussion with Kit and Jacob about AI: the good, the bad, and what could be better.

There’s been a lot of focus on the dark side of generative AI and the idea of using copyright to address those problems has emerged. We have worries about that as a way to sort out between good and bad uses of AI, right Kit?

Absolutely. We have had a lot of experience with copyright being used as a tool of censorship, not only against individual journalists and artists and researchers, but also against entire mediums for expression, against libraries, against the existence of online platforms where people are able to connect. And copyright not only lasts essentially forever, it comes with draconian penalties that are essentially a financial death sentence for the typical person in the United States. So in the context of generative AI, there is a real issue with the potential to displace creative labor. And it's a lot like the issues of other forms of automation that displace other forms of labor.
And it's not always the case that an equal number of new jobs are created, or that those new jobs are available to the people who have been displaced. And that's a pretty big social problem that we have. In Kitopia, we have AI and it's used so that there's less necessary labor to achieve a higher standard of living for people, and we should be able to be excited about automation of labor tasks that aren't intrinsically rewarding.
One of the reasons that we're not is because the fruits of that increased production flow to the people who own the AI, not to the people who were doing that labor, who now have to find another way to trade their labor for money or else become homeless and starve and die, and that's cruel.
It is the world that we're living in so it's really understandable to me that an artist is going to want to reach for copyright, which has the potential of big financial damages against someone who infringes, and is the way that we've thought about monetization of artistic works. I think that way of thinking about it is detrimental, but I also think it's really understandable.
One of the reasons why the particular legal theories in the lawsuits against generative AI technologies are concerning is because they wind up stretching existing doctrines of copyright law. So in particular, the very first case against Stable Diffusion argued that you were creating an infringing derivative work when you trained your model to recognize the patterns in five billion images.
It's a derivative work of each and every one of them. And that can only succeed as a legal theory if you throw out the existing understanding of what a derivative work is, that it has to be substantially similar to a thing that it's infringing and that limitation is incredibly important for human creativity.
The elements of my work that you might recognize from my artistic influences in the ordinary course of artistic borrowing and inspiration are protected. I'm able to make my art without people coming after me because I like to draw eyes the same way as my inspiration or so on, because ultimately the work is not substantially similar.
And if we got rid of that protection, it would be really bad for everybody.
But at the same time, you can see how someone might say, why should I pay a commission to an artist if I can get something in the same style? To which I would say, try it. It's not going to be what you want because art is not about replicating patterns that are found in a bunch of training data.
It can be a substitute for stock photography or other forms of art that are on the lower end of how much creativity is going into the expression, but for the higher end, I think that part of the market is safe. So I think all artists are potentially impacted by this. I'm not saying only bad artists have to care, but there is this real impact.
Their financial situation is precarious already, and they deserve to make a living, and this is a bandaid because we don't have a better solution in place to support people and let them create in a way that is in accord with their values and their goals. We really don't have that either in the situation where people are primarily making their income doing art that a corporation wants them to make to maximize its products.
No artist wants to create assets for content. Artists want to express and create new beauty and new meaning and the system that we have doesn't achieve that. We can certainly envision better ones but in the meantime, the best tool that artists have is banding together to negotiate with collective power, and it's really not a good enough tool at this point.
But I also think there's a lot of room to ethically use generative AI. If you're working with an artist and you're trying to communicate your vision for something visual, maybe you're going to use an AI tool in order to make something that has some of the elements you're looking for and then say, this is what I want to pay you to draw. I want this kind of pose, right? But more unicorns.

And I think while we're talking about these sort of seemingly good, but ultimately dangerous solutions for the different sort of problems that we're thinking about now more than ever because of generative AI, I wanted to talk with Jacob a little bit about watermarking. And this is meant to solve a sort of problem of knowing what is and is not generated by AI.
And people are very excited about this idea that through some sort of, well, actually you just explain Jacob, cause you are the technologist. What is watermarking? Is this a good idea? Will this work to help us understand and distinguish between AI-generated things and things that are just made by people?

Sure. So a very real and closely related risk of generative AI is that it is - it will, and already is - flooding the internet with bullshit. Uh, you know, many of the articles you might read on any given topic, these days the ones that are most findable are often generated by AI.
And so an obvious next step is, well, what if we could recognize the stuff that's written by AI or the images that are generated by AI, because then we could just skip that. You know, I wouldn't read this article cause I know it's written by AI or you can go even a step further, you could say, well, maybe search engines should downrank things that were written by AI or social networks should label it or allow you to opt out of it.
You know, there's a lot of question about, if we could immediately recognize all the AI stuff, what would we do about it? There's a lot of options, but the first question is, can we even recognize it? So right off the bat, you know, when ChatGPT became available to the public, there were people offering ChatGPT detectors. You know, you could look at this content and, you know, you can kind of say, oh, it tends to look like this.
And you can try to write something that detects its output, and the short answer is it doesn't work and it's actually pretty harmful. A number of students have been harmed because their instructors have run their work through a ChatGPT detector, an AI detector that has incorrectly labeled it.
There's not a reliable pattern in the output that you can always see. Well, what if the makers of the AI put that pattern there? And, you know, for a minute, let's switch from text based to image based stuff. Jason, have you ever gone to a stock photo site to download a picture of something?

I sadly have.

Yeah. So you might recognize the images they have there; they want to make sure you pay for the image before you use it. So there's some text written across it in a kind of ghostly white diagonal. It says, this is from say shutterstock.com. So that's a form of watermark. If you just went and downloaded that image rather than paying for the cleaned up version, there's a watermark on it.
So the concept of watermarking for AI provenance is that it would be invisible. It would be kind of mixed into the pixels at such a subtle level that you as a human can't detect it, but a computer program designed to detect that watermark could. So you could imagine the AI might generate a picture and then, in the top left pixel, increase its shade by the smallest amount, and then the next one, decrease it by the smallest amount, and so on throughout the whole image.
And you can encode a decent amount of data that way, like what system produced it, when, all that information. And actually EFF has published some interesting research in the past on a similar system in laser printers, where little yellow dots are embedded by most laser printers you can get, as an anti-counterfeiting measure.
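The pixel-tweaking idea Jacob describes, and how easily it can be defeated, can be sketched with a classic least-significant-bit scheme. This is a toy illustration in Python with made-up pixel values, not any real provenance standard:

```python
# Toy invisible watermark: hide provenance bits in the least significant
# bit of each pixel value. Imperceptible to a viewer, readable by a
# detector, and trivially destroyed by anyone who knows the scheme.
def embed(pixels, bits):
    """Return pixels with each bit of `bits` written into one pixel's LSB."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the low bit, then set it
    return out

def extract(pixels, n):
    """Read back the first n watermark bits."""
    return [p & 1 for p in pixels[:n]]

def strip(pixels):
    """An attacker's one-liner: zero every LSB, erasing the watermark."""
    return [p & ~1 for p in pixels]

image = [200, 201, 198, 199, 205, 207, 202, 204]  # imaginary 8-bit pixels
mark = [1, 0, 1, 1]                               # e.g. a generator ID
tagged = embed(image, mark)
print(extract(tagged, 4))                         # detector recovers [1, 0, 1, 1]
print(max(abs(a - b) for a, b in zip(image, tagged)))  # pixels moved by at most 1
```

Because the mark lives in bits the scheme itself defines, anyone who knows or guesses the scheme can erase it, which is exactly the stripping problem Jacob raises.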

This is one of our most popular discoveries that comes back every few years, if I remember right, because people are just gobsmacked that they can't see them, but they're there, and that they have this information. It's a really good example of how this works.

Yeah, and it's used to make sure that they can trace back to the printer that printed anything on the off chance that what you're printing is fake money.

Indeed, yeah.
The other thing people really worry about is that AI will make it a lot easier to generate disinformation and then spread it. And of course, if you're generating disinformation, it's useful to strip out the watermark. You would maybe prefer that people don't know it's AI. And so you're not limited to resizing or cropping an image. You can actually, you know, run it through a program. You can see what the shades of all the different pixels are. And you, in theory, probably know what watermarking system is in use. And given that degree of flexibility, it seems very, very likely - and I think past technology has proven this out - that it's not going to be hard to strip out the watermark. And in fact, it's not even going to be hard to develop a program to automatically strip out the watermark.

Yep. And you, you end up in a cat and mouse game where the people who you most want to catch, who are doing sophisticated disinformation, say to try to upset elections, are going to be able to either strip out the watermark or fake it and so you end up where the things that you most want to identify are probably going to trick people. Is that, is that the way you're thinking about it?

Yeah, that's pretty much what I'm getting at. I wanted to say one more thing on watermarking. I'd like to talk about chainsaw dogs. There's this popular genre of image on Facebook right now of a man and his chainsaw-carved wooden dog, often accompanied by a caption like, look how great my dad is, he carved this beautiful thing.
And these are mostly AI generated and they receive, you know, thousands of likes and clicks and go wildly viral. And you can imagine a weaker form of the disinformation claim of say, ‘Well, okay, maybe state actors will strip out watermarks so they can conduct their disinformation campaigns, but at least adding watermarks to AI images will prevent this proliferation of garbage on the internet.’
People will be able to see, oh, that's a fake. I'm not going to click on it. And I think the problem with that is even people who are just surfing for likes on social media actually love to strip out credits from artists already. You know, cartoonists get their signatures stripped out and in the examples of these chainsaw dogs, you know, there is actually an original.
There's somebody who made a real carving of a dog. It was very skillfully executed. And these are generated using kind of image to image AI, where you take an image and you generate an image that has a lot of the same concepts. A guy, a dog, made of wood and so they're already trying to strip attribution in one way.
And I think likely they would also find a way to strip any watermarking on the images they're generating.

So Jacob, we heard earlier about Kit's ideal world. I'd love to hear about the future world that Jacob wants us to live in.

Yeah. I think the key thing is, you know, that people are safer in their daily lives than they are today. They're not worried about their livelihoods going away. I think this is a recurring theme when most new technology is invented that, you know, if it replaces somebody's job, and that person's job doesn't get easier, they don't get to keep collecting a paycheck. They just lose their job.
So I think in the ideal future, people have a means to live and to be fulfilled in their lives, to do meaningful work still. And also in general, human agency is expanded rather than restricted. The promise of a lot of technologies is that, you know, you can do more in the world, you can achieve the conditions you want in your life.

Oh that sounds great. I want to come back to you Kit. We've talked a little about Kitopia, including at the top of the show. Let's talk a little bit more. What else are we missing?

So in Kitopia, people are able to use AI if it's a useful part of their artistic expression, they're able to use AI if they need to communicate something visual when I'm hiring a concept artist, when I am getting a corrective surgery, and I want to communicate to the surgeon what I want things to look like.
There are a lot of ways in which words don't communicate as well as images. And not everyone has the skill or the time or interest to go and learn a bunch of photoshop to communicate with their surgeon. I think it would be great if more people were interested and had the leisure and freedom to do visual art.
But in Kitopia, that's something that you have because your basic needs are met. And in part, automation is something that should help us do that more. The ability to automate aspects of, of labor should wind up benefiting everybody. That's the vision of AI in Kitopia.

Nice. Well that's a wonderful place to end. We're all gonna pack our bags and move to Kitopia. And hopefully by the time we get there, it’ll be waiting for us.
You know, Jason, that was such a rich conversation. I'm not sure we need to do a little recap like we usually do. Let's just close it out.

Yeah, you know, that sounds good. I'll take it from here. Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we would love to hear from you. You can visit EFF.org slash podcasts to click on listener feedback and let us know what you think of this or any other episode.
You can also get a transcript or information about this episode and the guests. And while you're there of course, you can become an EFF member, pick up some merch, or just see what's happening in digital rights this or any other week. This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.
In this episode, you heard Kalte Ohren by Alex featuring starfrosch & Jerry Spoon; lost Track by Airtone; Come Inside by Zep Hume; Xena's Kiss/Medea's Kiss by MWIC; Homesick by Siobhan D; and Drops of H2O (The Filtered Water Treatment) by J.Lang. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We’ll see you next time. I’m Jason Kelley.

And I’m Cindy Cohn.


Josh Richman

California’s Facial Recognition Bill Is Not the Solution We Need

6 days 21 hours ago

California Assemblymember Phil Ting has introduced A.B. 1814, a bill that would supposedly regulate police use of facial recognition technology. The problem is that it would do little to actually change the status quo of how police use this invasive and problematic technology. Police use of facial recognition poses a massive risk to civil liberties, privacy, and even our physical health, as the technology has been known to wrongfully sic armed police on innocent people, particularly Black men and women. This issue is too important to address with inadequate half-measures like A.B. 1814.

The bill dictates that police should examine facial recognition matches “with care” and that a match should not be the sole basis for probable cause for an arrest or search warrant. And while we agree it is a big problem that police repeatedly use the matches spit out by a computer as the only justification for arresting people, the limit this bill imposes is, in theory, already the rule: police departments and facial recognition companies alike maintain that police cannot justify an arrest using algorithmic matches alone. So what would this bill really change? It only gives the appearance of addressing face recognition technology's harms, while in practice allowing them to continue.

Additionally, A.B. 1814 gives defendants no real recourse against police who violate its requirements: there is neither a suppression remedy nor a usable private cause of action. The bill also lacks transparency requirements that would compel police departments to reveal whether they used face recognition in the first place. This means that a person wrongfully arrested because a computer said they looked similar to the subject would likely never know they could sue the department for damages, unless they uncovered the technology's use while being prosecuted.

Under such leaky bureaucratic reforms, police may continue to use this technology to identify people at protests, track marginalized individuals when they visit doctors or have other personal encounters, and chill civil liberties in any number of other ways, whether overtly or inadvertently. For this reason, EFF continues to advocate for a complete ban on government use of face recognition–an approach that cities across the United States have already embraced by enacting their own bans. Until California lawmakers recognize the urgent need to ban government use of face recognition, we will continue to distinguish between bills that would make a serious difference in the lives of the surveilled and those that would not. That is why we are urging Assemblymembers to vote no on A.B. 1814.

Matthew Guariglia

The Surgeon General's Fear-Mongering, Unconstitutional Effort to Label Social Media

6 days 22 hours ago

Surgeon General Vivek Murthy’s extraordinarily misguided and speech-chilling call this week to label social media platforms as harmful to adolescents is shameful fear-mongering that lacks scientific evidence and turns the nation’s top physician into a censor. This claim is particularly alarming given the far more complex and nuanced picture that studies have drawn about how social media and young people’s mental health interact.

The Surgeon General’s suggestion that speech be labeled as dangerous is extraordinary. Communications platforms are not comparable to unsafe food, unsafe cars, or cigarettes, all of which are physical products that can cause physical injury. Government warnings on speech implicate our fundamental rights to speak, to receive information, and to think. Murthy’s effort will harm teens, not help them, and the announcement puts the surgeon general in the same category as censorial public officials like Anthony Comstock.

There is no scientific consensus that social media is harmful to children's mental health. Social science shows that social media can help children overcome feelings of isolation and anxiety. This is particularly true for LBGTQ+ teens. EFF recently conducted a survey in which young people told us that online platforms are the safest spaces for them, where they can say the things they can't in real life ‘for fear of torment.’ They say these spaces have improved their mental health and given them a ‘haven’ to talk openly and safely. This comports with Pew Research findings that teens are more likely to report positive than negative experiences in their social media use. 

Additionally, Murthy’s effort to label social media creates significant First Amendment problems in its own right, as any government labeling effort would be compelled speech and courts are likely to strike it down.

Young people’s use of social media has been under attack for several years. Several states have recently introduced and enacted unconstitutional laws that would require age verification on social media platforms, effectively banning some young people from them. Congress is also debating several federal censorship bills, including the Kids Online Safety Act and the Kids Off Social Media Act, that would seriously impact young people’s ability to use social media platforms without censorship. Last year, Montana banned the video-sharing app TikTok, citing both its Chinese ownership and its interest in protecting minors from harmful content. That ban was struck down as unconstitutionally overbroad; despite that, Congress passed a similar federal law forcing TikTok’s owner, ByteDance, to divest the company or face a national ban.

Like Murthy, lawmakers pushing these regulations cherry-pick the research, nebulously citing social media’s impact on young people, and dismissing both positive aspects of platforms and the dangerous impact these laws have on all users of social media, adults and minors alike. 

We agree that social media is not perfect, and can have negative impacts on some users, regardless of age. But if Congress is serious about protecting children online, it should enact policies that promote choice in the marketplace and digital literacy. Most importantly, we need comprehensive privacy laws that protect all internet users from predatory data gathering and sales that target us for advertising and abuse.

Aaron Mackey

The UN Cybercrime Draft Convention is a Blank Check for Surveillance Abuses

1 week 3 days ago

This is the second post in a series highlighting the problems and flaws in the proposed UN Cybercrime Convention. Check out our detailed analysis on the criminalization of security research activities under the proposed convention.

The United Nations Ad Hoc Committee is just weeks away from finalizing an overly broad Cybercrime Draft Convention. This draft would normalize unchecked domestic surveillance and rampant government overreach, enabling serious human rights abuses around the world.

The latest draft of the convention—originally spearheaded by Russia but since then the subject of two and a half years of negotiations—still authorizes broad surveillance powers without robust safeguards and fails to spell out data protection principles essential to prevent government abuse of power.

As the August 9 finalization date approaches, Member States have a last chance to address the convention’s lack of safeguards: prior judicial authorization, transparency, user notification, independent oversight, and data protection principles such as minimization and purpose limitation. If left as is, the convention can and will be wielded as a tool for systemic rights violations.

Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards or reject the treaty altogether. These domestic surveillance powers are critical as they underpin international surveillance cooperation.

EFF’s Advocacy for Human Rights Safeguards

EFF has consistently advocated for human rights safeguards to be a baseline for both the criminal procedural measures and international cooperation chapters. The collection and use of digital evidence can implicate human rights, including privacy, free expression, fair trial, and data protection. Strong safeguards are essential to prevent government abuse.

Regrettably, many states already fall short in these regards. In some cases, surveillance laws have been used to justify overly broad practices that disproportionately target individuals or groups based on their political views—particularly ethnic and religious groups. This leads to the suppression of free expression and association, the silencing of dissenting voices, and discriminatory practices. Examples of these abuses include covert surveillance of internet activity without a warrant, using technology to track individuals in public, and monitoring private communications without legal authorization, oversight, or safeguards.

The Special Rapporteur on the rights to freedom of peaceful assembly and of association has already sounded the alarm about the dangers of current surveillance laws, urging states to revise and amend these laws to comply with international human rights norms and standards governing the rights to privacy, free expression, peaceful assembly, and freedom of association. The UN Cybercrime Convention must be radically amended to avoid entrenching and expanding these existing abuses globally. If not amended, it must be rejected outright.

How the Convention Fails to Protect Human Rights in Domestic Surveillance

The idea that checks and balances are essential to avoid abuse of power is a basic “Government 101” concept. Yet throughout the negotiation process, Russia and its allies have sought to chip away at the already-weakened human rights safeguards and conditions outlined in Article 24 of the proposed Convention. 

Article 24 as currently drafted requires that every country that agrees to this convention must ensure that when it creates, uses, or applies the surveillance powers and procedures described in the domestic procedural measures, it does so under its own laws. These laws must protect human rights and comply with international human rights law. The principle of proportionality must be respected, meaning any surveillance measures should be appropriate and not excessive in relation to the legitimate aim pursued.

Why Article 24 Falls Short: 1. The Critical Missing Principles

While the incorporation of the principle of proportionality in Article 24(1) is commendable, the article still fails to explicitly mention the principles of legality, necessity, and non-discrimination, which hold equivalent status to proportionality under human rights law as applied to surveillance. A primer:

  • The principle of legality requires that restrictions on human rights, including the right to privacy, be authorized by laws that are clear, publicized, precise, and predictable, ensuring individuals understand what conduct might lead to restrictions on their human rights.
  • The principles of necessity and proportionality ensure that any interference with human rights is demonstrably necessary to achieve a legitimate aim and includes only measures proportionate to that aim.
  • The principle of non-discrimination requires that laws, policies and human rights obligations be applied equally and fairly to all individuals, without any form of discrimination based on race, color, sex, language, religion, political or other opinion, national or social origin, property, birth, or other status, including the application of surveillance measures.

Without including all these principles, the safeguards are incomplete and inadequate, increasing the risk of misuse and abuse of surveillance powers.

2. Inadequate Specific Safeguards 

Article 24(2) requires countries to include, where “appropriate,” specific safeguards like:

  • judicial or independent review, meaning surveillance actions must be reviewed or authorized by a judge or an independent regulator.
  • the right to an effective remedy, meaning people must have ways to challenge or seek remedy if their rights are violated.
  • justification and limits, meaning there must be clear reasons for using surveillance and limits on how much surveillance can be done and for how long.

Article 24(2) introduces three problems:

2.1 The Pitfalls of Making Safeguards Dependent on Domestic Law

Although these safeguards are mentioned, making them contingent on domestic law can vastly weaken their effectiveness, as national laws vary significantly and many of them won’t provide adequate protections. 

2.2 The Risk of Ambiguous Terms Allowing Cherry-Picked Safeguards

The use of vague terms like “as appropriate” in describing how safeguards will apply to individual procedural powers allows for varying interpretations, potentially leading to weaker protections for certain types of data in practice. For example, many states provide minimal or no safeguards for accessing subscriber data or traffic data despite the intrusiveness of resulting surveillance practices. These powers have been used to identify anonymous online activity, to locate and track people, and to map people’s contacts. By granting states broad discretion to decide which safeguards to apply to different surveillance powers, the convention fails to ensure the text will be implemented in accordance with human rights law. Without clear mandatory requirements, there is a real risk that essential protections will be inadequately applied or omitted altogether for certain specific powers, leaving vulnerable populations exposed to severe rights violations. Essentially, a country could just decide that some human rights safeguards are superfluous for a particular kind or method of surveillance, and dispense with them, opening the door for serious human rights abuses.

2.3 Critical Safeguards Missing from Article 24(2)

The need for prior judicial authorization, for transparency, and for user notification is critical to any effective and proportionate surveillance power, but not included in Article 24(2).

Prior judicial authorization means that before any surveillance action is taken, it must be approved by a judge. This ensures an independent assessment of the necessity and proportionality of the surveillance measure before it is implemented. Although Article 24 mentions judicial or other independent review, it lacks a requirement for prior judicial authorization. This is a significant omission that increases the risk of abuse and infringement on individuals' rights. Judicial authorization acts as a critical check on the powers of law enforcement and intelligence agencies.

Transparency involves making the existence and extent of surveillance measures known to the public; people must be fully informed of the laws and practices governing surveillance so that they can hold authorities accountable. Article 24 lacks explicit provisions for transparency, so surveillance measures could be conducted in secrecy, undermining public trust and preventing meaningful oversight. Transparency is essential for ensuring that surveillance powers are not misused and that individuals are aware of how their data might be collected and used.

User notification means that individuals who are subjected to surveillance are informed about it, either at the time of the surveillance or afterward when it no longer jeopardizes the investigation. The absence of a user notification requirement in Article 24(2) deprives people of the opportunity to challenge the legality of the surveillance or seek remedies for any violations of their rights. User notification is a key component of protecting individuals’ rights to privacy and due process. It may be delayed, with appropriate justification, but it must still eventually occur and the convention must recognize this.

Independent oversight involves monitoring by an independent body to ensure that surveillance measures comply with the law and respect human rights. This body can investigate abuses, provide accountability, and recommend corrective actions. While Article 24 mentions judicial or independent review, it does not establish a clear mechanism for ongoing independent oversight. Effective oversight requires a dedicated, impartial body with the authority to review surveillance activities continuously, investigate complaints, and enforce compliance. The lack of a robust oversight mechanism weakens the framework for protecting human rights and allows potential abuses to go unchecked.


While it’s somewhat reassuring that Article 24 acknowledges the binding nature of human rights law and its application to surveillance powers, it is utterly unacceptable how vague the article remains about what that actually means in practice. The “as appropriate” clause is a dangerous loophole, letting states implement intrusive powers with minimal limitations and no prior judicial authorization, only to then disingenuously claim this was “appropriate.” This is a blatant invitation for abuse. There’s nothing “appropriate” about this, and the convention must be unequivocally clear about that.

This draft in its current form is an egregious betrayal of human rights and an open door to unchecked surveillance and systemic abuses. Unless these issues are rectified, Member States must recognize the severe flaws and reject this dangerous convention outright. The risks are too great, the protections too weak, and the potential for abuse too high. It’s long past time to stand firm and demand nothing less than a convention that genuinely safeguards human rights.

Check out our detailed analysis on the criminalization of security research activities under the UN Cybercrime Convention. Stay tuned for our next post, where we'll explore other critical areas affected by the convention, including its scope and human rights safeguards.

Katitza Rodriguez

If Not Amended, States Must Reject the Flawed Draft UN Cybercrime Convention Criminalizing Security Research and Certain Journalism Activities

1 week 3 days ago

This is the first post in a series highlighting the problems and flaws in the proposed UN Cybercrime Convention. Check out The UN Cybercrime Draft Convention is a Blank Check for Surveillance Abuses

The latest and nearly final version of the proposed UN Cybercrime Convention—dated May 23, 2024 but released today June 14—leaves security researchers’ and investigative journalists’ rights perilously unprotected, despite EFF’s repeated warnings.

The world benefits from people who help us understand how technology works and how it can go wrong. Security researchers, whether independently or within academia or the private sector, perform this important role of safeguarding information technology systems. Relying on the freedom to analyze, test, and discuss IT systems, researchers identify vulnerabilities that can cause major harms if left unchecked. Similarly, investigative journalists and whistleblowers play a crucial role in uncovering and reporting on matters of significant public interest including corruption, misconduct, and systemic vulnerabilities, often at great personal risk.

For decades, EFF has fought for security researchers and journalists, provided legal advice to help them navigate murky criminal laws, and advocated for their right to conduct security research without fear of legal repercussions. We’ve helped researchers when they’ve faced threats for performing or publishing their research, including identifying and disclosing critical vulnerabilities in systems. We’ve seen how vague and overbroad laws on unauthorized access have chilled good-faith security research, threatening those who are trying to keep us safe or report on public interest topics. 

Now, just as some governments have finally recognized the importance of protecting security researchers’ work, many of the UN convention’s criminalization provisions threaten to spread antiquated and ambiguous language around the world with no meaningful protections for researchers or journalists. If these and other issues are not addressed, the convention poses a global threat to cybersecurity and press freedom, and UN Member States must reject it.

This post focuses on one critical aspect of coders’ rights under the newest released text: the provisions that jeopardize the work of security researchers and investigative journalists. We will delve into other aspects of the convention in subsequent posts.

How the Convention Fails to Protect Security Research and Reporting on Public Interest Matters

What Provisions Are We Discussing?

Articles 7 to 11 of the Criminalization Chapter—covering illegal access, illegal interception, interference with electronic data, interference with ICT systems, and misuse of devices—define the core cybercrime offenses of which security researchers have often been accused as a result of their work. (In previous drafts of the convention, these were Articles 6-10.)

  • Illegal Access (Article 7): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities.
  • Illegal Interception (Article 8): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require malicious criminal intent (mens rea).
  • Interference with Data (Article 9) and Interference with Computer Systems (Article 10): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks that could be described as “interference” even though they don’t cause harm and are performed without criminal malicious intent.

All of these articles fail to include a mandatory element of criminal intent to cause harm, steal, or defraud. A requirement that the activity cause serious harm is also absent from Article 10 and optional in Article 9. These safeguards must be mandatory.

What We Told the UN Drafters of the Convention in Our Letter

Earlier this year, EFF submitted a detailed letter to the drafters of the UN Cybercrime Convention on behalf of 124 signatories, outlining essential protections for coders. 

Our recommendations included defining unauthorized access to include only those accesses that bypass security measures, and only where such security measures count as effective. The convention’s existing language harks back to cases where people were criminally prosecuted just for editing part of a URL.

We also recommended ensuring that criminalization of actions requires clear malicious or dishonest intent to harm, steal, or infect with malware. And we recommended explicitly exempting good-faith security research and investigative journalism on issues of public interest from criminal liability.

What Has Already Been Approved?

Several provisions of the UN Cybercrime Convention have been approved ad referendum. These include both complete articles and specific paragraphs, indicating varying levels of consensus among the drafters.

Which Articles Have Been Agreed in Full

The following articles have been agreed in full ad referendum, meaning the entire content of these articles has been approved:

    • Article 9: Interference with Electronic Data
    • Article 10: Interference with ICT Systems
    • Article 11: Misuse of Devices 
    • Article 28(4): Search and Seizure Assistance Mandate

We are frustrated to see, for example, that Article 11 (misuse of devices) has been accepted without any modification, and so continues to threaten the development and use of cybersecurity tools. Although it criminalizes creating or obtaining these tools only for the purpose of committing the other crimes defined in Articles 7-10 (covering illegal access, illegal interception, interference with electronic data, and interference with ICT systems), those articles lack mandatory criminal intent requirements and a requirement to define “without right” as bypassing an effective security measure. Because those articles do not specifically exempt activities such as security testing, Article 11 may inadvertently criminalize security research and investigative journalism. It may punish even making or using tools for research purposes if the research, such as security testing, is considered to fall under one of the other crimes.

We are also disappointed that Article 28(4) has been approved ad referendum. This article could disproportionately empower authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. As we have written before, this provision can be abused to force security experts, software engineers, and tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees—under threat of criminal prosecution—to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of a general duty for custodians of information to comply with data requests to the extent of their abilities.

Which Provisions Have Been Partially Approved?

The broad prohibitions against unauthorized access and interception have already been approved ad referendum, which means:

  • Article 7: Illegal Access (first paragraph agreed ad referendum)
  • Article 8: Illegal Interception (first paragraph agreed ad referendum)

The first paragraph of each of these articles includes language requiring countries to criminalize accessing systems or data, or intercepting communications, “without right.” This means that if someone intentionally gets into a computer or network without authorization, or performs one of the other actions called out in these articles, it should be considered a criminal offense in that country. The additional optional requirements, however, are crucial for protecting the work of security researchers and journalists; they are still on the negotiating table and worth fighting for.

What Has Not Been Agreed Upon Yet?

There is no agreement yet on Paragraph 2 of Article 7 on Illegal Access and Article 8 on Illegal Interception, which gives countries the option to add specific requirements that can vary from article to article. Such safeguards could provide necessary clarifications to prevent criminalization of legal activities and ensure that laws are not misapplied to stifle research, innovation, and reporting on public interest matters. We made clear throughout this negotiation process that these conditions are a crucially important part of all domestic legislation pursuant to the convention. We’re disappointed to see that states have failed to act on any of our recommendations, including the letter we sent in February.

The final text dated May 23, 2024 of the convention is conspicuously silent on several crucial protections for security researchers:

  • There are no explicit exemptions for security researchers or investigative journalists who act in good faith.
  • The requirement for malicious intent remains optional rather than mandatory, leaving room for broad and potentially abusive interpretations.
  • The text does not specify that bypassing security measures should only be considered unauthorized if those measures are effective, nor make that safeguard mandatory.

How Has Similar Phrasing Caused Problems in the Past?

There is a history of overbroad interpretation under laws such as the United States’ Computer Fraud and Abuse Act, and this remains a significant concern with similarly vague language in other jurisdictions. This can also raise concerns well beyond researchers’ and journalists’ work, as when such legislation is invoked by one company to hinder a competitor’s ability to access online systems or create interoperable technologies. EFF’s paper, “Protecting Security Researchers' Rights in the Americas,” has documented numerous instances in which security researchers faced legal threats for their work:

  • MBTA v. Anderson (2008): The Massachusetts Bay Transit Authority (MBTA) used a cybercrime law to sue three college students who were planning to give a presentation about vulnerabilities in Boston’s subway fare system.
  • Canadian security researcher (2018): A 19-year-old Canadian was accused of unauthorized use of a computer service for downloading public records from a government website.
  • LinkedIn’s cease and desist letter to hiQ Labs, Inc. (2017): LinkedIn invoked cybercrime law against hiQ Labs for “scraping” — accessing publicly available information on LinkedIn’s website using automated tools. Questions and cases related to this topic have continued to arise, although an appeals court ultimately held that scraping public websites does not violate the CFAA. 
  • Canadian security researcher (2014): A security researcher demonstrated a widely known vulnerability that could be used against Canadians filing their taxes. This was acknowledged by the tax authorities and resulted in a delayed tax filing deadline. Although the researcher claimed to have had only positive intentions, he was charged with a cybercrime.
  • Argentina’s prosecution of Joaquín Sorianello (2015): Software developer Joaquín Sorianello uncovered a vulnerability in election systems and faced criminal prosecution for demonstrating this vulnerability, even though the government concluded that he did not intend to harm the systems and did not cause any serious damage to them.

These examples highlight the chilling effect that vague legal provisions can have on the cybersecurity community, deterring valuable research and leaving critical vulnerabilities unaddressed.


The latest draft of the UN Cybercrime Convention represents a tremendous failure to protect coders’ rights. By ignoring essential recommendations and keeping problematic language, the convention risks stifling innovation and undermining cybersecurity. Delegates must push for urgent revisions to safeguard coders’ rights and ensure that the convention fosters, rather than hinders, the development of a secure digital environment. We are running out of time; action is needed now.

Stay tuned for our next post, in which we will explore other critical areas affected by the proposed convention including its scope and human rights safeguards. 

Katitza Rodriguez

Hand me the flashlight. I’ll be right back...

1 week 4 days ago

It’s time for the second installment of campfire tales from our friends, The Encryptids—the rarely-seen enigmas who’ve become folk legends. They’re helping us celebrate EFF’s summer membership drive for internet freedom!

Through EFF's 34th birthday on July 10, you can receive 2 rare gifts, be a member for just $20, and as a bonus new recurring monthly or annual donations get a free match! Join us today.

So...do you ever feel like tech companies still own the devices you’ve paid for? Like you don’t have alternatives to corporate choices? Au contraire! Today, Monsieur Jackalope tells us why interoperability plays a key role in giving you freedom in tech...

-Aaron Jue
EFF Membership Team



Call me Jacques. Some believe I am cuddly. Others deem me ferocious. Yet I am those things and more. How could anyone tell me what I may be? Beauty lives in creativity, innovation, and yes, even contradiction. When you are confined to what is, you lose sight of what could be. Zut! Here we find ourselves at the mercy of oppressive tech companies who perhaps believe you are better off without choices. But they are wrong.

Control, commerce, and lack of competition. These limit us and rob us of our potential. We are destined for so much more in tech! When I must make repairs on my scooter, do I call Vespa for their approval on my wrenches? Mais non! Then why should we prohibit software tools from interacting with one another? The connected world must not be a darker reflection of this one we already know.

The connected world must not be a darker reflection of this one we already know.

EFF’s team—avec mon ami Cory Doctorow!—advocates powerfully for systems in which we do not need the permission of companies to fix, connect, or play with technology. Oui, c’est difficile: you find copyrighted software in nearly everything, and sparkling proprietary tech lures you toward crystal prisons. But EFF has helped make excellent progress with laws supporting your Right to Repair, speaks out against tech monopolies, lifts up the free and open source software community, and advocates for creators across the web.

Join EFF

Interoperability makes good things great

You can make a difference in the fight to truly own your devices. Support EFF’s efforts as a member this year and reach toward the sublime web that interconnection and creativity can bring.


Monsieur Jackalope


EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.


EFF to Ninth Circuit: Abandoning a Phone Should Not Mean Abandoning Its Contents

1 week 4 days ago

This post was written by EFF legal intern Danya Hajjaji.

Law enforcement should be required to obtain a warrant to search data contained in abandoned cell phones, EFF and others explained in a friend-of-the-court brief to the Ninth Circuit Court of Appeals.

The case, United States v. Hunt, involves law enforcement’s seizure and search of an iPhone the defendant left behind after being shot and taken to the hospital. The district court held that the iPhone’s physical abandonment meant that the defendant also abandoned the data stored on the phone. In support of the defendant’s appeal, we urged the Ninth Circuit to reverse the district court’s ruling and hold that the Fourth Amendment’s abandonment exception does not apply to cell phones: as in other circumstances, law enforcement should generally have to obtain a warrant before it searches someone’s cell phone.

Cell phones differ significantly from other physical property. They are pocket-sized troves of highly sensitive information with immense storage capacity. Today’s phone carries and collects vast and varied data that encapsulates a user’s daily life and innermost thoughts.

Courts—including the U.S. Supreme Court—have recognized that cell phones contain the “sum of an individual’s private life.” And, because of this recognition, law enforcement must generally obtain a warrant before it can search someone’s phone.

While people routinely carry cell phones, they also often lose them. That should not mean losing the data contained on the phones.

While the Fourth Amendment’s “abandonment doctrine” permits law enforcement to conduct a warrantless seizure or search of an abandoned item, EFF’s brief explains that this precedent does not mechanically apply to cell phones. As the Supreme Court has recognized multiple times, the rote application of case law from prior eras with less invasive and revealing technologies threatens our Fourth Amendment protections.

Our brief goes on to explain that a cell phone owner rarely (if ever) intentionally relinquishes their expectation of privacy and possessory interests in data on their cell phones, as they must for the abandonment doctrine to apply. The realities of modern cell phone use seldom suggest an intent to discard the wealth of data the devices contain. Cell phone data is not usually confined to the phone itself, and is instead stored in the “cloud” and accessible across multiple devices (such as laptops, tablets, and smartwatches).

We hope the Ninth Circuit recognizes that expanding the abandonment doctrine in the manner envisioned by the district court in Hunt would make today’s cell phone an accessory to the erosion of Fourth Amendment rights.

Brendan Gilligan

Encode Justice NC - the Movement for a Safe, Equitable AI

1 week 4 days ago

The Electronic Frontier Alliance is proud to have such a diverse membership, and is especially proud to ally with Encode Justice chapters. Encode Justice is a community that includes over 1,000 high school and college students across over 40 U.S. states and 30 countries. Organized into chapters, these young people constitute a global youth movement for safe, equitable AI. Their mission is mobilizing communities for AI aligned with human values.

At its core, Encode Justice is more than just a name. It’s a guiding philosophy: they believe we must encode justice and safety into the technologies we build. Young people are critical stakeholders in conversations about AI. Presently, as we find ourselves face-to-face with challenges like algorithmic bias, misinformation, democratic erosion, and labor displacement, we simultaneously stand on the brink of even larger-scale risks that could result from the loss of human control over increasingly powerful systems. Encode Justice believes human-centered AI must be built, designed, and governed by and for diverse stakeholders, and that AI should help guide us towards our aspirational future, not simply reflect the data of our past and present.

Currently, three local chapters of Encode Justice have joined the EFA: Encode Justice North Carolina, Oregon, and Georgia. Recently I caught up with the leader of Encode Justice NC, Siri, about her chapter, its work, and how other people (including youth) can plug in and join the movement for safe, equitable AI:

Can you tell us a little about your chapter, its composition, and its projects?

Encode Justice North Carolina is an Encode Justice chapter led by Siri M that includes other high schoolers and college students in NC. Most of us are in the Research Triangle Park area, but we’d also welcome any NC-based student who is interested in our work! In the past, we have done projects including educational workshops, policy memos, and legislative campaigns (at the state and city council level) while lobbying officials and building coalitions with other state and local organizations.

Diving more into the work of your chapter, can you elaborate? And are there any local partnerships you’ve made with regard to your legislative advocacy efforts?

We’ve specifically done a lot of work around surveillance, with ‘AI in Policing & Surveillance' being the subject of our educational workshop with the national organization “Paving Tomorrow.” We’ve also lobbied the city council of Cary, NC to pass an ACLU model bill on police surveillance, after gaining support in the campaign from Emancipate NC, the EFA, and BSides RDU. Notably, we have lobbied our state legislature to pass a bill regarding social media addiction and data privacy for youth. Additionally, our chapter wrote and published a policy memo as part of the Encode Justice State AI legislative project to spread information and analysis on the local legislative landscape, stakeholders, and solutions regarding tech policy issues in our state. The memo was for legislators, organizations, and press to use.

We’ve also conducted a project to gather student testimonials on AI/school-based surveillance. In the near future, we are looking forward to working on bigger campaigns, including a national legislative facial recognition campaign and a local campaign on the impacts of surveillance on immigrant communities. More generally, we look forward to expanding our reach and gaining new members in more regions of NC, and potentially leading more campaigns and projects of greater scope across a wider range of topics.

How can other youth plug in to support and join the movement?

Anyone, including non-students, can follow us on Instagram at @encodejusticenc. If you are interested in becoming an Encode Justice North Carolina member, please fill out the form to do so! Lastly, if you are a student who would like to support us in a smaller way, you can fill out the student testimonies survey here.

Christopher Vines

The Next Generation of Cell-Site Simulators is Here. Here’s What We Know.

1 week 4 days ago

Dozens of policing agencies are currently using cell-site simulators (CSS) by Jacobs Technology and its Engineering Integration Group (EIG), according to newly available documents on how that company provides CSS capabilities to local law enforcement.

A proposal document from Jacobs Technology, provided to the Massachusetts State Police (MSP) and first spotted by the Boston Institute for Nonprofit Journalism (BINJ), outlines elements of the company’s CSS services, which include discreet integration of the CSS system into a Chevrolet Silverado and lifetime technical support. The proposal document is part of a winning bid Jacobs submitted to MSP earlier this year for a nearly $1-million contract to provide CSS services, representing the latest customer for one of the largest providers of CSS equipment.

An image of the Jacobs CSS system as integrated into a Chevrolet Silverado for the Virginia State Police. Source: 2024 Jacobs Proposal Response

The proposal document from Jacobs provides some of the most comprehensive information about modern CSS that the public has had access to in years. It confirms that law enforcement has access to CSS capable of operating on 5G as well as older cellular standards. It also gives us our first look at modern CSS hardware. The Jacobs system runs on at least nine software-defined radios that simulate cellular network protocols on multiple frequencies and can also gather Wi-Fi intelligence. As these documents describe, these CSS are meant to be concealed within a common vehicle. Antennas are hidden under a false roof so nothing can be seen outside the vehicles, which is a shift from the more visible antennas and cargo van-sized deployments we’ve seen before. The system also comes with a TRACHEA2+ and JUGULAR2+ for direction finding and mobile direction finding.

The Jacobs 5G CSS base station system. Source: 2024 Jacobs Proposal Response

CSS, also known as IMSI catchers, are among law enforcement’s most closely-guarded secret surveillance tools. They act like real cell phone towers, “tricking” mobile devices into connecting to them so they can intercept the information that phones send and receive, like the location of the user and metadata for phone calls, text messages, and other app traffic. CSS are highly invasive and used discreetly. In the past, law enforcement used a technique called “parallel construction”—collecting evidence in a different way to reach an existing conclusion in order to avoid disclosing how law enforcement originally collected it—to circumvent public disclosure of location findings made through CSS. In Massachusetts, agencies are expected to get a warrant before conducting any cell-based location tracking. The City of Boston is also known to own a CSS.

This technology is like a dragging fishing net, rather than a focused single hook in the water. Every phone in the vicinity connects with the device; even people completely unrelated to an investigation get wrapped up in the surveillance. CSS, like other surveillance technologies, subjects civilians—even those who have not been involved with a crime—to widespread data collection, and has been used against protestors and other protected groups, undermining their civil liberties. Their adoption should require public disclosure, but this rarely occurs. These new records provide insight into the continued adoption of this technology. It remains unclear whether MSP has policies to govern its use. CSS may also interfere with the ability to call emergency services, especially for people who rely on accessibility technologies, such as those who cannot hear.

Important to the MSP contract is the modification of a Chevrolet Silverado with the CSS system. This includes both the surreptitious installation of the CSS hardware into the truck and the integration of its software user interface into the navigational system of the vehicle. According to Jacobs, this is the kind of installation with which they have a lot of experience.

Jacobs built its CSS business on military and intelligence community relationships formed in the years after September 11, 2001, and those relationships now inform the development of a tool used in domestic communities, not foreign warzones. Harris Corporation, later L3Harris Technologies, Inc., was the largest provider of CSS technology to domestic law enforcement but stopped selling to non-federal agencies in 2020. Once Harris stopped selling to local law enforcement, the market was open to several competitors, one of the largest of which was KeyW Corporation. Following Jacobs’s 2019 acquisition of The KeyW Corporation and its Engineering Integration Group (EIG), Jacobs is now a leading provider of CSS to police, and it claims to have more than 300 current CSS deployments globally. EIG’s CSS engineers have experience with the tool dating to late 2001, and they now provide the spectrum of CSS-related services to clients, including integration into vehicles, training, and maintenance, according to the document. Jacobs CSS equipment is operational in 35 state and local police departments, according to the documents.

EFF has been able to identify 13 agencies using the Jacobs equipment, and, according to EFF’s Atlas of Surveillance, more than 70 police departments have been known to use CSS. Our team is currently investigating possible acquisitions in California, Massachusetts, Michigan, and Virginia. 

An image of the Jacobs CSS system interface integrated into the factory-provided vehicle navigation system. Source: 2024 Jacobs Proposal Response

The proposal also includes details on other agencies’ use of the tool, including that of the Fontana, CA Police Department, which it says has deployed its CSS more than 300 times between 2022 and 2023, and Prince George's County Sheriff (MO), which has also had a Chevrolet Silverado outfitted with CSS. 

Jacobs isn’t the only competitor in the domestic CSS market. Cognyte Software and Tactical Support Equipment, Inc. also bid on the MSP contract, and last month, the City of Albuquerque closed a call for a cell-site simulator that it awarded to Cognyte Software Ltd.

Beryl Lipton

Shhh. Did you hear that?

1 week 6 days ago

It’s Day One of EFF’s summer membership drive for internet freedom! Gather round the virtual campfire because I’ve got special treats and a story for you:

  1. New member t-shirts and limited-edition gear drop TODAY.

  2. Through EFF's 34th birthday on July 10, you can get 2 rare gifts and become an EFF member for just $20! AND new automatic monthly or annual donors get an instant match.

  3. I’m proud to share the first post in a series from our friends, The Encryptids—the rarely-seen enigmas who inspire campfire lore. But this time, they’re spilling secrets about how they survive this ever-digital world. We begin by checking in with the legendary Bigfoot de la Sasquatch...

EFF Membership Team



People say I'm the most famous of The Encryptids, but sometimes I don't want the spotlight. They all want a piece of me: exes, ad trackers, scammers, even the government. A picture may be worth a thousand words, but my digital profile is worth cash (to skeezy data brokers). I can’t hit a city block without being captured by doorbell cameras, CCTV, license plate readers, and a maze of street-level surveillance. It can make you want to give up on privacy altogether. Honey, no. Why should you have to hole up in some dank, busted forest for freedom and respect? You don’t.

Privacy isn't about hiding. It's about revealing what you want to who you want on your terms. It's your basic right to dignity.

Privacy isn't about hiding...It's your basic right to dignity.

A wise EFF technologist once told me, “Nothing makes you a ghost online.” So what we need is control, sweetie! You're not on your own! EFF has worked for decades to set legal precedents for us, to push for good policy, fight crap policy, and create tools so you can be more private and secure on the web RIGHT NOW. They even have whole ass guides that help people around the world protect themselves online. For free!

I know a few things about strangers up in your business, leaked photos, and wanting to live in peace. Your rights and freedoms are too important to leave them up to tech companies and politicians. This world is a better place for having people like the lawyers, activists, and techs at EFF.

Join EFF

Privacy is a "human" right

Privacy is a team sport and the team needs you. Sign up with EFF today and not only can you get fun stuff (featuring ya boy Footy), you’ll make the internet better for everyone.





Bigfoot de la Sasquatch

EFF Covers Secrets in Your Data on NOVA

2 weeks 3 days ago

It’s the weekend. You decide you want to do something fun with your family—maybe go to a local festival or park. So, you start searching on your favorite social media app to see what other people are doing. Soon after, you get ads on other platforms about the activities you were just looking at. What the heck?

That’s the reality we’re in today. As EFF’s Associate Director of Legislative Activism Hayley Tsukayama puts it, “That puts people in a really difficult position, when we’re supposed to manage our own privacy, but we’re also supposed to use all these things that are products that will make our lives better.”

Watch EFF’s Cory Doctorow, Eva Galperin, Hayley Tsukayama, and others in the digital rights community explain how your data gets scooped up by data brokers—and common practices to protect your privacy online—in Secrets in Your Data on NOVΛ. You can watch the premiere or read the transcript below:

Watch Secrets in Your Data on PBS.org

EFF continues pushing for a comprehensive data privacy law that would rein in data brokers' ability to collect our information and share it with the highest bidders, including law enforcement. Additionally, you can use these resources to help keep you safe online.

Christian Romero