EFF Asks Court to Uphold Federal Law That Protects Online Video Viewers’ Privacy and Free Expression

As millions of internet users watch videos online for news and entertainment, it is essential to uphold a federal privacy law that protects against the disclosure of everyone’s viewing history, EFF argued in court last month.

For decades, the Video Privacy Protection Act (VPPA) has safeguarded people’s viewing habits by generally requiring services that offer videos to the public to get their customers’ written consent before disclosing that information to the government or a private party. Although Congress enacted the law in an era of physical media, the VPPA applies to internet users’ viewing habits, too.

The VPPA, however, is under attack by Patreon. That service for content creators and viewers is facing a lawsuit in a federal court in Northern California, brought by users who allege that the company improperly shared information about the videos they watched on Patreon with Facebook.

Patreon argues that even if it did violate the VPPA, federal courts cannot enforce it because the privacy law violates the First Amendment on its face under a legal doctrine known as overbreadth. This doctrine asks whether a substantial number of the challenged law’s applications violate the First Amendment, judged in relation to the law’s plainly legitimate sweep.  Courts have rightly struck down overbroad laws because they prohibit vast amounts of lawful speech. For example, the Supreme Court in Reno v. ACLU invalidated much of the Communications Decency Act’s (CDA) online speech restrictions because it placed an “unacceptably heavy burden on protected speech.”

EFF is second to none in fighting for everyone’s First Amendment rights in court, including those of internet users (in Reno, mentioned above) and of the companies that host our speech online. But Patreon’s First Amendment argument is misguided. The company seeks to elevate its own speech interests over those of the internet users who benefit from the VPPA’s protections.

As EFF, the Center for Democracy & Technology, the ACLU, and the ACLU of Northern California argued in their friend-of-the-court brief, Patreon’s argument is wrong because the VPPA directly advances the First Amendment and privacy interests of internet users by ensuring they can watch videos without being chilled by government or private surveillance.

“The VPPA provides Americans with critical, private space to view expressive material, develop their own views, and to do so free from unwarranted corporate and government intrusion,” we wrote. “That breathing room is often a catalyst for people’s free expression.”

As the brief recounts, courts have protected against government efforts to learn people’s book buying and library history, and to punish people for viewing controversial material within the privacy of their home. These cases recognize that protecting people’s ability to privately consume media advances the First Amendment’s purpose by ensuring exposure to a variety of ideas, a prerequisite for robust debate. Moreover, people’s video viewing habits are intensely private, because the data can reveal intimate details about our personalities, politics, religious beliefs, and values.

Patreon’s First Amendment challenge is also wrong because the VPPA is not an overbroad law. As our brief explains, “[t]he VPPA’s purpose, application, and enforcement is overwhelmingly focused on regulating the disclosure of a person’s video viewing history in the course of a commercial transaction between the provider and user.” In other words, the legitimate sweep of the VPPA does not violate the First Amendment because generally there is no public interest in disclosing any one person’s video viewing habits that a company learns purely because it is in the business of selling video access to the public.

There is a better path to addressing any potential unconstitutional applications of the video privacy law short of invalidating the statute in its entirety. As EFF’s brief explains, should a video provider face liability under the VPPA for disclosing a customer’s video viewing history, it can always mount a First Amendment defense based on a claim that the disclosure was on a matter of public concern.

Indeed, courts have recognized that certain applications of privacy laws, such as the Wiretap Act and civil claims prohibiting the disclosure of private facts, can violate the First Amendment. But courts generally address those problems by invalidating the specific application of the law at issue, rather than striking the statute down entirely.

“In those cases, courts seek to protect the First Amendment interests at stake while continuing to allow application of those privacy laws in the ordinary course,” EFF wrote. “This approach accommodates the broad and legitimate sweep of those privacy protections while vindicating speakers’ First Amendment rights.”

Patreon's argument would see the VPPA gutted—an enormous loss for privacy and free expression for the public. The court should reject Patreon’s challenge and uphold the VPPA, protecting everyone’s viewing history from disclosure.

You can read our brief here.

Aaron Mackey

Victory! Police Drone Footage is Not Categorically Exempt From California’s Public Records Law

Video footage captured by police drones sent in response to 911 calls cannot be kept entirely secret from the public, a California appellate court ruled last week.

The decision by the California Court of Appeal for the Fourth District came after a journalist sought access to videos created by Chula Vista Police Department’s “Drones as First Responders” (DFR) program. The police department is the first law enforcement agency in the country to use drones to respond to emergency calls, and several other agencies across the U.S. have since adopted similar models.

After the journalist, Arturo Castañares of La Prensa, sued, the trial court ruled that Chula Vista police could withhold all footage because the videos were exempt from disclosure as law enforcement investigatory records under the California Public Records Act. Castañares appealed.

EFF, along with the First Amendment Coalition and the Reporters Committee for Freedom of the Press, filed a friend-of-the-court brief in support of Castañares, arguing that categorically excluding all drone footage from public disclosure could have troubling consequences for the public’s ability to understand and oversee the police drone program.

Drones, also called unmanned aerial vehicles (UAVs) or unmanned aerial systems (UAS), are relatively inexpensive devices that police use to remotely surveil areas. Historically, law enforcement agencies have used small systems, such as quadrotors, for situational awareness during emergencies, for capturing crime scene footage, or for monitoring public gatherings, such as parades and protests. DFR programs represent a fundamental change in strategy, with police responding to a much, much larger number of situations with drones, resulting in pervasive, if not persistent, surveillance of communities.

Because drones raise distinct privacy and free expression concerns, foreclosing public access to their footage would make it difficult to assess whether police are following their own rules about when and whether they record sensitive places, such as people’s homes or public protests.

The appellate court agreed that drone footage is not categorically exempt from public disclosure. In reversing the trial court’s decision, the California Court of Appeal ruled that although some 911 calls are likely part of a law enforcement investigation, or at least are used to determine whether a crime occurred, not all 911 calls involve crimes.

“For example, a 911 call about a mountain lion roaming a neighborhood, a water leak, or a stranded motorist on the freeway could warrant the use of a drone but do not suggest a crime might have been committed or is in the process of being committed,” the court wrote.

Because it’s possible that some of Chula Vista’s drone footage involves scenarios in which no crime is committed or suspected, the police department cannot categorically withhold every moment of video footage from the public.

The appellate court sent the case back to the trial court and ordered it and the police department to take a more nuanced approach, determining whether each underlying call for service involved a crime or an initial investigation into a potential crime.

“The drone video footage should not be treated as a monolith, but rather, it can be divided into separate parts corresponding to each specific call,” the court wrote. “Then each distinct video can be evaluated under the CPRA in relation to the call triggering the drone dispatch.”

This victory sends a message to other agencies in California adopting copycat programs, such as the Beverly Hills Police Department, Irvine Police Department, and Fremont Police Department, that they can’t abuse public records laws to shield every second of drone footage from public scrutiny.

Aaron Mackey

Digital Rights for LGBTQ+ People: 2023 Year in Review

An increase in anti-LGBTQ+ intolerance is impacting individuals and communities both online and offline across the globe. Throughout 2023, several countries sought to pass explicitly anti-LGBTQ+ initiatives restricting freedom of expression and privacy. This fuels offline intolerance against LGBTQ+ people, and forces them to self-censor their online expression to avoid being profiled, harassed, doxxed, or criminally prosecuted. 

One growing threat to LGBTQ+ people is data surveillance. Across the U.S., a growing number of states prohibited transgender youths from obtaining gender-affirming health care, and some restricted access for transgender adults. For example, the Texas Attorney General is investigating a hospital for providing gender-affirming health care to transgender youths. We can expect anti-trans investigators to use the tactics of anti-abortion investigators, including the seizure of internet browsing histories and private messages.

It is imperative that businesses are prevented from collecting and retaining this data in the first place, so that it cannot later be seized by police and used as evidence. Legislators should start with Rep. Jacobs’ My Body, My Data bill. We also need new laws to ban reverse warrants, which police can use to identify every person who searched for the keywords “how do I get gender-affirming care,” or who was physically located near a trans health clinic. 

Moreover, LGBTQ+ expression was targeted by U.S. student monitoring tools like GoGuardian, Gaggle, and Bark. The tools scan web pages and documents in students’ cloud drives for keywords about topics like sex and drugs, and flagged material is then blocked or sent to school administrators for review. Numerous reports show regular flagging of LGBTQ+ content. This creates a harmful atmosphere for students; some, for example, have been outed because of it. In a positive move, Gaggle recently removed LGBTQ+ terms from its keyword list, and GoGuardian has done the same. But LGBTQ+ resources are still commonly flagged for containing words like "sex," "breasts," or "vagina." Student monitoring tools must remove all terms from their blocking and flagging lists that trigger scrutiny and erasure of sexual and gender identity.

Looking outside the U.S., LGBTQ+ rights were gravely threatened by expansive cybercrime and surveillance legislation in the Middle East and North Africa throughout 2023. For example, the Cybercrime Law of 2023 in Jordan, introduced as part of King Abdullah II’s modernization reforms, will negatively impact LGBTQ+ people by restricting encryption and anonymity in digital communications, and criminalizing free speech through overly broad and vaguely defined terms. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. 

For many countries across Africa, and indeed the world, anti-LGBTQ+ discourses and laws can be traced back to colonial rule. These laws have been used to imprison, harass, and intimidate LGBTQ+ individuals. In May 2023, Ugandan President Yoweri Museveni signed into law the extremely harsh Anti-Homosexuality Act 2023. It imposes, for example, a 20-year sentence for the vaguely worded offense of “promoting” homosexuality. Such laws are not only an assault on the right of LGBTQ+ people to exist, but also a grave threat to freedom of expression. They lead to more censorship and surveillance of online LGBTQ+ speech, and that surveillance in turn drives more self-censorship.

Ghana’s draft Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill 2021 goes much further. It threatens anyone who publicly identifies as LGBTQ+ or as “any sexual or gender identity that is contrary to the binary categories of male and female” with up to five years in jail. The bill assigns criminal penalties for speech posted online, and threatens online platforms—specifically naming Twitter, Facebook, and Instagram—with criminal penalties if they do not restrict pro-LGBTQ+ content. If the bill passes, Ghanaian authorities could also probe the social media accounts of anyone applying for a visa for pro-LGBTQ+ speech, or create lists of pro-LGBTQ+ supporters to be arrested upon entry. EFF this year joined other human rights groups to oppose this law.

Taking inspiration from Uganda and Ghana, a new proposed law in Kenya—the Family Protection Bill 2023—would impose ten years imprisonment for homosexuality, and life imprisonment for “aggravated homosexuality.” The bill also allows for the expulsion of refugees and asylum seekers who breach the law, irrespective of whether the conduct is connected with asylum requests. Kenya today is the sole country in East Africa to accept LGBTQ+ individuals seeking refuge and asylum without questioning their sexual orientation; sadly, that may change. EFF has called on the authorities in Kenya and Ghana to reject their respective repulsive bills, and for authorities in Uganda to repeal the Anti-Homosexuality Act.

2023 was a challenging year for the digital rights of LGBTQ+ people. But we are optimistic that in the year to come, LGBTQ+ people and their allies, working together online and off, will make strides against censorship, surveillance, and discrimination.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Paige Collings

Year In Review: Google’s Corporate Paternalism in The Browser

It’s a big year for the oozing creep of corporate paternalism and ad-tracking technology online. Google and its subsidiary companies have tightened their grips on the throat of internet innovation, all while employing the now familiar tactic of marketing these things as beneficial for users. Here we’ll review the most significant changes this year, all emphasizing the point that browser privacy tools (like Privacy Badger) are more important than ever.

Manifest V2 to Manifest V3: Final Death of Legacy Chrome Extensions

Chrome, the most popular web browser by every measure, recently announced the official death date for Manifest V2, hastening the reign of its janky successor, Manifest V3. We've been complaining about this since the start, but here's the gist: the finer details of MV3 have gotten somewhat better over time (namely, it won't completely break all privacy extensions). However, what security benefits it has are bought by limiting what all extensions can do. Chrome could instead invest in a more robust extension review process, which would protect both innovation and security, but it’s clear that the true intention of this change lies elsewhere. Put bluntly: Chrome, a browser built by an advertising company, has positioned itself as the gatekeeper for in-browser privacy tools, the sole arbiter of how they should be designed. Considering that Google’s trackers are present on at least 85% of the top 50,000 websites, contributing to overall ad revenue of approximately 225 billion dollars in 2022, this is an unsurprising, yet still disappointing, decision.
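
To make the change concrete, here is a minimal sketch of what an ad- or tracker-blocking extension loses in the transition. The tracker domain, rule ID, and helper function below are illustrative placeholders, not code from Privacy Badger or any real extension:

```typescript
// Manifest V2: a blocking webRequest listener runs the extension's own
// logic on every request and can cancel the request on the spot.
const isKnownTracker = (url: string): boolean =>
  url.includes("tracker.example"); // hypothetical heuristic; could be anything

chrome.webRequest.onBeforeRequest.addListener(
  (details) => ({ cancel: isKnownTracker(details.url) }),
  { urls: ["<all_urls>"] },
  ["blocking"]
);

// Manifest V3: the extension instead registers declarative rules ahead of
// time and Chrome evaluates them itself. No extension code runs per
// request, which caps how adaptive a privacy tool's blocking can be.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [
    {
      id: 1, // illustrative rule ID
      priority: 1,
      action: { type: "block" },
      condition: {
        urlFilter: "||tracker.example^",
        resourceTypes: ["script", "image", "xmlhttprequest"],
      },
    },
  ],
});
```

Under MV2 the blocking decision could be arbitrarily smart (heuristics, list updates, learning over time); under MV3 it must be flattened into a rule list in a format, and at a scale, that Chrome controls.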

For what it's worth, Apple's Safari browser imposes similar restrictions to allegedly protect Safari users from malicious extensions. While it’s important to protect users from said malicious extensions, it’s equally important to honor their privacy.

Topics API

This year also saw the rollout of Google's planned "Privacy Sandbox" project, which also uses a lot of mealy-mouthed marketing to justify its questionable characteristics. While it will finally get rid of third-party cookies, an honestly good move, it is replacing that form of tracking with another, called the "Topics API." At best, this reduces the number of parties that are able to track a user through the Chrome browser (though we aren’t the only privacy experts casting doubt on its so-called benefits). But it also consolidates tracking in a single powerful party, Chrome itself, which then gets to dole out what it learns to advertisers willing to pay. This is just another step in transforming the browser from a user agent to an advertising agent.

Privacy Badger now disables the Topics API by default.
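
For the curious, here is a rough two-part sketch of how that plays out. The site-facing document.browsingTopics() call is the documented entry point; chrome.privacy.websites.topicsEnabled is, as we understand it, the setting Chrome exposes to extensions that hold the "privacy" permission. Treat both snippets as illustrative rather than reference code:

```typescript
// Site-side: an embedded script can ask Chrome which ad topics it has
// inferred from the user's recent browsing, drawn from Chrome's ad
// taxonomy (entries like "Fitness" or "Travel & Transportation").
async function readTopics(): Promise<void> {
  // Cast needed because browsingTopics() is a Chrome-only Document API.
  const topics = await (document as any).browsingTopics();
  console.log(topics); // e.g. [{ topic: 126, taxonomyVersion: "1", ... }]
}

// Extension-side: a privacy tool's background script can switch the whole
// mechanism off for the user (requires the "privacy" permission in the
// extension's manifest).
chrome.privacy.websites.topicsEnabled.set({ value: false });
```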

YouTube Blocking Access for Users With Ad-Blockers

Most recently, people with ad-blockers began to see a petulant message from YouTube when trying to watch a video. The blocking message gave users a countdown until they would no longer be able to use the site unless they disabled their ad-blockers. Privacy and security benefits be damned. YouTube, a Google-owned company that saw its own all-time high in third-quarter advertising revenue (a meager 8 billion dollars), didn’t even bother with an equivocal, deceptively worded announcement for this one. If you’re on Chrome or a Chromium-based browser, expect YouTube to be broken unless you turn off your ad-blocker.

Privacy Tools > Corporate Paternalism

Obviously this all sucks. User security shouldn’t be bought by forfeiting privacy; in reality, one is deeply imbricated with the other. All this bad decision-making drives home how important privacy tools are. Privacy Badger is one of many. It’s not just that Privacy Badger is built to protect disempowered users, or that it's a plug-n-play tool working quietly (but ferociously) behind the scenes to halt the tracking industry, but that it exists in an ecosystem of like-minded privacy projects that complement each other. Where one tool might miss, another homes in.

This year, Privacy Badger unveiled exciting support projects and new features.

Until we have comprehensive privacy protections in place, until corporate tech stops abusing our desires to not be snooped on, privacy tools must be empowered to make up for these harms. Users deserve the right to choose what privacy means to them, not have that decision made by an advertising company like Google.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Daly Barnett

How To Fight Bad Patents: 2023 Year In Review

At EFF, we believe that all the rights we have in the offline world–to speak freely, create culture, play games, build things and do business–must hold up in the digital world, as well. 

EFF’s longstanding project of fighting for a more balanced, just patent system has always borne free expression in mind. And patent trolls, who simply use intellectual property (IP) rights to extract money from others, continue to be a barrier to people who want to freely innovate, or even just use technology. 

Defending IPR 

The inter partes review (IPR) process that Congress created about a decade ago is far from perfect, and we’ve supported a few ideas that would make it stronger. But overall, IPR has been a big step forward for limiting the damage of wrongly granted patents. Thousands of patent claims have been canceled through this process, which uses specialized administrative judges and is considerably faster and less expensive than federal courts. 

And IPR does no harm to legitimate patent holders. In fact, it affects only a tiny proportion of patents. In fiscal year 2023, there were 392 patents that were partially invalidated, and 133 patents that were fully invalidated. That’s out of a universe of an estimated 3.8 million “live” patents, according to the U.S. Patent and Trademark Office’s (USPTO) own data.

Patent examiners have less than 20 hours, on average, to go through the entire review process for a particular patent application. The process ends with the patent applicant getting a limited monopoly from the government–a monopoly right that’s now given out more than 300,000 times per year. It only makes sense to have some type of post-grant review system to challenge the worst patents at the patent office. 

Despite this, patent trolls and other large, aggressive patent holders are determined to roll back the IPR process. This year, they lobbied the USPTO to begin a process that would allow wrongheaded rule changes that would severely threaten access to the IPR process. 

EFF, allied organizations, and tens of thousands of individuals wrote to the U.S. Patent Office opposing the proposed rules, and insisting that patent challenges should remain open to the public. 

We’re also opposing an even more extreme set of rule changes to IPR that has been unfortunately put forward by some key Senators. The PREVAIL Act would sharply limit IPR to only the immediately affected parties, and bar groups like EFF from accessing IPR at all. (A crowdfunded IPR process is how we shut down the dangerous “podcasting” patent.) 

Defending Alice

The Supreme Court’s 2014 decision in Alice v. CLS Bank barred patents that were nothing more than abstract ideas with computer jargon added in. Using the Alice test, federal courts have kicked out a rogue’s gallery of hundreds of the worst patents, including patents claiming “matchmaking,” online picture menus, scavenger hunts, and online photo contests.

Dozens of individuals and small businesses have been saved by the Alice precedent, which has done a decent job of stopping the worst computer patents from surviving–at least when a defendant can afford to litigate the case. 

Unfortunately, certain trade groups keep pushing to roll back the Alice framework. For the second year in a row, we saw the introduction of a bill called the Patent Eligibility Restoration Act (PERA). This proposal would not only reverse course on the Alice rule, but also authorize the patenting of human genes that currently cannot be patented thanks to another Supreme Court case, AMP v. Myriad. It would “restore” the absolute worst patents on computer technology, and on human genes.

We also called out the U.S. Solicitor General when that office wrote a shocking brief siding with a patent troll, suggesting that the Supreme Court revisit Alice.

The Alice precedent protects everyday internet users. We opposed the Solicitor General when she came out against users, and we’ll continue to strongly oppose PERA.

Until our patent laws get the kind of wholesale change we have advocated for, profiteers and scam artists will continue to claim they “own” various types of basic internet use. That myth is wrong, it hurts innovation, and it hurts free speech. With your help, EFF remains a bulwark against this type of patent abuse.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Joe Mullin

Taking Back the Web with Decentralization: 2023 in Review

When a system becomes too tightly-controlled and centralized, the people being squeezed tend to push back to reclaim their lost autonomy. The internet is no exception. While the internet began as a loose affiliation of universities and government bodies, that emergent digital commons has been increasingly privatized and consolidated into a handful of walled gardens. Their names are too often made synonymous with the internet, as they fight for the data and eyeballs of their users.

In the past few years, there's been an accelerating swing back toward decentralization. Users, fed up with the concentration of power and the prevalence of privacy and free expression violations, are fleeing to smaller, independently operated projects.

This momentum wasn’t only seen in the growth of new social media projects. Other exciting projects have emerged this year, and public policy is adapting.  

Major gains for the Federated Social Web

After Elon Musk acquired Twitter (now X) at the end of 2022, many people moved to various corners of the “IndieWeb” at an unprecedented rate. It turns out those were just the cracks before the dam burst this year. 2023 was defined as much by the ascent of federated microblogging as it was by the descent of X as a platform. These users didn't just want a drop-in replacement for Twitter; they wanted to break the major social media platform model for good by forcing hosts to compete on service and respect.

This momentum at the start of the year was principally seen in the fediverse, with Mastodon. This software project filled the microblogging niche for users leaving Twitter, while conveniently being one of the most mature projects using the ActivityPub protocol, the basic building block at the heart of interoperability in the many fediverse services.

Filling a similar niche, but built on the privately developed Authenticated Transfer (AT) Protocol, Bluesky also saw rapid growth despite remaining invite-only and not opening up to interoperation until next year. Projects like Bridgy Fed are already working to connect Bluesky to the broader federated ecosystem, and show some promise of a future where we don’t have to choose between using the tools and sites we prefer and connecting to friends, family, and many others.

The other major development in the fediverse came from a seemingly unlikely source—Meta. Meta owns Facebook and Instagram, which have gone to great lengths to control user data—even invoking privacy-washing claims to maintain their walled gardens. So Meta’s launch of Threads in July, a new microblogging site using the fediverse’s ActivityPub protocol, was surprising. After an initial break-out success, thanks to bringing Instagram users into the new service, Threads is already many times larger than the fediverse and Bluesky combined. While such a large site could mean federated microblogging joins federated direct messaging (email) in the mainstream, Threads has not yet interoperated, and it may create a rift among hosts and users wary of Meta’s poor track record on user privacy and content moderation.

We also saw the federation of social news aggregation. In June, Reddit outraged its moderators and third-party developers by updating its API pricing policy to become less interoperable. This outrage culminated in a major platform-wide blackout protesting the changes and the unfair treatment of the unpaid, passionate volunteers who make the site worthwhile. Again, users turned to the maturing fediverse as a decentralized refuge, specifically Mastodon’s more Reddit-like cousins, Lemmy and Kbin. Reddit, echoing Twitter once again, also came under fire for briefly banning users and subreddits related to these fediverse alternatives. While the protests continued well beyond their initial scope and remained in the public eye, order was eventually restored. The formerly fringe alternatives in the fediverse, however, continue to be active and improving.

Finally, while these projects made great strides in gaining adoption and improving usability, many remain generally small and under-resourced. For the decentralized social web to succeed, it must be sustainable and maintain high standards for how users are treated and safeguarded. These indie hosts face similar liability risks and governmental threats as the billion dollar companies. In a harrowing example we saw this year, an FBI raid on a Mastodon server admin for unrelated reasons resulted in the seizure of an unencrypted server database. It’s a situation that echoes EFF’s founding case over 30 years ago, Steve Jackson Games v. Secret Service, and it underlines the need for small hosts to be prepared to guard against government overreach.

With so much momentum towards better tools and a wider adoption of better standards, we remain optimistic about the future of these federated projects.

Innovative Peer-to-Peer Apps

This year has also seen continued work on components of the web that live further down the stack, in the form of protocols and libraries that most people never interact with but which enable the decentralized services that users rely on every day. The ActivityPub protocol, for example, describes how all the servers that make up the fediverse communicate with each other. ActivityPub opened up a world of federated decentralized social media—but progress isn't stopping there.
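
To give a feel for what that communication looks like, below is a minimal example of the kind of JSON “activity” one fediverse server delivers to another server’s inbox when a user posts. All of the URLs are placeholders:

```typescript
// A minimal ActivityPub "Create" activity wrapping a Note: the sort of
// payload one fediverse server POSTs to another server's inbox endpoint.
const createNote = {
  "@context": "https://www.w3.org/ns/activitystreams",
  id: "https://social.example/users/alice/statuses/1/activity",
  type: "Create",
  actor: "https://social.example/users/alice",
  to: ["https://www.w3.org/ns/activitystreams#Public"],
  object: {
    id: "https://social.example/users/alice/statuses/1",
    type: "Note",
    attributedTo: "https://social.example/users/alice",
    content: "Hello, fediverse!",
    published: "2023-12-28T12:00:00Z",
  },
};
```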

Some of our friends are hard at work figuring out what comes next. The Veilid project was officially released in August, at DEFCON, and the Spritely project has been throwing out impressive news and releases all year long. Both projects promise to revolutionize how we can exchange data directly from person to person, securely and privately, and without needing intermediaries. As we wrote, we’re looking forward to seeing where they lead us in the coming year.

The European Union’s Digital Markets Act went into effect in May of 2023, and one of its provisions requires that messaging platforms greater than a certain size must interoperate with other competitors. While each service with obligations under the DMA could offer its own bespoke API to satisfy the law’s requirements, the better result for both competition and users would be the creation of a common protocol for cross-platform messaging that is open, relatively easy to implement, and, crucially, maintains end-to-end encryption for the protection of end users. Fortunately, the More Instant Messaging Interoperability (MIMI) working group at the Internet Engineering Task Force (IETF) has taken up that exact challenge. We’ve been keeping tabs on the group and are optimistic about the possibility of open interoperability that promotes competition and decentralization while protecting privacy.

EFF on DWeb Policy

DWeb Camp 2023

The “star-studded gala” (such as it is) of the decentralized web, DWeb Camp, took place this year among the redwoods of Northern California over a weekend in late June. EFF participated in a number of panels focused on the policy implications of decentralization, how to influence policy makers, and the future direction of the decentralized web movement. The opportunity to connect with others working on both policy and engineering was invaluable, as were the contributions from those living outside the US and Europe.  

Blockchain Testimony

Blockchains have been the focus of plenty of legislators and regulators in the past handful of years, but most of the focus has been on the financial uses and implications of the tool. EFF had a welcome opportunity to direct attention toward the less-often discussed other potential uses of blockchains when we were invited to testify before the United States House Energy and Commerce Committee Subcommittee on Innovation, Data, and Commerce. The hearing focused specifically on non-financial uses of blockchains, and our testimony attempted to cut through the hype to help members of Congress understand what it is and how and when it can be helpful while being clear about its potential downsides. 

The overarching message of our testimony was that blockchain, at the end of the day, is just a tool, and, just as with other tools, Congress should refrain from regulating it specifically because of what it is. The other important point we made was that individuals who contribute open source code to blockchain projects should not, absent some other factor, be held responsible for what others do with the code they write.

Moderation in Decentralized Social Media

One of the major issues brought to light by the rise of decentralized social media such as Bluesky and the fediverse this year has been the promises and complications of content moderation in a decentralized space. On centralized social media, content moderation can seem more straightforward: the moderation team has broad insight into the whole network, and, for the major platforms most people are used to, these centralized services have more resources to maintain a team of moderators. Decentralized social media has its own benefits when it comes to moderation, however. For example, a decentralized system means that individuals can “shop” for the moderation style that best suits their preferences. This community-level moderation may also scale better than centralized models, as moderators have more context and personal investment in the space.

But decentralized moderation is certainly not a solved problem, which is why the Atlantic Council created the Task Force for a Trustworthy Future Web. The Task Force started out by compiling a comprehensive report on the state of trust and safety work in social media and the upcoming challenges in the space. It then conducted a series of public and private consultations focused on the challenges of content moderation on these new platforms. Experts from many related fields were invited to participate, including EFF, and we were excited to offer our thoughts and to hear from the other assembled groups. The Task Force is compiling a final report that will synthesize the feedback and should be out early next year.

The past year has been a strong one for the decentralization movement. More and more people are realizing that the large centralized services are not all there is to the internet, and exploration of alternatives is happening at a level that we haven’t seen in at least a decade. New services, protocols, and governance models are also popping up all the time. Throughout the year we have tried to guide newcomers through the differences in decentralized services, inform public policies surrounding these technologies and tools, and help envision where the movement should grow next. We’re looking forward to continuing to do so in 2024.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Ross Schulman

States Attack Young People’s Constitutional Right to Use Social Media: 2023 Year in Review

Legislatures in more than half of the country targeted young people’s use of social media this year, with many of the proposals blocking adults’ ability to access the same sites. State representatives introduced dozens of bills that would limit young people’s use of some of the most popular sites and apps, either by requiring the companies to introduce or amend their features or data usage for young users, or by forcing those users to get permission from parents, and in some cases, share their passwords, before they can log on. Courts blocked several of these laws for violating the First Amendment—though some may go into effect later this year. 

How did we get to a point where state lawmakers are willing to censor large parts of the internet? In many ways, California’s Age Appropriate Design Code Act (AADC), passed in September of 2022, set the stage for this year’s battle. EFF asked Governor Newsom to veto that bill before it was signed into law, despite its good intentions in seeking to protect the privacy and well-being of children. Like many of the bills that followed it this year, it runs the risk of imposing surveillance requirements and content restrictions on a broader audience than intended. A federal court blocked the AADC earlier this year, and California has appealed that decision.

Fourteen months after California passed the AADC, it feels like a dam has broken: we’ve seen dangerous social media regulations for young people introduced across the country, and passed in several states, including Utah, Arkansas, and Texas. The severity and individual components of these regulations vary. Like California’s, many of these bills would introduce age verification requirements, forcing sites to identify all of their users, harming both minors’ and adults’ ability to access information online. We oppose age verification requirements, which are the wrong approach to protecting young people online. No one should have to hand over their driver’s license, or, worse, provide biometric information, just to access lawful speech on websites.

A Closer Look at State Social Media Laws Passed in 2023

Utah enacted the first child social media regulation this year, S.B. 152, in March. The law prohibits social media companies from providing accounts to a Utah minor, unless they have the express consent of a parent or guardian. We requested that Utah’s governor veto the bill.

We identified at least four reasons to oppose the law, many of which apply to other states’ social media regulations. First, young people have a First Amendment right to information that the law infringes upon. With S.B. 152 in effect, the majority of young Utahns will find themselves effectively locked out of much of the web absent their parents’ permission. Second, the law dangerously requires parental surveillance of young people’s accounts, harming their privacy and free speech. Third, the law endangers the privacy of all Utah users, as it requires many sites to collect and analyze private information, like government-issued identification, for every user, to verify ages. And fourth, the law interferes with the broader public’s First Amendment right to receive information by requiring that all users in Utah tie their accounts to their age, and ultimately their identity, and it will lead to fewer people expressing themselves or seeking information online.

The law passed despite these problems, as did Utah’s H.B. 311, which creates liability for social media companies should they, in the view of Utah lawmakers, create services that are addictive to minors. H.B. 311 is unconstitutional because it imposes a vague and unscientific standard for what might constitute social media addiction, potentially creating liability for core features of a service, such as letting you know that someone responded to your post. Both S.B. 152 and H.B. 311 are scheduled to take effect in March 2024.

In April, Arkansas passed a law similar to Utah’s S.B. 152, requiring social media users to prove their age or obtain parental permission to create accounts. A federal court blocked the Arkansas law in September, ruling that the age-verification provisions violated the First Amendment because they burdened everyone's ability to access lawful speech online. EFF joined the ACLU in a friend-of-the-court brief arguing that the statute was unconstitutional.

Texas, in June, passed a regulation similar to the Arkansas law, which would ban anyone under 18 from having a social media account unless they receive consent from parents or guardians. The law is scheduled to take effect in September 2024.

Given the strong constitutional protections for people, including children, to access information without having to identify themselves, federal courts have blocked the laws in Arkansas and California. The Utah and Texas laws are likely to suffer the same fate. EFF has warned that such laws were bad policy and would not withstand court challenges, in large part because applying online regulations specifically to young people often forces sites to use age verification, which comes with a host of problems, legal and otherwise. 

To that end, we spent much of this year explaining to legislators that comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harms they do to children. For an even more detailed account of our suggestions, see Privacy First: A Better Way to Address Online Harms. In short, comprehensive data privacy legislation would address the massive collection and processing of personal data that is the root cause of many problems online, and it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

Of course, states were not alone in their attempts to regulate social media for young people. Our Year in Review post on similar federal legislation introduced this year covers that fight, which has so far been successful. Our post on the UK’s Online Safety Act describes the battle across the pond. 2024 is shaping up to be a year of court battles that may determine the future of young people’s ability to speak out and obtain information online. We’ll be there, continuing to fight against misguided laws that do little to protect kids while doing much to invade everyone’s privacy and speech rights.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Jason Kelley

Fighting European Threats to Encryption: 2023 Year in Review 

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. Yet throughout 2023, politicians across Europe attempted to undermine encryption, seeking to access and scan our private messages and pictures. 

But we pushed back in the EU, and so far, we’ve succeeded. EFF spent this year fighting hard against an EU proposal (text) that, if it became law, would have been a disaster for online privacy in the EU and throughout the world. In the name of fighting online child abuse, the European Commission, the EU’s executive body, put forward a draft bill that would allow EU authorities to compel online services to scan user data and check it against law enforcement databases. The proposal would have pressured online services to abandon end-to-end encryption. The Commission even suggested using AI to rifle through peoples’ text messages, leading some opponents to call the proposal “chat control.”

EFF has been opposed to this proposal since it was unveiled last year. We joined together with EU allies and urged people to sign the “Don’t Scan Me” petition. We lobbied EU lawmakers and urged them to protect their constituents’ human right to have a private conversation—backed up by strong encryption. 

Our message broke through. In November, a key EU committee adopted a position that bars mass scanning of messages and protects end-to-end encryption. It also bars mandatory age verification, which would have amounted to a mandate to show ID before you get online; age verification can erode a free and anonymous internet for both kids and adults. 

We’ll continue to monitor the EU proposal as attention shifts to the Council of the EU, the second decision-making body of the EU. Despite several Member States still supporting widespread surveillance of citizens, there are promising signs that such a measure won’t get majority support in the Council. 

Make no mistake—the hard-fought compromise in the European Parliament is a big victory for EFF and our supporters. The governments of the world should understand clearly: mass scanning of peoples’ messages is wrong, and at odds with human rights. 

A Wrong Turn in the U.K.

EFF also opposed the U.K.’s Online Safety Bill (OSB), which passed and became the Online Safety Act (OSA) this October, after more than four years on the British legislative agenda. The stated goal of the OSB was to make the U.K. the world’s “safest place” to use the internet, but the bill’s more than 260 pages actually outline a variety of ways to undermine our privacy and speech. 

The OSA requires platforms to take action to prevent individuals from encountering certain illegal content, which will likely mandate the use of intrusive scanning systems. Even worse, it empowers the British government, in certain situations, to demand that online platforms use government-approved software to scan for illegal content. The U.K. government said that content will only be scanned to check for specific categories of content. In one of the final OSB debates, a representative of the government noted that orders to scan user files “can be issued only where technically feasible,” as determined by the U.K. communications regulator, Ofcom. 

But as we’ve said many times, there is no middle ground on content scanning and no “safe backdoor” if the internet is to remain free and private. Either all content is scanned and all actors—including authoritarian governments and rogue criminals—have access, or no one does.

Despite our opposition, and that of the UK civil society groups we worked closely with, the bill passed in September with its anti-encryption measures intact. But the story doesn't end here. The OSA remains vague about what exactly it requires of platforms and users alike. Ofcom must now take the OSA and, over the coming year, draft regulations to operationalize the legislation.

The public understands better than ever that government efforts to “scan it all” will always undermine encryption and prevent us from having a safe and secure internet. EFF will monitor Ofcom’s drafting of the regulations, and we will continue to hold the UK government accountable to the international and European human rights agreements it has signed.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Christoph Schmon

First, Let’s Talk About Consumer Privacy: 2023 Year in Review

Whatever online harms you want to alleviate on the internet today, you can do it better—with a broader impact—if you enact strong consumer data privacy legislation first. That is a grounding principle that has informed much of EFF’s consumer protection work in 2023.

While consumer privacy will not solve every problem, it is superior to many other proposals that attempt to address issues like child mental health or foreign government surveillance. That is true for two reasons: well written consumer privacy laws address the root source of corporate surveillance, and they can withstand constitutional scrutiny.

EFF’s work on this issue includes: (1) advocating for strong comprehensive consumer data privacy laws; (2) fighting bad laws; (3) protecting existing sectoral privacy laws.

Advocating for Strong Comprehensive Consumer Data Privacy


This year, EFF released a report titled “Privacy First: A Better Way to Address Online Harms.” The report listed the key pillars of a strong privacy law (like a ban on online behavioral ads, and data minimization) and explained how these principles can help address current issues (like protecting children’s mental health or reproductive health privacy).

We highlighted why data privacy legislation is a form of civil rights legislation and why adtech surveillance often feeds government surveillance.

And we made the case that well-written privacy laws can be constitutional: when they regulate the commercial processing of personal data, when that personal data is private and not a matter of public concern, and when the law is tailored to the government’s interests in privacy, free expression, security, and guarding against discrimination.

Fighting Bad Laws Based in Censorship of Internet Users


We filed amicus briefs in lawsuits challenging laws in Arkansas and Texas that required internet users to submit to age verification before accessing certain online content. These challenges continue to make their way through the courts, but they have so far been successful. We plan to do the same in a case challenging California’s Age Appropriate Design Code, while cautioning the court not to cast doubt on important privacy principles.

We filed a similar amicus brief in a lawsuit challenging Montana’s TikTok ban, where a federal court recently ruled that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content.

Protecting Existing Sectoral Laws


EFF is also gearing up to file an amicus brief supporting the constitutionality of the federal law called the Video Privacy Protection Act, which limits how video providers can sell or share their users’ private viewing data with third-party companies or the government. While we think a comprehensive privacy law is best, we support strong existing sectoral laws that protect data like video watch history, biometrics, and broadband use records.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Mario Trujillo

Fighting For Your Digital Rights Across the Country: Year in Review 2023

EFF works every year to improve policy in ways that protect your digital rights in states across the country. Thanks to the messages of hundreds of EFF members across the country, we've spoken up for digital rights this year from Sacramento to Augusta.

Much of EFF's state legislative work has, historically, been in our home state of California—also often the most active state on digital civil liberties issues. This year, the Golden State passed several laws that strengthen consumer digital rights.

Two major laws we supported stand out in 2023. The first is S.B. 244, authored by California Sen. Susan Eggman, which makes it easier for individuals and independent repair shops to access the materials and parts needed to maintain electronics and appliances. That means Californians with a broken phone screen or a busted washing machine will have many more options for getting them fixed. Even though some electronics, such as video game consoles, are not included, the law still raises the bar for other right-to-repair bills.

S.B. 244 is one of the strongest right-to-repair laws in the country, doggedly championed by a group of advocates led by the California Public Interest Research Group, and we were proud to support it.

Another significant win comes with the signing of S.B. 362, also known as the CA Delete Act, authored by California Sen. Josh Becker. Privacy Rights Clearinghouse and Californians for Consumer Privacy led the fight on this bill, which builds on the state's landmark data privacy law and makes it easier for Californians to control their data through the state's data broker registry.

In addition to these wins, several other California bills we supported are now law. These include a measure that will broaden protections for immigration status data and one to facilitate better broadband access.

Health Privacy Is Data Privacy

States across the country continue to legislate at the intersection of digital privacy and reproductive rights. Both in California and beyond, EFF has worked with reproductive justice activists, medical practitioners, and other digital rights advocates to ensure that data from apps, electronic health records, law enforcement databases, and social media posts are not weaponized to prosecute those seeking or aiding those who seek reproductive or gender-affirming care. 

While some states are directly targeting those who seek this type of health care, other states are taking different approaches to strengthen protections. In California, EFF supported a bill that passed into law—A.B. 352, authored by CA Assemblymember Rebecca Bauer-Kahan—which extended the protections of California's health care data privacy law to apps such as period trackers. Washington, meanwhile, passed the "My Health, My Data Act"—H.B. 1155, authored by WA Rep. Vandana Slatter—that, among other protections, prohibits the collection of health data without consent. While EFF did not take a position on H.B. 1155, we do applaud the law's opt-in consent provisions and encourage other states to consider similar bills.

Consumer Privacy Bills Could Be Stronger

Since California passed the California Consumer Privacy Act in 2018, several states have passed their own versions of consumer privacy legislation. Unfortunately, many of these laws have been more consumer-hostile and business-friendly than EFF would like to see. In 2023, eight states—Delaware, Florida, Indiana, Iowa, Montana, Oregon, Tennessee, and Texas—passed their own versions of broad consumer privacy bills.

EFF did not support any of these laws, many of which can trace their lineage to a weak Virginia law we opposed in 2021. Yet not all of them are equally bad.

For example, while EFF could not support the Oregon bill after a legislative deal stripped it of its private right of action, the law is a strong starting point for privacy legislation moving forward. It has its flaws, but it is unique among state privacy laws in requiring businesses to share the names of the actual third parties that have your information, rather than simply the categories of companies. So, instead of knowing a "data broker" has your information and hitting a dead end in following your own data trail, you can know exactly where to file your next request. EFF participated in a years-long process to bring that bill together, and we thank the Oregon Attorney General's office for their work to keep it as strong as it is.

EFF also wants to give plaudits to Montana for another bill—a strong genetic privacy bill passed this year. The bill is a good starting point for other states, and shows Montana is thinking critically about how to protect people from overbroad data collection and surveillance.

Of course, one post can't capture all the work we did in states this year. In particular, the curious should read our Year in Review post specifically focused on children’s privacy, speech, and censorship bills introduced in states this year. But EFF was able to move the ball forward on several issues this year—and will continue to fight for your digital rights in statehouses from coast to coast.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Hayley Tsukayama

In the Trenches of Broadband Policy: 2023 Year In Review

EFF has long advocated for affordable, accessible, future-proof internet access for all. Nearly 80% of Americans already consider internet access to be as essential as water and electricity, so as our work, health services, education, entertainment, social lives, etc. increasingly have an online component, we cannot accept a future where the quality of your internet access—and so the quality of your connection to these crucial facets of your life—is determined by geographic, socioeconomic, or otherwise divided lines. 

Lawmakers recognized this during the pandemic and set in motion once-in-a-generation opportunities to build the future-proof fiber infrastructure needed to close the digital divide once and for all.

As we exit the pandemic, however, that dedication is wavering. Monopolistic internet service providers (ISPs), with business models that created the digital divide in the first place, are doing everything they can to maintain control over the broadband market—including stopping the construction of any infrastructure they do not control. Further, while some government agencies are continuing to make rules to advance equitable and competitive access to broadband, others have not. Regardless, EFF will continue to fight for the vision we’ve long advocated.

New York City Abandons Revolutionary Fiber Plan 

This year, New York City Mayor Eric Adams turned his back on the future of broadband accessibility for New Yorkers.

In 2020, then-Mayor Bill de Blasio unveiled New York City’s Internet Master Plan to deliver broadband to low-income New Yorkers by investing in public fiber infrastructure. Public fiber infrastructure would have been an investment in New York City’s future, a long-term solution to permanently bridge the digital divide and bring affordable, accessible, future-proof service to New Yorkers for generations to come. This kind of public infrastructure, especially if provisioned on an open and affordable basis, dramatically lowers barriers to entry, which in turn creates competition, lower prices, and better customer service in the market as a whole.

Mayor Eric Adams not only abandoned this plan, but subsequently introduced a three-year, $90 million subsidy plan called Big Apple Connect. Instead of building physical infrastructure to bridge the digital divide for decades to come, New York City will now subsidize the city’s oligopolist ISPs, Charter Spectrum and Altice, to continue business as usual. This does nothing to address the needs of underinvested communities whose legacy networks physically cannot handle a fast connection. All it does is put taxpayer dollars into corporate pockets instead of into infrastructure that actually serves the people.

The Adams administration even asked a cooperatively run, community-based ISP that had been part of the Internet Master Plan and had already installed fiber infrastructure to dismantle its network so that the city could further contract with the big ISPs.

California Wavers On Its Commitments

New York City is not the only place where public commitment to bridging the digital divide has wavered.

In 2021, California invested nearly $7 billion to bring affordable fiber infrastructure to all Californians. As part of this process California’s Department of Technology was meant to build 10,000 miles of middle-mile fiber infrastructure, the physical foundation through which community-level last mile connections would be built to serve underserved communities for decades to come.

Unfortunately, in August the Department of Technology not only reduced the number of miles to be built but also cut off entire communities that had traditionally been underserved. Despite fierce community pushback, the Department of Technology stuck to their revised plans and awarded contracts accordingly.

Governor Newsom has promised to restore the lost miles in 2024, a promise EFF and California community groups intend to hold him to, but the fact remains that the cuts should not have been made the way they were.

FCC Rules on Digital Discrimination and Rulemaking on Net Neutrality

On the federal level, the Federal Communications Commission finally received its fifth commissioner, Anna Gomez, in September of this year, allowing it to begin its rulemaking on net neutrality and to promulgate rules on digital discrimination. We submitted comments in the net neutrality proceeding, advocating for a return to light-touch, targeted, and enforceable net neutrality protections for the whole country.

On digital discrimination, EFF applauds the Commission for adopting a disparate treatment as well as a disparate impact standard. Companies can now be found liable for digital discrimination not only when they intentionally treat communities differently, but also when the impact of their decisions—regardless of intent—affects communities differently. Further, for the first time the Commission recognized the link between historic redlining in housing and digital discrimination, making the connection between the historic underinvestment in lower-income communities of color and the continued underinvestment by monopolistic ISPs.

Next year will bring more fights around broadband implementation. The questions will be who gets funding, whether and where infrastructure gets built, and whether long-neglected communities will finally be heard and brought into the 21st century, or left behind by public neglect or private greed. The path to affordable, accessible, future-proof internet for all will require the political will to invest in physical infrastructure and to hold incumbents to nondiscrimination rules that preserve speech and competition online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Chao Liu

Protecting Students from Faulty Software and Legislation: 2023 Year in Review

3 months ago

Lawmakers, school districts, educational technology companies, and others keep rolling out legislation and software that threaten students’ privacy, free speech, and access to social media, all in the name of “protecting” children. At EFF, we fought back against this overreach and demanded accountability and transparency.

Bad bills and invasive monitoring systems, though sometimes well-meaning, hurt students rather than protect them from the perceived dangers of the internet and social media. We saw many efforts to bar young people and students from digital spaces, censor what they are allowed to see and share online, and monitor and control when and how they can do it. This makes it increasingly difficult for them to access information about everything from gun violence and drug abuse to politics and LGBTQ+ topics, all because some software or elected official considers these topics “harmful.”

In response, we doubled down on exposing faulty surveillance software, long a problem in many schools across the country. We launched a new project called the Red Flag Machine, an interactive quiz and report demonstrating the absurd inefficiency—and potential dangers—of student surveillance software that schools across the country use and that routinely invades the privacy of millions of children.

The project grew out of our investigation of GoGuardian, computer monitoring software used in about 11,500 schools to surveil about 27 million students—mostly in middle and high school—according to the company. The software allows school officials and teachers to monitor students’ computers and devices, talk to them via chat or webcam, block sites considered “offensive,” and get alerts when students access content that the software, or the school, deems harmful or explicit.

Our investigation showed that the software inaccurately flags massive amounts of useful material. It flagged sites about Black authors and artists, the Holocaust, and the LGBTQ+ rights movement. It flagged the official Marine Corps’ fitness guide and the bios of the cast of Shark Tank. Bible.com was flagged because the text of Genesis 3 contains the word “naked.” We found thousands more examples of mis-flagged sites.

EFF built the Red Flag Machine to expose the ludicrous results of GoGuardian’s flagging algorithm. In addition to reading our research about the software, you can take a quiz that presents websites flagged by the software and guess which of five possible words triggered the flag. The results would be funny if they were not so potentially harmful.
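To see why this kind of flagging goes wrong so often, consider a minimal sketch of a context-blind keyword scanner. This is our own illustration, not GoGuardian’s actual code, and the blocklist here is hypothetical; a scanner like this flags any page containing a blocklisted word, no matter what the page is actually about:

```python
# An illustrative sketch of naive keyword flagging -- not GoGuardian's
# actual algorithm. Context-blind matching like this produces the kind
# of absurd false positives the Red Flag Machine documents.

FLAGGED_WORDS = {"naked", "explicit", "drugs"}  # hypothetical blocklist

def flag_page(page_text: str) -> list[str]:
    """Return any blocklisted words found in the page, ignoring context."""
    words = {word.strip('.,;:"\'()').lower() for word in page_text.split()}
    return sorted(words & FLAGGED_WORDS)

# Genesis 3 contains the word "naked," so a Bible site gets flagged
# right alongside genuinely explicit content:
genesis_3 = "And they knew that they were naked; and they sewed fig leaves together."
print(flag_page(genesis_3))  # ['naked']
```

Any history, health, or literature page that mentions a blocklisted word in passing trips the same wire, which is exactly the failure mode our investigation found.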

Congress Takes Aim At Students and Young People

Meanwhile, Congress this year resurrected the Kids Online Safety Act (KOSA), a bill that would increase surveillance and restrict access to information in the name of protecting children online—including students. KOSA would give power to state attorneys general to decide what content on many popular online platforms is dangerous for young people, and would enable censorship and surveillance. Sites would likely be required to block important educational content, often made by young people themselves, about how to deal with anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal thoughts. We urged Congress to reject this bill and encouraged people to tell their senators and representative that KOSA will censor the internet but not help kids. 

We also called out the brazen Eyes on the Board Act, which aims to end social media use entirely in schools. This heavy-handed bill would cut some federal funding to any school that doesn’t block all social media platforms. We can understand the desire to ensure students focus on schoolwork in class, but this bill tells teachers and school officials how to do their jobs and imposes unnecessary censorship.

Many schools already don’t allow device use in the classroom and block social media sites and other content on school-issued devices. Too much social media is not a problem that teachers and administrators need the government to correct—they already have the tools and know-how to address it.

Unfortunately, we’ve seen a slew of state bills that also seek to control what students and young people can access online. There are bills in Texas, Utah, Arkansas, Florida, and Montana, to name just a few, and keeping up with all this bad legislation is like a game of whack-a-mole.

Finally, teachers and school administrators are grappling with whether generative AI use should be allowed, and whether they should deploy detection tools to find students who have used it. We think the answer to both is no. AI detection tools are very inaccurate and carry significant risks of falsely flagging students for plagiarism. And AI use is growing exponentially and will likely have a significant impact on students’ lives and futures. Students should be learning about and exploring generative AI now to understand its benefits and flaws. Demonizing it only deprives them of knowledge about a technology that may change the world around us.

We’ll continue to fight student surveillance and censorship, and we are heartened to see students fighting back against efforts to supposedly protect children that actually give government control over who gets to see what content. It has never been more important for young people to defend our democracy and we’re excited to be joining with them. 

If you’re interested in learning more about protecting your privacy at school, take a look at our Surveillance Self-Defense guide on privacy for students.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Karen Gullo

Kids Online Safety Shouldn’t Require Massive Online Censorship and Surveillance: 2023 Year in Review

3 months ago

There’s been plenty of bad news regarding federal legislation in 2023. For starters, Congress has failed to pass meaningful comprehensive data privacy reforms. Instead, legislators have spent an enormous amount of energy pushing dangerous legislation that’s intended to limit young people’s use of some of the most popular sites and apps, all under the guise of protecting kids. Unfortunately, many of these bills would run roughshod over the rights of young people and adults in the process. We spent much of the year fighting these dangerous “child safety” bills, while also pointing out to legislators that comprehensive data privacy legislation would be more likely to pass constitutional muster and address many of the issues that these child safety bills focus on. 

But there’s also good news: so far, none of these dangerous bills have been passed at the federal level, or signed into law. That's thanks to a large coalition of digital rights groups and other organizations pushing back, as well as tens of thousands of individuals demanding protections for online rights in the many bills put forward.

Kids Online Safety Act Returns

The biggest danger has come from the Kids Online Safety Act (KOSA). Originally introduced in 2022, it was reintroduced this year and amended several times, and as of today, it has 46 co-sponsors in the Senate. As soon as it was reintroduced, we fought back, because KOSA is fundamentally a censorship bill. The heart of the bill is a “Duty of Care” that the government would force on a huge number of websites, apps, social networks, messaging forums, and online video games. KOSA would compel even the smallest online forums to take action against content that politicians believe will cause minors “anxiety” or “depression,” or encourage substance abuse, among other harms. Of course, almost any content could easily fit into these categories—in particular, truthful news about what’s going on in the world, including wars, gun violence, and climate change. Kids don’t need to fall into a wormhole of internet content to get anxious; they could see a newspaper on the breakfast table.

KOSA would empower every state’s attorney general, as well as the Federal Trade Commission (FTC), to file lawsuits against websites or apps that the government believes are failing to “prevent or mitigate” the list of bad things that could influence kids online. Platforms affected by KOSA would likely find it impossible to filter out this type of “harmful” content, though they would surely try. Online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues would have to beg minors to leave and institute age verification tools to ensure that they do. Age verification systems are surveillance systems that threaten everyone’s privacy. Mandatory age verification, and with it, mandatory identity verification, is the wrong approach to protecting young people online.

The Senate passed amendments to KOSA later in the year, but these do not resolve its problems. For example, liability under the law was shifted to be triggered only by content that online services recommend to users under 18, rather than content that minors specifically search for. In practice, that means platforms could not proactively show young users content that could be “harmful,” but could present that same content when minors seek it out. How this would play out is unclear; search results are recommendations, and future recommendations are shaped by previous searches. But however it’s interpreted, it’s still censorship—and it fundamentally misunderstands how search works online. Ultimately, no amendment will change the basic fact that KOSA’s duty of care turns what is meant to be a bill about child safety into a censorship bill that will harm the rights of both adult and minor users.

Fortunately, so many people oppose KOSA that it never made it to the Senate floor for a full vote. In fact, even many of the young people it is intended to help are vehemently against it. We will continue to oppose it in the new year, and we urge you to contact your congressperson about it today.

Most KOSA Alternatives Aren’t Much Better

KOSA wasn’t the only child safety bill Congress put forward this year. The Protecting Kids on Social Media Act would combine some of the worst elements of other social media bills aimed at “protecting the children” into a single law. It includes elements of KOSA as well as several ideas pulled from state bills that passed this year, such as Utah’s surveillance-heavy Social Media Regulations law.

When originally introduced, the Protecting Kids on Social Media Act had five major components: 

  • A mandate that social media companies verify the ages of all account holders, including adults 
  • A ban on children under age 13 using social media at all
  • A mandate that social media companies obtain parent or guardian consent before minors over 12 years old and under 18 years old may use social media
  • A ban on the data of minors (anyone over 12 years old and under 18 years old) being used to inform a social media platform’s content recommendation algorithm
  • The creation of a digital ID pilot program, instituted by the Department of Commerce, for citizens and legal residents, to verify ages and parent/guardian-minor relationships

EFF is opposed to all of these components, and has written extensively about why age verification mandates and parental consent requirements are generally dangerous and likely unconstitutional. 

In response to criticisms, senators updated the bill to remove some of the most flagrantly unconstitutional provisions: it no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media.  

Still, it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition. It would still prohibit children under 13 from using any ad-based social media, despite the vast majority of content on social media being lawful speech fully protected by the First Amendment. If enacted, the bill would likely suffer a similar fate to a California law aimed at restricting minors’ access to violent video games, which was struck down in 2011 for violating the First Amendment.

What’s Next

One silver lining to this fight is that it has activated young people. The threat of KOSA, as well as several similar state-level bills that did pass, has made it clear that young people may be the biggest target for online censorship and surveillance, but they are also a strong weapon against both.

The authors of these bills have laudable intentions. But laws that would force platforms to determine the age of their users are privacy-invasive, and laws that restrict speech—even if only for those who can’t prove they are above a certain age—are censorship laws. We expect that KOSA, at least, will return in one form or another. We will be ready when it does.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Jason Kelley

The Atlas of Surveillance Hits Major Milestones: 2023 in Review

3 months ago

"The EFF are relentless."

That's what a New York Police Department lieutenant wrote on LinkedIn after someone sent him a link to the Atlas of Surveillance, EFF's moonshot effort to document which U.S. law enforcement agencies are using which technologies, including drones, automated license plate readers and face recognition. Of course, the lieutenant then went on to attack us with unsubstantiated accusations of misinformation — but we take it all as a compliment.

If you haven't checked out the Atlas of Surveillance recently, or ever before, you absolutely should. It includes a searchable database and an interactive map, and anyone can download the data for their own projects. As this collaboration with the University of Nevada, Reno's Reynolds School of Journalism (RSJ) finishes its fifth year, we are proud to announce that we've hit a major milestone: more than 12,000 data points documenting the use of police surveillance nationwide, all collected using open-source investigative techniques, data journalism, and public records requests.
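Because the dataset is freely downloadable, anyone can slice it for their own reporting or research. Here is a minimal sketch of what that might look like; the file name and the "State" and "Technology" column names are assumptions for illustration, so check the headers of the actual export before running it:

```python
# A minimal sketch of exploring an Atlas of Surveillance CSV export.
# The file name and the "State"/"Technology" column names are assumed
# for illustration; match them to the real export's headers.
import csv
from collections import Counter

def technologies_by_state(csv_path: str, state: str) -> Counter:
    """Count how often each surveillance technology appears in one state."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("State") == state:
                counts[row.get("Technology", "Unknown")] += 1
    return counts

if __name__ == "__main__":
    ca_counts = technologies_by_state("atlas-of-surveillance.csv", "CA")
    for tech, n in ca_counts.most_common(10):
        print(f"{tech}: {n}")
```

The same few lines, pointed at a different column, can answer questions like which counties have drone programs or how widely face recognition has spread.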

We’ve come a long way since the Atlas of Surveillance launched as a pilot project with RSJ back in the spring semester of 2019. By that summer, with the help of a few dozen journalism students, we had accumulated 250 data points, focused on the 23 counties along the U.S.-Mexico border. When we launched the formal website in 2020, we had collected a little more than 5,500 data points. Today's dataset represents more than a 100% increase since then.

That isn't the only major milestone we accomplished this year. To collect data for the project, EFF and RSJ designed a tool called Report Back, which allows us to distribute micro-research assignments (about 10-20 minutes each) to students in our classes. This winter, the 3,000th assignment was completed using Report Back.

This year we also dug into one particular technology. As part of our Atlas efforts, we began to see Fusus—a company working to bring real-time surveillance to local police departments via camera registries and real-time crime centers—appear more frequently as a tool used by law enforcement. In collaboration with the Thomson Reuters Foundation, we decided to do a deeper dive into the adoption of Fusus, and the Atlas has served as a resource for other reporters working to investigate this company in their own towns and across the country.

We’re proud to have built the Atlas because it’s meant to be a tool for the public, and we're excited to see more and more people discovering it. This year, we clocked about 250,000 pageviews, more than double what we've seen in previous years. This tells us not only that more people care about police surveillance than ever before, but also that we're better able to inform them about what's happening in their communities. The top 20 jurisdictions with the most traffic include:

  1. Phoenix, Ariz.
  2. Chicago, Ill.
  3. Los Angeles, Calif.
  4. Atlanta, Ga.
  5. New York City, N.Y.
  6. Austin, Texas
  7. Houston, Texas
  8. San Antonio, Texas
  9. Seattle, Wash.
  10. Columbus, Ohio  
  11. Las Vegas, Nev.
  12. Dallas, Texas
  13. Philadelphia, Penn.
  14. Denver, Colo. 
  15. Tampa, Fla.
  16. West Bloomfield, Mich.
  17. Portland, Ore.
  18. San Diego, Calif.
  19. Nashville, Tenn.
  20. Pittsburgh, Penn. 

One of the primary goals of the Atlas of Surveillance project is to reach journalists, academics, activists, and policymakers, so they can use our data to better inform their research. In this sense, 2023 was a huge success. Here are some of our favorite projects that used Atlas of Surveillance data this year:

  • Social justice advocates were trained on how to use the Atlas of Surveillance in a workshop titled "Data Brokers & Modern Surveillance: Dangers for Marginalized People" at an annual Friends (Quakers) conference. 
  • A team of master’s students at the University of Amsterdam built a website called "Beyond the Lens" that analyzes the police surveillance industry using primary data from the Atlas of Surveillance. 
  • The Markup combined Atlas data with census data, crime data, and emails obtained through the California Public Records Act to investigate the Los Angeles Police Department's relationship with Ring, Amazon's home video surveillance subsidiary. 

The Atlas has also been cited in government proceedings and court briefs, and it made appearances in many academic and legal scholarship publications in 2023. Meanwhile, print, radio, and television journalists continue to turn to the Atlas as a resource, either to build stories about police surveillance or to provide context. And activists, advocates, and concerned citizens around the nation have used the Atlas of Surveillance to support their actions against the expansion of surveillance.

These victories wouldn't be possible without the students at RSJ, especially our 2023 interns Haley Ekberg, Kieran Dazzo, Dez Peltzer, and Colin Brandes. We also owe thanks to lecturers Paro Pain, Ran Duan, Jim Scripps, and Patrick File for sharing their classrooms with us.

In 2024, EFF will expand the Atlas to capture more technologies used by law enforcement agencies. We are also planning new features, functions, and fixes that will let users better browse and analyze the data. And of course, keep an eye out in the new year for new workshops, talks, and other opportunities to learn more and get involved with the project.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Dave Maass

International Threats to Freedom of Expression: 2023 Year in Review

3 months ago

2023 has been an unfortunate reminder that the right to free expression is most fragile for groups on the margins, and that it can quickly become a casualty during global conflicts. Threats to speech arose out of the ongoing war in Palestine. They surfaced in bills and laws around the world that explicitly restrict LGBTQ+ freedom of expression and privacy. And past threats—and acts—were ignored by the United Nations when the UN Secretary-General announced that Saudi Arabia would be granted host status for the 2024 Internet Governance Forum (IGF).

LGBTQ+ Rights

Globally, an increase in anti-LGBTQ+ intolerance is impacting individuals and communities both online and off. The digital rights community has observed an uptick in censorship of LGBTQ+ websites as well as troubling attempts by several countries to pass explicitly anti-LGBTQ+ bills restricting freedom of expression and privacy—bills that also fuel offline intolerance against LGBTQ+ people, and force LGBTQ+ individuals to self-censor their online expression to avoid being profiled, harassed, doxxed, or criminally prosecuted. 

One prominent example is Ghana's draconian 'Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill, 2021.' This year, EFF and other civil society partners continued to call on the government of Ghana to immediately reject the bill and commit instead to protecting the human rights of all people in Ghana.

To learn more about this issue, read our 2023 Year in Review post on threats to LGBTQ+ speech.

Free Expression in Times of Conflict

The war in Palestine has exacerbated the existing threats to free expression that Palestinians face, particularly those living in Gaza. Most acutely, the Israeli government began targeting telecommunications infrastructure early in the war, inhibiting Palestinians’ ability to share information and access critical services. At the same time, platforms have failed to moderate misinformation (while overmoderating other content), which—at a time when many Palestinians can’t access the internet—has created an imbalance in information and media coverage.

EFF teamed up with a number of other digital rights organizations—including 7amleh, Access Now, Amnesty International, and Article 19—to demand that Meta take steps to ensure Palestinian content is moderated fairly. This effort follows the 2021 campaign of the same name.

The 2024 Internet Governance Forum

Digital rights organizations were shocked to learn in October that the 2024 Internet Governance Forum is slated to be held in Saudi Arabia. Following the announcement, we joined numerous digital rights organizations in calling on the United Nations to reverse its decision.

EFF has, for many years, expressed concern about the normalization of the government of Saudi Arabia by Silicon Valley companies and the global community. In recent years, the Saudi government has spied on its own citizens on social media and through the use of spyware; imprisoned Wikipedia volunteers for their contributions to access to information on the platform; sentenced a PhD student and mother of two to 34 years in prison and a subsequent travel ban of the same length; and sentenced a teacher to death for his posts on social media.

The UK Threatens Expression

We have been disheartened this year to see the UK's push to pass its Online Safety Bill. EFF has long opposed the legislation, and throughout 2023 we stressed that its mandated scanning obligations will lead to censorship of lawful and valuable expression. The Online Safety Bill also threatens another basic human right: our right to have a private conversation. From our point of view, the UK pushed the Bill through fully aware of the damage it would cause.

Despite opposition from EFF and the UK civil society groups we worked closely with, the bill passed in September. But the story doesn't end here. The Online Safety Act remains vague about what exactly it requires of platforms and users alike, and Ofcom must now draft regulations to operationalize the legislation. EFF will monitor Ofcom’s drafting of those regulations, and we will continue to hold the UK government accountable to the international and European human rights protections to which it is a signatory.

New Hope for Alaa Abd El Fattah Case

While 2023 has overall been a disappointing year for free expression, there is always hope, and for us this has come in the form of renewed efforts to free our friend and EFF Award winner, Alaa Abd El Fattah.

This year, on Alaa’s 42nd birthday (and his tenth in prison), his family filed a new petition to the UN Working Group on Arbitrary Detention in the hopes of finally securing his release. This latest appeal comes after Alaa spent more than half of 2022 on a hunger strike in protest of his treatment in prison, which he started on the first day of Ramadan. A few days after the strike began, on April 11, Alaa’s family announced that he had become a British citizen through his mother. There was hope last year, following a groundswell of protests that began in the summer and extended to the COP27 conference, that the UK foreign secretary could secure his release, but so far, this has not happened. Alaa's hunger strike did result in improved prison conditions and family visitation rights, but only after it prompted protests and fifteen Nobel Prize laureates demanded his release.

This holiday season, we are hoping that Alaa can finally be reunited with his family.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Jillian C. York

Equitable Access to the Law Got Stronger: 2023 Year in Review

3 months ago

It seems like a no-brainer that everyone should be able to read, copy, and share the laws we all must follow, but few things are simple in the internet age. Public.Resource.Org’s victory at the D.C. Circuit appeals court in September, in which the court ruled that non-commercial copying of codes and standards that have been incorporated into the law is not copyright infringement, was ten years in the making.

The American Society for Testing and Materials (ASTM), National Fire Protection Association Inc. (NFPA), and American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) are non-governmental organizations that develop codes and standards for building and product safety, compatibility, and innovation. Regulators at all levels of government frequently incorporate these codes and standards into regulations, making them law. Public Resource, a nonprofit organization founded by Carl Malamud, collects and posts these laws online as part of its mission to make government more accessible to all. ASTM, NFPA, and ASHRAE sued Public Resource in 2013 for copyright and trademark infringement and unfair competition.

A federal trial court in Washington, D.C. initially ruled against Public Resource, and a three-judge panel of the D.C. Circuit then returned the case to the trial court for more fact-finding. This year, another panel of the D.C. Circuit found that Public Resource’s use of the standards is for nonprofit, educational purposes, that this use serves a different purpose than that of the plaintiffs, and that the evidence did not show significant harm to the standards organizations’ commercial markets. “Public Resource posts standards that government agencies have incorporated into law—no more and no less,” the court ruled. “If an agency has given legal effect to an entire standard, then its entire reproduction is reasonable in relation to the purpose of the copying, which is to provide the public with a free and comprehensive repository of the law.” Posting these codes online is therefore a fair use.

The decision also preserved equitable online access to the law. While the standards organizations put some of their standards into online “reading rooms,” the text “is not searchable, cannot be printed or downloaded, and cannot be magnified without becoming blurry. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text.” These reading rooms collect information about people who come to read the law and present access challenges for people who use screen reader software and other accessibility tools. The court recognized that Public Resource had stepped in to address this problem.

The internet lets more people understand and participate in government than ever before. It also enables new ways for powerful organizations to control and surveil people who simply want to read and share the law. That’s why Public Resource’s work, and a balanced copyright law that protects access to the law and participation in government, is so important.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Related Cases: Freeing the Law with Public.Resource.Org
Mitch Stoltz

Just a Little Does a Whole Lot

3 months ago

You might’ve heard that most of EFF’s funding comes from regular people’s modest donations—we’re proud of that. But did you know that EFF members who donate $10 or less each month raised over $400,000 for digital rights this year?

That covers multiple staff members who work in the courts, run advocacy campaigns, and build privacy-enhancing free tech. It might seem like a small gift from you, but it's a huge help to EFF. Truly, just a little does a whole lot. You can be a Sustaining Donor for as little as five bucks a month and, right now, you’ll help EFF get a Year-End Challenge boost!

Join EFF

Become a Sustaining Donor

Start an automatic monthly or annual gift by December 31 and help unlock bonus grants! Every supporter brings EFF closer to a series of seven Year-End Challenge grants set by EFF’s board. These grants grow larger as the number of online rights supporters grows. See the counter.

But Wait! There’s More...

Protecting digital privacy and free speech is its own reward. The Year-End Challenge bonus grants make it even better. But here’s one more reason to start a small monthly donation: choice EFF member gear every year.

EFF’s new Watching the Watchers t-shirt is now available with the $5/month Copper Level Membership! There’s an array of fun conversation-starting member perks so you can show your support.

Your freedom is EFF’s guiding star. Help us keep advancing toward a future that supports the rights of tech users everywhere. Start a convenient monthly or annual recurring donation to support digital rights and you’ll help EFF unlock bonus grants before the year ends!

Aaron Jue

The U.S. Supreme Court’s Busy Year of Free Speech and Tech Cases: 2023 Year in Review

3 months ago

The U.S. Supreme Court has taken an unusually active interest in internet free speech issues. EFF participated as amicus in a whopping nine cases before the court this year. The court decided four of those cases, and decisions in the remaining five cases will be published in 2024.   

The results of the four cases decided this year are a mixed bag. The court showed restraint and respect for free speech rights when considering whether social media platforms should be liable for ISIS content, while also avoiding gutting one of the key laws supporting free speech online. The court also heightened protections for speech that may rise to the level of criminal “true threats.” But the court declined to overturn an overbroad law restricting speech about immigration.

Next year, we’re hopeful that the court will uphold the right of individuals to comment on government officials’ social media pages, when those pages are largely used for governmental purposes and even when the officials don’t like what those comments say; and that the court will strike down government overreach in mandating what content must stay up or come down online, or otherwise distorting social media editorial decisions. 

Platform Liability for Violent Extremist Content 

Cases: Gonzalez v. Google and Twitter v. Taamneh – DECIDED 

The court, in two similar cases, declined to hold social media companies—YouTube and Twitter—responsible for aiding and abetting terrorist violence allegedly caused by user-generated content posted to the platforms. The case against YouTube (Google) was particularly concerning because the plaintiffs had asked the court to narrow the scope of Section 230 when internet intermediaries recommend third-party content. As we’ve said for decades, Section 230 is one of the most important laws protecting internet users’ speech. We argued in our brief that any narrowing of Section 230, the law that generally protects users and online services from lawsuits based on content created by others, would lead to increased censorship and a degraded online experience for users, as would holding platforms responsible for aiding and abetting acts of terrorism. Thankfully, the court declined to address the scope of Section 230 and held that the online platforms may not generally be held liable under the Anti-Terrorism Act.

True Threats Online 

Case: Counterman v. Colorado – DECIDED 

The court considered what state of mind a speaker must have to lose First Amendment protection and be liable for uttering “true threats,” in a case involving Facebook messages that led to the defendant’s conviction. The issue before the court was whether any time the government seeks to prosecute someone for threatening violence against another person, it must prove that the speaker had some subjective intent to threaten the victim, or whether the government need only prove, objectively, that a reasonable person would have known that their speech would be perceived as a threat. We urged the court to require some level of subjective intent to threaten before an individual’s speech can be considered a "true threat" not protected by the First Amendment. In our highly digitized society, online speech like posts, messages, and emails, can be taken out of context, repackaged in ways that distort or completely lose their meaning, and spread far beyond the intended recipients. This higher standard is thus needed to protect speech such as humor, art, misunderstandings, satire, and misrepresentations. The court largely agreed and held that subjective understanding by the defendant is required: that, at minimum, the speaker was in fact subjectively aware of the serious risk that the recipient of the statements would regard their speech as a threat, but recklessly made them anyway.  

Encouraging Illegal Immigration  

Case: U.S. v. Hansen - DECIDED  

The court upheld the Encouragement Provision that makes it a federal crime to “encourage or induce” an undocumented immigrant to “reside” in the United States, if one knows that such “coming to, entry, or residence” in the U.S. will be in violation of the law. We urged the court to uphold the Ninth Circuit’s ruling, which found that the language is unconstitutionally overbroad under the First Amendment because it threatens an enormous amount of protected online speech. This includes prohibiting, for example, encouraging an undocumented immigrant to take shelter during a natural disaster, advising an undocumented immigrant about available social services, or even providing noncitizens with Know Your Rights resources or certain other forms of legal advice. Although the court declined to hold the law unconstitutional, it sharply narrowed the law’s impact on free speech, ruling that the Encouragement Provision applies only to the intentional solicitation or facilitation of immigration law violations. 

Public Officials Censoring Social Media Comments 

Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – PENDING 

The court is considering a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The First Amendment generally prohibits viewpoint-based discrimination in government forums open to speech by members of the public. The threshold question in these cases is what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions. We argued that the court should establish a functional test that looks at how an account is actually used. It is important that the court make clear once and for all that public officials using social media in furtherance of their official duties can’t sidestep their First Amendment obligations because they’re using nominally “personal” or preexisting campaign accounts. 

Government Mandates for Platforms to Carry Certain Online Speech 

Cases: NetChoice v. Paxton and Moody v. NetChoice - PENDING 

The court will hear arguments this spring about whether laws in Florida and Texas violate the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. Although the state laws differ in how they operate and the type of mandates they impose, each law represents a profound intrusion into social media sites’ ability to decide for themselves what speech they will publish and how they will present it to users. As we argued in urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs. To be sure, internet users are rightly frustrated with social media services’ content moderation practices, which are often perplexing and mistaken. But permitting Florida and Texas to deploy their coercive power in response to those frustrations raises significant First Amendment and human rights concerns.

Government Coercion in Content Moderation 

Case: Murthy v. Missouri – PENDING 

Last, but certainly not least, the court is considering the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech. But the court has not previously applied this principle to government communications with social media sites about user posts. We urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and circumstances where it is impermissible. We also urged the court to make clear that courts reviewing claims of impermissible government involvement in content moderation are obligated to conduct fact- and context-specific inquiries. And we argued that close cases should go against the government, as it is best positioned to ensure that its involvement in platforms’ policy enforcement decisions remains permissible.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Related Cases: United States v. Helaman Hansen; Counterman v. Colorado
Sophia Cope

EFF Membership: 2023 Year In Review

3 months ago

Throughout the many years that EFF has been around, our goal has remained consistent: creating a future where you have your rights when you go online, and one where they are enhanced by new technologies. But our goal isn't the only part of EFF that has remained consistent: for decades, the digital freedom supporters that lift our organization have been stalwart allies that help ensure we can continue fighting for privacy, free expression, and innovation online.

It's because of these supporters that we can fight tough battles, but also take the time to celebrate our accomplishments and come together as a community. And we did celebrate and have fun together this year!

Give a Year-End Donation

Unlock bonus grants in 2023!

We started off with our 7th annual Tech Trivia Night and 15th annual Cyberlaw Trivia Night—both filled with delicious food, drinks, and, of course, a myriad of trivia questions to test contestants' tech and internet culture know-how. Both events boasted a full house, with Cyberlaw Trivia even selling out. We of course had some very snazzy judges—including our Cybertiger for Tech Trivia!

EFF's Cybertiger, Cooper Quintin, and our judges for Tech Trivia.

We didn't want to stop there, so next up we hosted our Spring Members' Speakeasy in Oakland. Attendees were invited to a super-secret location for drinks and a chance to meet EFF staff and other like-minded digital freedom supporters. Members even got a behind-the-scenes look into how we fight for encryption and oppose bad bills from Congress with EFF's Director of Federal Affairs India McKinney.

Of course, we got to see EFF supporters in full force when we headed to Las Vegas for the summer security conferences—BSides Las Vegas, Black Hat USA, and DEF CON. That week of extraordinary conferences is always one of the busiest times of the year for EFF, and this year was no exception. Throughout the week, more than one thousand people either started or renewed their EFF membership(!), raising enough money to fund a lawyer for a full year, and then some. It was great to see such strong support, and a bunch of attendees walking around in new (and vintage) EFF swag!

You’d think we’d take a break and slow down after that hectic week in Las Vegas. But you’d be mistaken, because just a month later we celebrated our second EFF Awards in San Francisco! This is always our most ambitious event, involving a ton of EFF staff, digital freedom supporters, past award winners, and, of course, our new EFF Award winners. We even recorded the event—give it a watch if you couldn’t make it. The EFF Awards are a great opportunity for EFF to catch up with the community and shine a light on others doing great work in this space.

EFF staff at the second EFF Awards Ceremony

To cap off the year, we wanted to highlight how widespread and diverse our membership base is. So we held our fall Members' Speakeasy online, where attendees from around the world got a chance to hear about some of the work EFF is doing in the U.S. and in Europe on various "child safety" laws that would threaten privacy and encryption for everyone.

With 2023 ending, it’s good to reflect on what happened over the last year. I think it is safe to say that throughout EFF’s existence, and particularly over the last few years, EFF members have been one of the driving forces of the organization and the reason we can continue working toward a better digital future.

Many thanks to all of the EFF members who joined forces with us this year! If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Give a Year-End Donation

Unlock bonus grants in 2023!

Christian Romero

Surveillance Self-Defense: 2023 Year in Review

3 months ago

It's been a big year for Surveillance Self-Defense (SSD), our repository of self-help resources for better protecting you and your friends from online spying. We've updated a number of guides and tackled a few new and emerging topics in blog posts.

Fighting for digital security and privacy rights is important, but sometimes we all just need to know what steps we can take to minimize spying, and, when those steps aren't possible, to understand how things work so we can stay safe. To do this, we break SSD into four sections:

  • Basics: A starter resource that includes overviews of how digital surveillance works.
  • Tool Guides: Step-by-step tutorials on using privacy and security tools.
  • Further Learning: Explainers about protecting your digital privacy.
  • Security Scenarios: Playlists of our resources for specific use cases, such as LGBTQ+ youth, journalists, activists, and more.

But not everything fits in SSD, so sometimes we also tackle security education issues on our blog, which tends to focus more on news events or new technology that may not have rolled out widely yet. Each has its place, and each saw a variety of new guidance this year.

Re-tooling Our SSD Tool Guides

Surveillance Self-Defense has provided expert security and privacy guidance for 14 years, and in those years it has seen a number of revisions, expansions, and changes. We consistently audit and update SSD so that it contains up-to-date information. Each guide has a "last reviewed" date at the start, so you can quickly see when it last received an expert review.

This year we tackled a number of updates, and took the time to take a new approach with two of our most popular guides: Signal and WhatsApp. For these, we combined the once-separate Android and iPhone guides into one, making them easier to update (and translate) in the future.

We also updated many other guides this year with new information, screenshots, and advice.

SSD also received two new guides. The first helps you choose a password manager, one of the most important security tools and one that can be overwhelming to research and start using. The second covers using Tor on mobile devices, where the privacy-protecting software is increasingly useful.

Providing New Guidance and Responding to News

Part of security education is explaining new and old technologies, responding to news events, and laying out details of any technological quirks we find. For this, we tend to turn to our blog instead of SSD. But the core idea is the same: provide self-help guidance for navigating various security and privacy concerns.

We came up with guidance for passkeys, a new type of login that eliminates the need for passwords altogether. Passkeys can be confusing, both from a security perspective and from a basic usability perspective. We do think there's work to be done to improve them, and, like most security advice, the answer to whether you should use them is "it depends." But for many people who aren't already using a password manager, passkeys will be a tremendous increase in security.

When it comes to quirks in apps, we took a look at what happens when you delete a replied-to message in encrypted messaging apps. There are all sorts of little oddities with end-to-end encrypted messaging apps that are worth being aware of. While they don't compromise the integrity of the messaging—your communications are safe from the companies that run them—they can sometimes act unexpectedly, like keeping a message you deleted around longer than you may realize if someone in the chat replied to it directly.

The DNA site 23andMe suffered a “credential stuffing” attack that resulted in 6.9 million users' data appearing on hacker forums. Only a relatively small number of accounts were actually compromised, but once in, the attackers were able to scrape information about other users through a feature known as DNA Relatives, which provides users with an expansive family tree. If your data was included, there's nothing you can do about it after the fact, but we explained what happened, along with the handful of steps you can take to better secure your account and make it more private in the future.

Google released its "Privacy Sandbox" feature, which, while improved from the initial proposals of 2019, still tracks your internet use for behavioral advertising: it uses your web browsing to define "topics" of interest, then queues up ads based on those interests. The idea is that instead of dozens of third-party cookies placed on websites by different advertisers and tracking companies, Google will track your interests in the browser itself, controlling even more of the advertising ecosystem than it already does. Our blog shows you how to disable it, if you so choose.
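To make the mechanism concrete, here is a toy model of topics-based targeting. It is a simplification for illustration, not Chrome's actual Privacy Sandbox implementation, and the domain-to-topic taxonomy is invented:

```python
# A toy model of topics-based ad targeting -- a simplification for
# illustration, not Chrome's actual Privacy Sandbox code.
from collections import Counter

# Hypothetical mapping from visited domains to interest "topics"
TOPIC_TAXONOMY = {
    "cycling-news.example": "Fitness",
    "guitar-tabs.example": "Music",
    "mortgage-rates.example": "Personal Finance",
}

def infer_topics(browsing_history: list[str], top_n: int = 3) -> list[str]:
    """The browser derives the user's top interest topics from sites visited."""
    counts = Counter(
        TOPIC_TAXONOMY[domain]
        for domain in browsing_history
        if domain in TOPIC_TAXONOMY
    )
    return [topic for topic, _ in counts.most_common(top_n)]

# Ad code on any site can then ask the browser for these interests,
# rather than reading third-party tracking cookies:
history = ["cycling-news.example", "guitar-tabs.example", "cycling-news.example"]
print(infer_topics(history))  # ['Fitness', 'Music']
```

In this model the tracking data never leaves the browser, but the browser itself is still profiling you, which is why we show readers how to turn it off.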

We also took a deep dive into an Android tablet meant for kids that turned out to be filled with sketchyware. The tablet was riddled with all sorts of software we didn't like, but we shared guidance for how to better secure an Android tablet—all steps worth taking before you hand over any Android tablet as a holiday gift.

After a hard-fought battle pushing Apple to encrypt iCloud backups, the company actually took it a step further, allowing you to encrypt nearly everything in iCloud, including those backups, with a new feature called Advanced Data Protection. Unfortunately, it's not on by default, so you should enable it for yourself as soon as you can.

Similarly, Meta finally rolled out end-to-end encryption for Messenger, which is thankfully enabled by default, though there are some quirks with how backups work that we explain in this blog post.

EFF worked hard in 2023 to explain new consumer security technologies, provide guidance for tools, and help everyone communicate securely. There's plenty more work to be done next year, and we'll be here in 2024 to explain what you can do, how to do it, and how it all works.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Thorin Klosowski