Expanding Broadband in Portland: The Time Is Now

2 months 2 weeks ago

Access to high-speed internet at home is an essential service like water and electricity, not a luxury. Our local and regional governments have a responsibility to provide equitable, accessible, and affordable fast internet to every home and business, just as they do for electricity, water, and waste removal. Portland, Oregon, already has infrastructure that can be used to provide affordable fast internet for all Portlanders: IRNE (Integrated Regional Network Enterprise) Net, a publicly owned dark fiber network used for essential city services. Expanding and opening access to IRNE Net would encourage the growth of new local internet service providers (ISPs), provide a new source of revenue for the City of Portland, and give every Portland home and business affordable access to high-speed internet.

Today, this essential service in Portland is provided by corporations charged with making as much profit as possible, resulting in predatory behavior toward consumers. In the past, the capital investment necessary to build robust broadband service meant that only a few large, under-regulated corporations controlled internet access and service offerings in the Portland metropolitan region. Often that infrastructure was built with public funds through federal grant programs, yet ownership remains with private corporations, which also pocket significant profits.

Municipal Broadband Across the U.S.

In 2018, over 100 communities in the United States were offering some form of municipal high-speed internet service. Since then, there has been a dramatic expansion in this space: over 600 communities now offer municipal broadband in some capacity, roughly a sixfold increase since 2018. Municipal broadband can be both faster and more affordable than internet offered by privately owned ISPs, and it helps bring high-quality internet to places with limited access, such as rural and low-income areas. It also keeps taxpayer money local. And local control of internet provision leads to more accountability and greater competition in areas with only one or two providers, incentivizing those providers to offer better, more affordable services.

Research supports these claims. For example, researchers at Harvard University’s Berkman Klein Center for Internet & Society found that in 23 of 27 communities examined, community-owned networks offered lower pricing than their privately owned counterparts when costs were averaged over four years. Furthermore, according to the Institute for Local Self-Reliance (ILSR), a nonprofit advocacy group, municipal networks account for nine of the ten fastest broadband networks in the country. One of the best-known examples of municipal broadband in the U.S. is in Chattanooga, Tennessee, where, in 2010, the city-owned utility EPB famously became the first provider in the country to offer gigabit internet service throughout its entire service area. Today, EPB is the largest municipally owned fiber-to-the-home (FTTH) network in the country and one of several ISPs nationwide to offer speeds of up to 10 gigabits per second. Instead of bankrupting the city or straining taxpayers, the network is attracting industry and businesses to the area, helping to revitalize a community that was once dependent on pollution-heavy manufacturing. Developers, computer programmers, investors, and entrepreneurs now call the city home.

ISPs’ Political Influence

Private providers like Comcast and CenturyLink have far-reaching control over access to fast internet in Oregon. Those corporations contribute to political campaigns to influence policy, and they provide talking points and information to elected officials about the telecommunications industry. Our elected representatives face myriad problems that need solutions, and the telecom industry is very good at making it easy for policymakers to “learn” from its propaganda.

Local elected officials lack a fundamental understanding of how public broadband infrastructure can drive community and economic development once we remove the barriers to access created by private providers in their pursuit of profits. Expensive internet service provided by a few out-of-state corporations is hurting communities and the regional economy.

Power ultimately comes from the electorate, and Portlanders have the power to change this. The ‘not-so-local’ corporate owners of Portland’s fast-internet providers don’t want people to know that public ownership of broadband infrastructure is both feasible and beneficial to all Portlanders.

Opportunity for Change in Portland, OR

Starting in 2024, Portland will switch from five citywide council seats (the mayor plus four commissioners) to four districts, each represented by three City Council members. This change to Portland’s city charter means all of the newly created council seats are up for election, creating an unprecedented opportunity to influence how fast-internet service is provided in the Portland metropolitan region. Residents need to start now to educate the new City Council representatives on how to change the way city government does business. It is possible to reclaim power from the Big Tech internet providers in Portland and offer truly fast, affordable internet service to every Portland home and business. It is time to build fast-internet infrastructure in the Portland metropolitan region that is treated as a public good, for the public good.

Please email info@municipalbroadbandpdx.org to start or join a broadband action team in your neighborhood today!

Christopher Vines

The U.S. Patent Office Should Drop Proposed Rules That Favor Patent Trolls

2 months 2 weeks ago

More than 14,000 people and organizations, including EFF, have sent public comments responding to a wrongheaded proposal by the U.S. Patent and Trademark Office (USPTO) to change the rules about patent challenges in a way that would favor patent trolls. 

If implemented, the proposed rule changes could seriously damage the system of “inter partes reviews,” or IPRs, that Congress created 10 years ago. The IPR system is far from perfect, but it has been effective in holding patent trolls accountable for some of their outrageous and harmful patent claims. In the 10 years it’s been operating, the Patent Trial and Appeal Board has thrown out thousands of patent claims that never should have been issued in the first place. 

At EFF, we used the IPR process to crowd-fund a challenge to the Personal Audio “podcasting patent” that tried to extract patent royalty payments from U.S. podcasters. We won that proceeding and our victory was confirmed on appeal.

Earlier this month, we asked supporters to speak out and file public comments with the USPTO asking them to withdraw these proposed rules. The response was fantastic. More than 600 supporters filed comments using our suggested language, and countless more chose to explain their own reasons for opposing this proposal. 

This is an unprecedented amount of public input for a proposed federal rule change to the patent system. To those of you who sent comments, thank you! We can only maintain the progress we’ve made toward a fairer patent system with your help.

While it’s impossible for us to read all of the comments, it’s clear that thousands of people—an overwhelming majority of the total—sent comments explaining to the USPTO that effective inter partes reviews are critical to American innovators. Bad patents increase costs to consumers, and harm software developers in particular. 

Lobbyists for patent trolls and large-scale patent licensors talk to Congress all the time. But in this case, the public at large was alerted to the proposal those lobbyists had pushed for, and is opposing it. The USPTO’s proposed rules would impose arbitrary limits on who can challenge patents, and they greatly exceed the office’s authority to tinker with a process that was created by Congress.

As we’ve said for years, when it comes to putting our broken patent system back in balance, the USPTO must—at a minimum—run a robust IPR system, as Congress intended.

The USPTO should work in the public interest, not the interest of patent trolls. People have a right to challenge bad patents. And they have a right to work together to do so, including through membership-based companies like Unified Patents, or non-profits like the Linux Foundation and EFF. 

We hope that the USPTO reads and considers the full array of public comments on this matter, and withdraws these proposed rules.

You can read EFF’s comments on our website. You also can watch a recording of our online event discussing the rule changes, together with experts from Unified Patents and the Linux Foundation.

Joe Mullin

Nurturing the Internet Freedom Movement 🌱

2 months 2 weeks ago

One of my favorite things to do in the summertime is get my hands dirty among sunkissed flowers, vegetables, and greenery in my garden. Just as plants thrive when cared for—turning from a small seed to something with roots, branches, and a canopy of leaves—EFF works to protect and grow your digital freedom. It’s inspiring to create something beautiful and strong together.

We’ve been cultivating a better internet for all users for over 30 years. Together, we till hard ground, plant the ideas, nurture the discussions, and nourish the movement we know today.

But the seeds of digital freedom cannot be sown alone. Will you join us and help shape tech’s future for the users?

Nurture a Better Internet

Join During EFF's Summer Membership Drive

Through July 20, you can become an EFF member for just $20, and receive two limited-edition items! Get a Digital Garden Sticker and a Privacy Pro Magnet representing how we nurture a better internet, powered by your own hands.

Digital Garden Sticker (top) and Privacy Pro Magnet (bottom)

EFF’s engineers, lawyers, and skilled advocates tend the path for technology users and for your rights to privacy, expression, and innovation online. Member support ensures that EFF can continue to weed out attacks on digital freedom with nuanced expertise and sharp determination.

We're Watching the Watchers

When you support EFF at the Copper Level or above, you can choose EFF’s new Watching the Watchers t-shirt or a number of other conversation-starting member perks!

Watch the Watchers with our new member t-shirt!

Member support keeps EFF fighting every day. Help out today when you donate or even start a small automatic monthly gift. Most of EFF’s funding comes from ordinary individuals giving what they can, and everything helps.

Thanks to support from people like you, recently EFF has been able to help keep one of the key laws supporting free expression online—Section 230—intact, make strides passing protections for the right to repair your tech, and continue to push back against various legislative proposals across the U.S. that would increase surveillance and restrict access to information.

Tell a Friend & Plant the Seed

Encourage your peers to support internet freedom! Here’s some language you can share with your friends, family, and more:

Let's make sure our digital world has someone to watch the watchers. Join EFF to plant the seeds for a better internet today ☀️ https://eff.org/summer

Maintaining our digital garden isn’t an easy task, but with help, it’s one we’re prepared to do. Support EFF during our summer membership drive and help us plant the seeds for a better internet.

Give Today!

EFF is a member-supported U.S. 501(c)(3) organization celebrating nine years of top ratings from the nonprofit watchdog Charity Navigator! Donations are tax-deductible as allowed by law.

Christian Romero

Preliminary Injunction Limiting Government Communications with Platforms Tackles Illegal “Jawboning,” But Fails to Provide Guidance on What’s Unconstitutional

2 months 2 weeks ago

A July 4 preliminary injunction issued by a federal judge in Louisiana limiting government contacts with social media platforms deals with government “jawboning”—urging private persons and entities to censor another’s speech—a serious issue deserving serious attention and judicial scrutiny.

The First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech.

But not every communication to an intermediary about users’ speech is unconstitutional. And the distinction between proper and improper government communications is often obscure.

So, while the court order is notable as the first to hold the government accountable for unconstitutional jawboning of social media platforms, and appropriately recognizes the First Amendment right of persons to receive information online free of unlawful government interference, it is not the serious examination of jawboning issues that is sorely needed. The court did not distinguish between unconstitutional and constitutional interactions or provide guideposts for distinguishing between them in the future.

The injunction comes in a lawsuit brought by Louisiana, Missouri, and several individuals alleging federal government agencies and officials illegally pushed the platforms to censor content about COVID safety measures and vaccines, elections, and Hunter Biden’s laptop, among other issues. The court sided with the plaintiffs, issuing a broad injunction that does not clearly track First Amendment standards.

Oddly, the injunction includes exceptions that permit some of the most concerning government interactions and indicates that the court may have been more concerned with the subject matter of the government’s complaints—for instance, posts encouraging vaccine hesitancy—than with drawing a workable line on the government’s conduct.

Government Involvement in Content Moderation Raises Human Rights Issues

Because government involvement in private platforms’ content moderation processes raises serious human rights concerns, we have urged companies to proceed with caution in their editorial decision-making. As we have written:

“When sites cooperate with government agencies, it leaves the platform inherently biased in favor of the government's favored positions. It gives government entities outsized influence to manipulate content moderation systems for their own political goals—to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such systems are established, it is easy for government—and particularly law enforcement—to use the systems to coerce and pressure platforms to moderate speech they may not otherwise have chosen to moderate.”

EFF was also one of the co-authors and original endorsers of the second version of the Santa Clara Principles, which specifically scrutinizes “State Involvement in Content Moderation,” and affirms that “state actors must not exploit or manipulate companies’ content moderation systems to censor dissenters, political opponents, social movements, or any person.” The Santa Clara Principles recognize that government involvement in private companies’ content moderation processes raises human rights concerns not raised by the companies’ consultations with other experts.

“Companies should recognize the particular risks to users’ rights that result from state involvement in content moderation processes. This includes a state’s involvement in the development and enforcement of the company’s rules and policies, either to comply with local law or serve other state interests. Special concerns are raised by demands and requests from state actors (including government bodies, regulatory authorities, law enforcement agencies and courts) for the removal of content or the suspension of accounts.”

Bar Should Be Low When Government Is Accused of Jawboning

Recognizing the gravity of the issue, we have written about jawboning and filed several amicus briefs in cases that raise the issue. In those briefs, we have focused primarily on the question of when private platforms may be liable when they respond to government jawboning. On that issue we have set a fairly high bar—private entities shall not be considered state actors unless “first, the government replaces the intermediary’s editorial policy with its own, second, the intermediary willingly cedes its editorial implementation of that policy to the government regarding the specific user speech, and third, the censored party has no possible remedy against the government.”

But we have set a fairly low bar for when the government itself should be liable for trying to coerce private entities to censor speech:

“When the government exhorts private publishers to censor, the censored party’s first and favored recourse is against the government. And the narrow path to holding private publishers liable as state actors proposed above in no way limits a plaintiff’s ability to hold governments liable for their role in pressuring social media companies to censor user speech. . . . In First Amendment cases, there is a lower threshold for suits against government agencies and officials that coerce private censorship: the government may violate speakers’ First Amendment rights with “system[s] of informal censorship” aimed at speech intermediaries. Bantam Books v. Sullivan, 372 U.S. 58, 61, 71 (1963).”

We also filed a FOIA lawsuit designed to uncover the US government’s involvement in the widespread removal of programs featuring a Palestinian activist from Zoom, YouTube, Facebook, and Eventbrite. We joined with other organizations to urge the administration to drop its planned Disinformation Governance Board. And we sharply criticized the “trusted flagger” provisions of the EU’s Digital Services Act under which a state’s Digital Services Coordinator can designate law enforcement agencies to be among those whose “flags” to hosting services of illegal content must be given priority. We also filed comments with Meta’s Oversight Board protesting Facebook acting upon law enforcement flags and removing drill music videos.

Not All Communications Between Platforms and Government Are Improper

We have also acknowledged that not every communication, interaction, or cooperative effort between a social media company and the government is unwise. As we have written in our amicus briefs:

“...content moderation is a difficult and often fraught process that even the largest and best resourced social media companies struggle with, often to the frustration of users. To even hope for fairness and consistency in their decisions, social media companies need to have breathing room to draw on outside resources. Indeed, the First Amendment protects this information gathering part of their editorial process. . . . [In addition to seeking input from users and NGOs] Platforms also seek input from governments. Although concerning, this is appropriate where the government is uniquely situated to verify information—such as the location of polling places, a list of street closures, or a synopsis of the CDC’s current COVID policies.”

Nor is every government communication to an intermediary about its users’ speech unconstitutional. The First Amendment bars the government from coercing censorship or providing “such significant encouragement” that the ultimate choice to censor must be considered that of the state, not the intermediary. Encouragement falling short of that extreme does not violate the First Amendment. Nor are all exhortations to intermediaries improper. Mere approval of or acquiescence with the intermediary’s decision is not a constitutional violation.

The Supreme Court has held that the government need not “renounce all informal contacts with persons” and may advise them, for example, how to comply with the law. The government should be able to criticize the content moderation practices and policies of social media companies without violating the First Amendment, as long as it is not expressly or implicitly threatening them with a penalty for failing to do the government’s bidding.

Unfortunately, the order does not make an adequate effort to distinguish between proper and improper communications by the government.

While these distinctions may be difficult, the district court did not seriously engage with them. The court’s ruling looks at the government’s actions broadly, and then deems all the various agencies’ and individuals’ actions improper encouragement. While some of the instances, such as those involving the president’s former Director of Digital Strategy Rob Flaherty, appear from the court’s findings to be coercion, others do not. For example, it is not clear what the Census Bureau or the Centers for Disease Control did to cross the First Amendment line. The court’s finding of improper coordination with several private misinformation remediation projects also seems thin.

The court’s injunction likewise applies to whole government agencies and perhaps thousands of federal government employees. It is not limited to the specific examples of interactions discussed. And it prohibits not only coercion and forceful encouragement, but all urging and encouraging.

Unnecessary Exemptions

The preliminary injunction also specifically allows the Biden administration to “notify and contact” social media platforms about numerous topics. These exceptions were unnecessary—the First Amendment doesn’t bar the government from contacting or notifying anyone about anything as long as there is no coercion or forceful encouragement. But the topics the court lists reveal a lot about its own value judgments about which subjects the government has a legitimate interest in addressing—correcting public health misinformation is noticeably excluded, while law enforcement flagging is included.

Some of these exceptions would seem to cover the very matters complained of in the complaint. The injunction does not apply to contacting or notifying social media companies about postings involving criminal activity or criminal conspiracies; national security threats, extortion, or other threats posted on their platforms; or criminal efforts to suppress voting. Nor does it apply to contacting or notifying platforms about illegal campaign contributions, cyber-attacks against election infrastructure, foreign attempts to influence elections, threats to the public safety or security of the U.S., or postings intended to mislead voters about voting requirements and procedures.

The injunction does not block the government from exercising “permissible public government speech” promoting government policies or views on matters of public concern. And it does not bar communicating with social media companies to detect, prevent, or mitigate malicious cyber activity, or communicating with them about deleting, removing, suppressing, or reducing posts that are not speech protected by the Free Speech Clause of the First Amendment.

It seems clear the court recognizes that it is appropriate for the government in many circumstances to “inform” or “notify” social media platforms about what it considers to be problematic social media content. But in the opinion it sharply criticizes many of the systems the companies and the government have built for such exchanges of information. And it offers little guidance as to when “notifying” and “contacting” rise to the level of coercion or improper encouragement.

It also bears noting that the type of law enforcement involvement in content moderation allowed by the court’s order raises some of the most serious human rights concerns. This is why we have strongly criticized granting “trusted flagger” status to law enforcement agencies.

Lastly, in an unfortunate moment that has caused many to question the seriousness of the court’s endeavor, the court characterizes the complaint as describing “arguably the most massive attack against free speech in United States history.”

One could argue about what actually is the most massive assault on freedom of speech in our nation’s history. But without denigrating the seriousness of the allegations in this complaint, my vote is on the 42-year reign of Anthony Comstock as a special agent of the U.S. Post Office, where he zealously sought to enforce the morality law he pushed Congress to pass, the effects of which we are living with to this day, more than a century later.

David Greene

DSA Must Follow a Human-Rights Centered Enforcement Process, With Regulators Engaging International Civil Society Voices

2 months 2 weeks ago

EFF and its partners in the Digital Services Act (DSA) Human Rights Alliance today called on European Union (EU) regulators to engage international civil society voices and forge a human rights-centered approach in talks about the implementation and enforcement of the DSA, which sets out new responsibilities and rules for how platforms handle and make decisions about billions of users’ posts.

In a letter addressing the EU Commission, national DSA coordinators, and internet companies, the DSA Human Rights Alliance reminded the parties that the co-regulatory model of the DSA ensures that civil society organizations and digital rights defenders worldwide have a voice in EU state talks, allowing them to represent and advocate for the interests of users, including vulnerable groups who are frequently impacted by badly designed legislation affecting privacy, free expression, and other rights.

“For the DSA to constitute a positive framework aimed at protecting digital rights also beyond the EU, there must be human rights-centered implementation and enforcement of the text over the next few years, accompanied with proactive and meaningful engagement of international civil society voices,” the letter says. “The DSA HR Alliance has a critical role to play in this process.”

The Alliance, formed in 2022, works to ensure that the DSA embraces a human rights-centered approach to platform governance and that EU lawmakers consider the global impacts of European legislation. The DSA, which came into force last year, incorporated many Alliance recommendations concerning governance and platform accountability. But the DSA still contains problematic provisions that can have negative consequences for vulnerable and historically oppressed groups. It gives government agencies and other parties with partisan interests broad power to flag and remove potentially illegal content. It is still not clear how very large online platforms will mitigate risks in practice, and the role of civil society groups, researchers, and other stakeholders in the due diligence process has never been formalized.

So, now is an important time for regulators to reengage with Alliance members, as EU countries set up authorities to enforce DSA rules and monitor the internet ecosystem for compliance with the Act. How and why internet companies remove users’ posts, combat hate speech and disinformation, and allow users more control over their internet experience are central aspects of the DSA. A wrong turn in shaping enforcement policies could invite shadow negotiations benefitting platforms and exclude voices advocating protections for fundamental rights, ultimately deepening the already vast imbalance of power between platforms and users, particularly vulnerable communities.

It’s imperative that regulators and internet companies recognize their responsibility to create strong DSA enforcement without compromising human rights protections, free speech and expression rights, and users’ privacy and security. We are witnessing the spread of platform regulatory bills in regions outside the EU, many of which are inspired by or directly copy the principles of the DSA. And, as part of its work, the Alliance has begun assessing the undeniable impact that the DSA has throughout the Global Majority.

“The DSA HR Alliance calls on EU regulators to establish transparent international regulatory dialogues, and for an inclusive implementation and enforcement approach that includes meaningful and formalized stakeholder engagement,” the letter says.

“We urge them to value the insights that non-EU organizations can bring to the implementation process of the DSA. This is the case especially for grassroots organizations operating in the Global Majority and civil rights groups fighting for the protections of historically oppressed and vulnerable groups. These groups frequently find themselves on the receiving end of badly designed legislation and can contribute substantially to minimizing the damage throughout the platforms’ value chains.”

For the letter: https://www.eff.org/document/dsa-hr-alliance-letter-july6
For more on the DSA Human Rights Alliance: https://www.eff.org/pages/dsa-human-rights-allianc

Karen Gullo

Raise a Glass: EFF's 15th Annual Cyberlaw Trivia Winners!

2 months 2 weeks ago

What do you get when you gather a bunch of the sharpest legal minds in one room with delicious food and obscure tech law trivia? That's right, you get EFF's 15th Annual Cyberlaw Trivia night!

On June 29th we had a full house, with eight teams from technology law firms and internet companies throughout the Bay Area put to the test across six rounds of trivia, ready to battle it out for champion steins and, of course, bragging rights.

The prizes: EFF steins!

After welcoming everyone to the event, EFF's Hannah Diaz began the evening's activities by introducing our snazzy Quiz Master Kurt Opsahl, and our judges Cindy Cohn, David Greene, and Jennifer Lynch. The judges donned their very authentic robes—with Cindy even wearing a wig(!)—and the competition was on!

Kurt started the event by reminding everyone that, "By participating in this contest you acknowledge and agree that Section 230 does not require an ISP to be a neutral platform." After that requirement—that got a good laugh out of everyone—each team's trivia muscles warmed up with "General Questions" for round one.

Round two of trivia, titled "Intellectual Property," included a typo on a slide that the detail-oriented lawyers were very quick to point out. At the end of this round the score was very close, with the leading team "MofoGPT" ahead by only one point! The game would remain close between the top teams, even with an upset in round six, where the judges awarded a full point only to teams that wrote down "King Charles III" and half a point to those that wrote only "King Charles."

The evening wrapped with two tiebreaker questions. The first required teams to look at a "logical map of the entire internet" and guess the month and year it was made. Attendees had to get up from their seats and put on their glasses to answer correctly! The second question drew on more recent news, asking how many U.S. users TikTok's CEO said the company had at the time of his 2023 testimony. Teams wouldn't know how they did on the tiebreakers until the very end.

Although EFF's legal interns were not eligible for prizes, they also joined the fun and made a great attempt to be a top team by the end of trivia!

But by the end of round six and the tiebreakers, the scores had been tabulated, and the winning trivia masterminds were:

MofoGPT takes 1st place!

Cage Match takes 2nd place!

Elon's Mom Let Us Compete takes 3rd place!

EFF hosts Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for tech users. Among the many firms that continue to dedicate their time, talent, and resources to the cause, we would especially like to thank Morrison Foerster and Wilson Sonsini for sponsoring this event! Thank you to No Starch Press for their support of EFF as well.

If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist. Lawyers who are interested can go here to join the Cooperating Attorneys list.

Are you interested in attending or sponsoring an upcoming EFF Trivia Night? Please reach out to mei@eff.org for more information.

Christian Romero

EFF Urges Supreme Court to Make Clear That Government Officials Have First Amendment Obligations When They Use Their Social Media Accounts for Governmental Purposes

2 months 3 weeks ago
Officials Using Nominally Personal or Pre-existing Campaign Accounts Can’t Sidestep the First Amendment and Block People

Washington, D.C. — Electronic Frontier Foundation urged the Supreme Court today to send a loud and clear message to government officials around the country who use social media in furtherance of their official duties, but then block people who criticize them: Doing so violates our First Amendment right to receive and respond to government communications.

EFF, Knight First Amendment Institute at Columbia University, and Woodhull Freedom Foundation asked the court in a brief filed today to protect the First Amendment rights of people to access and comment on the communications that elected officials post on social media to advance their official duties.

The use of social media by government officials and agencies is routine, and courts are grappling with the question of when that use is subject to First Amendment limitations and when it is not, including whether they can block people whose views they don’t like. In today’s Supreme Court brief, EFF and its partners argued that the Justices should establish that, in determining whether an official’s use of social media is state action subject to the First Amendment, courts must employ a functional test that looks to how an account is actually used.

If the use does qualify as state action, the brief argues, then courts must apply the well-established ban on viewpoint discrimination in public and nonpublic forums, meaning that the officials cannot block views just because they disagree with them.

“Social media has become an essential part of modern civic engagement,” said EFF Civil Liberties Director David Greene. “Public officials and agencies use social media for a wide variety of governmental functions, including providing the public with critical public safety information. Our First Amendment rights to get this information and to interact with our public officials shouldn’t be so easily negated by our officials using preexisting ‘personal’ accounts rather than accounts specific to the public office.”

“We are asking the Court to find that the ultimate test is how an account is used. If officials choose to mix government and nongovernment content on their account, they must accept the First Amendment obligations that go with using their account for governmental purposes,” said EFF Senior Staff Attorney Sophia Cope.

“Woodhull is proud to join EFF in presenting these important arguments to the Court, as viewpoint discrimination by government officials often impacts those expressing non-conforming positions on matters involving sexual freedom,” Woodhull Freedom Foundation President Ricci Levy said.

The court is reviewing two cases. In Lindke v. Freed, a city manager used his Facebook page to communicate about his administrative directives and posted pictures of his family, dog, and home improvement projects. He deleted comments by, and blocked, a local resident who posted comments critical of the city’s response to the COVID-19 pandemic.

The resident sued, alleging violations of his First Amendment rights. A federal district court ruled against him, a decision that was upheld by the 6th Circuit, which said that no law required the manager to operate a Facebook page and no government employees maintained it.

In the second case, O’Connor-Ratcliff v. Garnier, two school district trustees continued to use the same Facebook and Twitter accounts they created to promote their campaigns after they were elected. They used the accounts to solicit public input about school board decisions and to communicate with parents about school safety.

They blocked parents who posted comments on their pages critical of the school board. The parents sued for First Amendment violations. The trustees argued that blocking people on their social media accounts didn’t violate the First Amendment because the accounts were “personal” accounts that shouldn’t be constrained by free speech rules imposed on the government. A district court and the 9th Circuit ruled against them.

The Supreme Court should clear up once and for all the question of whether elected officials using social media in furtherance of their official duties can sidestep their First Amendment obligations because they’re using a nominally “personal” or preexisting campaign account. The answer is no.  

Contact: Sophia Cope, Senior Staff Attorney, sophia@eff.org; David Greene, Civil Liberties Director, davidg@eff.org
Karen Gullo

Digital Rights Updates with EFFector 35.8

2 months 3 weeks ago

There's a lot happening in the digital rights movement, but don't worry, we've got you covered! Catch up on the latest news with our EFFector newsletter, featuring updates, upcoming events, and more. Our latest issue features updates from Reddit's moderator strike and mass exodus of users, a recap of the work EFF has been doing a year after the Supreme Court's Dobbs decision, and more.

Learn more about the latest happenings by reading the full newsletter here, or you can listen to the audio version below!

Listen on YouTube

EFFector 35.8 | This Pride, Support LGBTQ+ Rights Both Online and Offline

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

The United States vs. Hansen Decision Is Not “Encouraging” for Speech Rights

2 months 3 weeks ago

The U.S. Supreme Court's recent ruling in United States v. Hansen upholds a law that makes it a crime to “encourage” a person to remain in the country without authorization. The Court had two choices in this case: instruct Congress and all legislatures to use the words they actually mean (and normal people use) when they write laws, or force the public to accept—and try to comply with—criminal laws written in specialized language unknowable to most non-lawyers. Regrettably, the Court chose the latter.

The main question in Hansen was whether a federal immigration law criminalizing speech that merely “encourages or induces” an undocumented immigrant to unlawfully reside in the United States unconstitutionally violates the First Amendment’s guarantee of free speech.

Earlier this year, EFF (along with Immigrants Rising, Defending Rights & Dissent, and Woodhull Freedom Foundation) filed a friend of the court brief in support of the defendant, Helaman Hansen. There, we urged the court to uphold the Ninth Circuit’s ruling, which found that the language in the law's section about encouragement is unconstitutionally overbroad because it threatens an enormous amount of protected online speech. This includes prohibiting, for example, “encouraging an undocumented immigrant to take shelter during a natural disaster, advising an undocumented immigrant about available social services,” or even providing noncitizens with Know Your Rights resources or certain other forms of legal advice. Our brief also asserted that the provision’s ambiguity further chills online speech because platforms, faced with the difficult task of drawing lines between protected speech and unlawful “encouragement,” are likely to simply remove the content rather than make a hard decision.

To the Court, “encourage” does not mean “encourage” in the same way that every non-lawyer who must comply with the law thinks it does

The Good: Narrowed Application of the Federal Criminal Law

In a 7-2 opinion authored by Justice Barrett, the Court unfortunately upheld the encouragement provision and ruled that the government can only criminalize speech where the defendant purposefully solicits or aids and abets specific unlawful acts.

In doing so, the Court significantly narrowed the extent to which this law criminalizes expressive speech. It imported not only the specialized legal meaning of the “encourages or induces” clause, but also the intent requirement traditionally associated with soliciting and facilitating, or aiding and abetting, criminal activity. Now, the law’s encouragement provision applies only to the intentional solicitation or facilitation of immigration law violations. That means it’s no longer illegal for a lawyer to inform noncitizens of their rights, for a family member to express a desire for their noncitizen relative to stay in the United States, or for an immigrant advocacy organization to direct undocumented people to online educational resources. But, while it’s great that this result narrows successful prosecutions under this particular law, it is still concerning that the intent standard is nowhere in the language of the law.  

The Bad: Broad Impact on First Amendment Doctrine

Regrettably, this decision also comes with disappointing implications for how the Supreme Court will treat future First Amendment challenges to statutes that similarly threaten to chill broad swaths of protected speech. As Justices Jackson and Sotomayor correctly note in their dissent, by “depart[ing] from ordinary principles of statutory interpretation,” the Court “avoids having to invalidate this statute under our well-established First Amendment overbreadth doctrine” and thus “subverts the speech-protective goals” of that constitutional doctrine. 

By rejecting the ordinary meanings of “encourage” and “induce,” the majority said that when those words are used in criminal laws, they bring with them the accumulated legal tradition and the “cluster of ideas” attached to each word. Rather than telling legislatures to use the word “solicit” when they want to criminalize solicitation, the Court goes out of its way—even resorting to digging through erstwhile legislative history—to import not only a technical legal meaning but also the intent requirement traditionally associated with solicitation into the clause “encourages or induces.” 

In effect, the Supreme Court upheld the law making it a federal crime to “encourage” illegal immigration because, to the Court, “encourage” does not mean “encourage” in the same way that every non-lawyer who must comply with the law thinks it does. By adopting specialized legal jargon over plain language definitions, the Court signals that it is not worried about this broad-reaching law’s chilling effect on speech. But for you? Good luck figuring out whether you are committing a crime without consulting a lawyer!

Related Cases: United States v. Helaman Hansen
Molly Buckley

Data Sanctuary for Trans People

2 months 3 weeks ago

A growing number of states have prohibited transgender youths from obtaining gender-affirming health care. Some of these states are also restricting access by transgender adults. Fortunately, other states have responded by enacting sanctuary laws to protect trans people who visit to obtain this health care.

Even before these new laws, most trans people had to travel out of state to obtain gender-affirming surgery. After these laws, we can expect much more out-of-state travel for all forms of gender-affirming health care.

To be the most welcoming health care sanctuary, a pro-trans state must also be a data sanctuary.

Anti-trans states are investigating people who provide gender-affirming care or help others receive it. For example, the Texas Attorney General is investigating a hospital for providing gender-affirming health care to transgender youths. Likewise, the Texas Governor last year ordered child welfare officials to launch child abuse investigations against parents whose trans children received such care. We can expect anti-trans investigators to use the tactics of anti-abortion investigators, including seizure of internet browsing and private messaging records.

So it is great news that California Gov. Newsom last year signed S.B. 107, a trans health data sanctuary bill authored by Sen. Scott Wiener. EFF supported this bill. In important ways, S.B. 107 limits how California entities disclose personal data to out-of-state entities that would use it to investigate and punish trans health care.

First, the new law bars California’s state and local government agencies, and their employees, from providing information to any individual or out-of-state agency regarding provision of gender-affirming health care.

Second, the new law bars California’s health care providers from disclosing medical information about a person who allows a youth to receive gender-affirming care, in response to an out-of-state civil or criminal action against allowing such care.

Third, the new law bars California’s superior court clerks from issuing subpoenas to disclose information, based on out-of-state laws against a person allowing a youth to receive such care.

Three cheers for California! Other pro-transgender states should enact similar data sanctuary laws.

More work remains in the Golden State. Anti-trans officials will continue to seek information located in California, and policymakers must enact new laws as needed. For example, California may need new limits on disclosure of data held by California-based communications and computing services. Eternal vigilance is the price of data sanctuary.

Data sanctuary must extend beyond trans people visiting a pro-trans state for gender-affirming health care. For example, it must also include abortion seekers visiting an abortion sanctuary, and immigrants living in an immigrant sanctuary.

Data sanctuary is strongest if there is less data to protect. That’s one more reason why Congress and the states must enact comprehensive consumer data privacy legislation that limits how businesses collect, retain, use, and share our data. A great way to stop anti-trans officials from seizing data from businesses is to stop these businesses from collecting and retaining this data in the first place. Legislators should start with Rep. Jacobs’ My Body, My Data bill.

This article is part of our EFF Pride series. Read other articles highlighting this year's work at the intersection of digital rights and LGBTQ+ rights on our issue page.

Adam Schwartz

Around the World, Threats to LGBTQ+ Speech Deepen

2 months 3 weeks ago

Globally, an increase in anti-LGBTQ+ intolerance is impacting individuals and communities both online and off. The digital rights community has observed an uptick in censorship of LGBTQ+ websites as well as troubling attempts by several countries to pass explicitly anti-LGBTQ+ bills restricting freedom of expression and privacy—bills that also fuel offline intolerance against LGBTQI+ people, and force LGBTQI+ individuals to self-censor their online expression to avoid being profiled, harassed, doxxed, or criminally prosecuted. 

LGBTQ+ researchers and advocates have also noted an increase in threats of violence and hate speech targeted at LGBTQ+ individuals and communities, ever more often with the intention of stifling trans rights and canceling drag events. These orchestrated online campaigns—often fueled by the far right—have proliferated in connection with a surge of bills attacking LGBTQ+ rights. In the U.S., a report from the Center for Countering Digital Hate (CCDH) and Human Rights Campaign tracked a 406% increase in tweets connecting LGBTQ+ communities to “grooming” in the month after the “Don’t Say Gay” bill passed in March 2022. Moreover, earlier this year, ILGA Europe reported online hate speech as a serious issue in Armenia, Austria, Latvia, Montenegro, and Romania.

The use of slurs like “groomer,” “pedophile,” and “predator” has permeated from fringe discourses into the mainstream, as well as from the online to the offline environment—threatening LGBTQ+ rights from all vectors, affecting the quality of life of LGBTQ+ individuals, and leading to physical violence.

The following post highlights just six countries where limitations on LGBTQ+ expression are on the rise. This Pride—and all year round—we urge you to join us in taking a stand to support the freedom of LGBTQ+ individuals and communities everywhere. 

(This post focuses on non-US content—visit our issue page for our other posts covering LGBTQ+ rights in the U.S.)

Russia
Following the outbreak of Russia's war against Ukraine, many LGBTQ+ people were forced to flee Russia. At the same time, Russian President Putin signed into law the country's new propaganda law in December 2022, which prohibits providing both positive and neutral information about LGBTQ+ people to minors and adults alike, and bans “gender reassignment” and the “promotion of paedophilia”. That same month, online streaming services in Russia censored scenes in TV shows like Gossip Girl and The White Lotus, and Russia’s media regulator was granted new powers to ban all websites that feature “LGBT propaganda”.

Moreover, a Moscow court fined Meta four million rubles ($47,590) for refusing to take down content that was considered to be “propagating the LGBT+ community”. A different court in Moscow also fined TikTok two million rubles ($23,599) for not removing content that was “propagating homosexual relations”. Earlier in 2022, Meta was designated an “extremist organization,” meaning that users of Meta products like Facebook could be considered members of an extremist organization and thus imprisoned for up to six years.

Human rights organizations have also noted an increase in hate crimes against LGBTQ+ people in Russia, including murder, physical violence and assault, and extortion.

Indonesia
Indonesia has long restricted freedom of expression. The Southeast Asian country, with a population of 273 million, has blocked access to certain websites, including those the government deems blasphemous, and has laws imposing criminal or civil liability for certain online activities.

In recent years, despite homosexuality not being criminalized, the Indonesian government and some of the country’s ISPs have cracked down on LGBTQ+ expression in particular. A 2021 report from the Open Observatory for Network Interference (OONI) and Outright International named Indonesia as one of six countries restricting LGBTQ+ content. The report noted that most censorship was conducted using DNS hijacking and that it was not consistent across internet service providers, suggesting that some of it could be extralegal.

Certainly some of the country’s censorship is government-ordered, however; in one instance from 2019, Instagram removed a comic depicting the struggles of gay Muslims at the behest of Indonesian authorities. In another instance, a U.S. citizen was deported after stating on Twitter that Bali was “LGBT friendly.”

United Arab Emirates

The United Arab Emirates enjoys a positive reputation throughout much of the world thanks to a business-friendly environment and significant investment in tourism, leading some to believe that the country is somehow liberal. But behind the façade lies severe violations of human rights: Dissent by citizens and non-citizens is not tolerated, with the government surveilling and imprisoning some of its critics.

Although there is some tolerance of LGBTQ+ expression, the UAE has a highly controlled online environment and, according to OONI and Outright’s 2021 report, its ISPs block a significant number of foreign LGBTQ+ websites, limiting the amount of information available to locals. Furthermore, the country’s 2012 Cybercrime Law—amended in 2018 to restrict the use of VPNs—imposes substantial penalties for criticizing the government or its institutions.

Uganda
In May 2023, Ugandan President Yoweri Museveni signed into law an extremely harsh anti-LGBTQ+ bill. The Anti-Homosexuality Bill 2023—now the Anti-Homosexuality Act 2023—doesn’t criminalize identifying as LGBTQ+, but introduces a 20-year sentence for “promoting” homosexuality, and a 10-year sentence for “aggravated homosexuality,” which includes sexual relations involving people infected with HIV. Now that the Legislature has approved it, the law severely restricts and impedes the rights of lesbian, gay, bisexual, transgender and queer citizens in Uganda. EFF calls on the authorities in Uganda to repeal this legislation and uphold human rights for all.

For many countries across Africa, and indeed the world, the codification of anti-LGBTQ+ discourses and beliefs can be traced back to colonial rule. Since then these laws have been used and implemented by authorities to imprison, harass, and intimidate LGBTQI+ individuals. 

The Anti-Homosexuality Act is not only an assault on the rights of LGBTQ+ people to exist, but it also represents a grave threat to freedom of expression. And, of course, other countries may see Uganda’s bill as a blueprint for oppressing the LGBTQ+ community, empowered by the silence of the international community. 

Ghana
Like in Uganda, Ghanaian law already criminalizes same-sex sexual activity. But Ghana's ‘Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill, 2021’ goes much further by threatening up to five years in jail to anyone who publicly identifies as LGBTQ+ or “any sexual or gender identity that is contrary to the binary categories of male and female.” The sentence increases if the offending person expresses their gender beyond or identifies outside of the so-called "binary gender."

The bill also criminalizes identifying as an LGBTQ+ ally. It imposes a blanket prohibition on advocating for LGBTQ+ rights, explicitly assigns criminal penalties for speech posted online, and threatens online platforms—specifically naming Twitter and Meta products Facebook and Instagram—with criminal penalties if they do not restrict pro-LGBTQ+ content.

If the bill passes, Ghanaian authorities could probe the social media accounts of anyone applying for a visa for pro-LGBTQ+ speech, or create lists of pro-LGBTQ+ supporters to be arrested upon entry. They could also require the platforms to suppress content about LGBTQ+ issues, regardless of where it was created. 

And as platforms that purport to support freedom of expression and the safety of their users, and that have declared themselves allies of the LGBTQ+ community, Meta and Twitter must not remain silent. At the very least, the global LGBTQ+ and ally community has a right to know if the posts they make today could one day be in the hands of government agents who will use them to imprison them.

Kenya
Taking inspiration from the bills in Uganda and Ghana, a new proposed law in Kenya—the Family Protection Bill 2023—prohibits homosexuality with imprisonment for a minimum 10 years and mandates life imprisonment for convictions of “aggravated homosexuality”. The bill also allows for the expulsion of refugees and asylum seekers who breach the law, irrespective of whether the conduct is connected with asylum requests. 

Kenya became a primary destination for LGBTQ+ individuals seeking refuge after Uganda sought to introduce the Anti-Homosexuality Bill in 2014. Kenya is the sole country in East Africa to accept LGBTQ+ individuals seeking refuge and asylum without questioning their sexual orientation. 

If this bill passes, Kenya would limit the rights to privacy, free assembly and association, and free expression and information, both offline and online. 

EFF calls on the authorities in Kenya and Ghana to kill their respective repulsive bills, and to ensure that all LGBTQ+ people are free to live without fear of persecution, prosecution, or violence just for existing. 

For more information on how to fight back against these measures in Uganda, Ghana, and Kenya, follow Access Now’s campaign.

This article is part of our EFF Pride series. Read other articles highlighting this year's work at the intersection of digital rights and LGBTQ+ rights on our issue page.

Paige Collings

VICTORY! Maine Increases Transparency and Accountability for its Fusion Center

2 months 4 weeks ago

In a major step for reining in the unaccountable power of fusion centers, the Maine state House and Senate have passed HP 947, An Act to Increase the Transparency and Accountability of the Maine Information and Analysis Center. The bill creates an Auditor position within the Office of the Attorney General whose job it is to conduct regular reviews of the Maine Information and Analysis Center’s (MIAC) activities, to keep records, and to share information with the public. The bill also makes any information MIAC shares with private entities a public record and therefore accessible to the public.

This bill comes after a years-long concerted effort by Maine activists and concerned citizens who have been fighting for accountability in how MIAC collects, shares, and utilizes information about Mainers. In June 2021, a bill that would have defunded the fusion center entirely passed 88-54 out of the Maine House of Representatives before being defeated in the state senate.

Fusion centers are yet another unnecessary cog in the surveillance state—and one that serves the intrusive function of coordinating surveillance activities and sharing information between federal law enforcement, the national security surveillance apparatus, and local and state police, with little to no oversight. Across the United States, there are at least 78 fusion centers that were formed by the Department of Homeland Security in the wake of the War on Terror and the rise of post-9/11 mass surveillance. Since their creation, fusion centers have been hammered by politicians, academics, and civil society groups for their ineffectiveness, dysfunction, mission creep, and unregulated tendency to veer into policing political views. As scholar Brendan McQuade wrote in his book Pacifying the Homeland: Intelligence Fusion and Mass Supervision:

“On paper, fusion centers have the potential to organize dramatic surveillance powers. In practice however, what happens at fusion centers is circumscribed by the politics of law enforcement. The tremendous resources being invested in counterterrorism and the formation of interagency intelligence centers are complicated by organization complexity and jurisdictional rivalries. The result is not a revolutionary shift in policing but the creation of uneven, conflictive, and often dysfunctional intelligence-sharing systems.”

An explosive 2023 report from Rutgers University’s Center for Security, Race and Rights also provides more evidence of why these centers are invasive, secretive, and dangerous. In the report, researchers documented how New Jersey’s fusion center leveraged national security powers to spy almost exclusively on Muslim, Arab, and Black communities and push an already racially biased criminal justice system into overdrive through aggressive enforcement of misdemeanor and quality of life offenses.

After a series of leaks that revealed communications from within police departments, fusion centers, and law enforcement agencies across the country, MIAC came under particular scrutiny for sharing dubious intelligence generated by far-right social media accounts with local law enforcement. Specifically, MIAC helped perpetuate disinformation that stacks of bricks and stones had been strategically placed throughout a Black Lives Matter protest as part of a larger plan for destruction, and caused police to plan and act accordingly. This was, to put it plainly, a government intelligence agency spreading fake news that could have deliberately injured people exercising their First Amendment rights. This controversy unfolded shortly after a whistleblower lawsuit from a state trooper that alleged the fusion center routinely violated civil rights.

When it comes to fighting these dangerous relics of the War on Terror, activists in Maine are leading the way for the rest of the country. EFF will continue to support organizations and local groups willing to take on fusion centers in their legislatures. Congratulations to the hard-working activists and concerned residents in Maine.

Matthew Guariglia

Civil Society Calls on Tech Firms to Oppose Protest Song Ban

3 months ago

EFF and more than 24 civil society organizations have written to tech companies including Apple, Google, Meta, Twitter, and Spotify urging them to oppose the Hong Kong government’s application for an injunction to ban broadcasting and distribution of the 2019 protest song, “Glory to Hong Kong.” 

The injunction, if ordered by the court, would ban intermediaries from broadcasting, performing, selling, or distributing the song and its lyrics. It would also require companies to remove the song from their platforms.

This would have a disastrous impact on the rights to freedom of expression and access to information not only in Hong Kong, but globally, and would exacerbate concerns around the tendency of Hong Kong authorities to apply abusive laws for actions committed outside Hong Kong’s territory.

In December 2022, Google refused a request from authorities in Hong Kong to replace “Glory to Hong Kong” with Hong Kong’s national anthem as the top search item. More broadly, during 2022 the Hong Kong government asked Google to remove 330 items, and Google complied with 30 percent of those requests. Similarly, between July 2020 and June 2022, Meta reported removing content in 50 instances after pressure from the Hong Kong government.

The letter continues:

We urge you to [...] oppose the Hong Kong government’s petition for an injunction by visiting Wan Chai Police Station in Hong Kong on or before June 21 to accept service, and then file an opposition within seven days. It is critical that internet intermediaries take a collective stance against Hong Kong’s censorship. 

Paige Collings

Californians: Tell the Governor and Legislature to Keep Their Promise on Broadband Funding

3 months ago

We need your help telling Governor Newsom and the California Legislature to keep their promise on broadband infrastructure funding—giving it full funding without any cuts or delays. California’s broadband infrastructure fund, created by S.B. 156 in 2021, establishes several critical programs to finally deliver 21st-century broadband access to every Californian. These programs would address the systemic inequalities afflicting our cities and counties, which the pandemic exacerbated and painfully revealed.


Tell Gov. Newsom and the Legislature

to Keep Their Broadband Promises

S.B. 156 and California’s infrastructure law lay the groundwork for local communities to be empowered to solve their own problems. By making $6 billion available over several years and through several different pathways, the state set down a path for businesses, local governments, co-ops, Tribes, and nonprofits to work together to bring 21st-century access to every California resident lacking broadband speeds of at least 25/3 Mbps. The budget was specifically established using detailed cost-model data collected by the state government to project the total needed, accounting for the high one-time sunk cost of infrastructure. And it is on the promise of this budget that these local efforts have planned their investments and strategies to bring service to their respective communities. Any delay or cut to these funds would derail years of planning to connect these Californians.

When these state funds are combined with federal Broadband Equity, Access, and Deployment (BEAD) funds, we will truly be able to connect every Californian with accessible, reliable, and affordable service for decades to come. However, if the state cuts its own investment and uses federal dollars primarily to serve the “unserved,” it would be choosing to take away grant funds from “underserved” communities all across California otherwise entitled to those funds under federal law, leaving them behind. As noted, S.B. 156 was tailored to ensure fiber optic deployment to every “unserved” Californian. Combined with federal funding, we can provide the same level of access to virtually every “underserved” Californian (residents lacking broadband that delivers speeds of at least 100/20 Mbps) as well. Taken together, we are talking about funding available to connect virtually every Californian.

Our elected officials made a promise in 2021 to bring future-proof, affordable, reliable, and accessible internet to all Californians. They should be proud of charting the right course then. Let us remind them that right now we have a once-in-a-lifetime opportunity to bridge the digital divide, create unprecedented economic development opportunities, and address systemic harms for generations to come. But only if we stay the course on broadband infrastructure funding.


Tell Gov. Newsom and the Legislature

to Keep Their Broadband Promises


Chao Liu

A Year Since Dobbs, The Fight For Reproductive Privacy and Information Access Continues

3 months ago

A year ago this Saturday, the Supreme Court's Dobbs abortion ruling overturned Roe v. Wade. This decision deprived millions of people of a fundamental right. As we wrote then, it also underscored the importance of fair and meaningful protections for data privacy. In the past year, EFF staff have worked with reproductive justice and civil liberties organizations to protect and advocate for the digital rights of people seeking or supporting reproductive care; here are some highlights from just the past month.

Right now, EFF is a proud sponsor—along with If/When/How and ACLU California Action—of Assemblymember Mia Bonta (D-Oakland)'s A.B. 793. This bill would protect people seeking abortion and gender-affirming care from dragnet-style digital surveillance. AB 793 targets a type of dragnet surveillance that can compel tech companies to search their records and reveal the identities of all people who have been in a certain location or looked up a particular keyword online. These demands, known as “reverse demands,” “geofence warrants,” or “keyword warrants,” enable local law enforcement in states across the country to request the names and identities of all people whose digital data shows they’ve spent time near a California abortion clinic or searched for information about gender-affirming care online.

A coalition of more than 50 reproductive justice, civil liberties, LGBTQI+, and privacy groups is supporting the bill; it is also supported by Google and the Law Enforcement Action Partnership. However, the bill has faced opposition from other law enforcement lobbyists and faces a difficult path in the California Senate. If you live in California and support the privacy rights of people seeking reproductive and gender-affirming care, please tell your lawmakers that you care about this issue:


Support A.B. 793

A.B. 793 builds on some important first steps California took to step up protections around reproductive data, which EFF supported. Along with other states, including Washington and New York, California protects the right to abortion and has passed laws limiting how information can be shared with investigations originating in states that don't protect this right. Those states have also been busy, and we have worked to defeat bills aiming to silence or punish those who post about abortion online. Such bills have implications far beyond abortion. Whenever the government tries to restrict our ability to access information, or tries to decide which websites people may visit, it threatens our First Amendment rights.

EFF has also weighed in at the federal level. In addition to supporting the My Body, My Data Act by Rep. Sarah Jacobs, we were also one of 125 organizations to endorse comments from medical providers asking the U.S. Department of Health and Human Services to improve protections for health information. The comments addressed proposed changes to the Health Insurance Portability and Accountability Act (HIPAA), a federal law that aims to protect sensitive patient health information. The comments call on the federal government to improve protections for a wide range of services that fall under the umbrella of "reproductive health care" and ensure medical providers aren't put in a position where they feel compelled to report their patients. "Since the Dobbs decision, the specter of criminalization has increased significantly, for both patients and providers. People must feel – and actually be – safe while accessing health care, but the overturning of Roe v. Wade further erodes this very necessary trust between patients and providers," the comments said.

Finally, EFF has also built on its existing work opposing police surveillance technologies by highlighting the dangers these technologies pose to those seeking reproductive care. Along with the ACLU of Northern California and the ACLU of Southern California, EFF sent letters to 71 California police agencies in 22 counties, demanding that they immediately stop sharing automated license plate reader (ALPR) data with law enforcement agencies in anti-abortion states. This data sharing violates California law and could enable prosecution of abortion seekers and providers elsewhere.

The agencies that received the demand letters have shared ALPR data with law enforcement agencies across the country, including states with abortion restrictions such as Alabama, Idaho, Mississippi, Oklahoma, Tennessee, and Texas. Since 2016, sharing any ALPR data with out-of-state or federal law enforcement agencies has been a violation of the California Civil Code (S.B. 34). Nevertheless, many agencies continue to use services like Vigilant Solutions or Flock Safety to make the ALPR data they capture available to out-of-state and federal agencies.

Data privacy, free expression, and freedom from surveillance intersect with just one corner of the broader fight for reproductive justice. But, in the past year, EFF has learned so much from medical providers, academics, and advocates—such as If/When/How, Planned Parenthood, Digital Defense Fund, and many others—about how we can support their long history of advocacy. A year out from Dobbs, we're still in the middle of a long battle and ready to keep up the fight.

Hayley Tsukayama

Steering Mobility Data to a Better Privacy Regime

3 months ago

Cars today collect a lot more data than they used to, often leaving drivers' privacy unprotected. Car insurance is mainly regulated at the state level—there’s no federal privacy law for car data—but unsurprisingly there is an active government and private market for vehicle data, including location data, which is difficult if not impossible to deidentify. Advertisers, investment companies, and insurance companies are among those who want to actively collect or use this data to deliver and enhance their products.

While we can’t anticipate all the issues that will emerge, vehicle data should not be used in ways that people do not understand or know about. And even when consumers agree to share their vehicle data, such as in exchange for better prices, we need proper guardrails in place to ensure data may only be used for purposes and by entities that people have agreed to.

Two components of mobility data have the highest value in the marketplace. The first is location data, which is incredibly sensitive. Where we go can easily point to who we are. A widely cited 2013 study from Nature found that four spatio-temporal points from an “anonymous” dataset can reidentify 95 percent of people. Just two could uniquely recognize 50 percent of people. Currently, much of that data is gathered from smartphones, but vehicle data is another common source.
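The reidentification risk described above can be made concrete with a toy sketch. The dataset, identifiers, and locations below are entirely hypothetical; the point is only to show how a small set of (place, time) observations can single out one person in an "anonymous" trace dataset:

```python
# Toy "anonymous" mobility dataset: each person's trace is a set of
# (place, hour) spatio-temporal points. All names/places are hypothetical.
traces = {
    "user_a": {("cafe", 8), ("office", 9), ("gym", 18), ("home", 22)},
    "user_b": {("cafe", 8), ("office", 9), ("bar", 19), ("home", 23)},
    "user_c": {("school", 8), ("office", 9), ("gym", 18), ("home", 22)},
}

def is_unique(points, traces):
    """Return True if exactly one person's trace contains all the points."""
    matches = [user for user, trace in traces.items() if points <= trace]
    return len(matches) == 1

# A single common point matches many people...
print(is_unique({("office", 9)}, traces))               # False: three matches
# ...but just two observations of the same person can pinpoint them.
print(is_unique({("cafe", 8), ("bar", 19)}, traces))    # True: only user_b
```

This mirrors the Nature finding in miniature: as points are added, the set of matching traces shrinks rapidly, which is why location data is so hard to deidentify.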

The second is data used to derive risk, often referred to as telematics data. Some telematics data is intuitively familiar—how hard you brake, how sharply you take turns, whether your behavior indicates you're looking at your phone while driving. But we don’t know what, of all the kinds of personal data that cars already collect—including, for example, footage from in-vehicle cameras—companies might find useful for risk assessment. Today, all of the top ten insurance companies have opt-in, voluntary programs that allow consumers to contribute their own telematics data, used primarily for pricing auto insurance. Insurance companies should only collect what they need to get a clear, fair assessment of driving risk. To do so, they may not need to collect information such as location data—which, as we have outlined, raises serious and possibly insoluble privacy concerns.

Insurance programs are regulated separately in each state where they operate; every state except California currently allows the use of telematics data for insurance rating. But privacy protections for this data vary widely across states. EFF neither recommends nor opposes the use of telematics data for insurance rating. But any state that has or is considering telematics rating should understand the risks it poses and require strong, privacy-protective regulations to mitigate those risks effectively.

Potential For Harm

Location information is particularly useful for someone who wants to learn and infer a lot about you. If you thought smartphones were exciting for advertisers, for example, wait until they really leverage your vehicle data. Some are already pushing to get car data to serve you ads as you drive—Pull over in two exits for a discount on a cup of coffee!—which would also feed a lot more data about your daily habits into the advertising data ecosystem. That could happen, for example, through a deal with your smart infotainment software system, or through some arrangement with toll agencies. Governments and companies are increasingly asking for location data—for real-time traffic information, for example—by tracking your location through your smartphone apps, or even by putting location trackers right in your license plate.

Cars can also collect information not only about the vehicle itself, but also about what's around the vehicle, and that data can reveal a lot about the people inside of the car. Location data has been and can be weaponized against marginalized and underserved communities. Such data extracted from a car could easily be used to identify those who seek reproductive or gender-affirming care, or who aid others in doing so—a real threat after the Supreme Court’s Dobbs decision and other states’ actions to criminalize care for pregnant people and transgender people.

Privacy isn’t the only problem with the car insurance industry—there are also serious equity issues. Insurance rating can rely on a complicated set of indirect measurements—such as the number of times a driver's been pulled over, the number of years they've been driving, and the garaging address of a car. While these factors can predict risk, they can also have a disparate impact on certain consumers who may be penalized for living in a certain neighborhood, for example. Similarly, groups such as Black drivers—who are more likely to be pulled over by law enforcement—may also see a disproportionate negative impact from this method.

Potential For Innovation

We have identified a lot of potential for harm from vehicle data. However, we recognize that data collected from vehicles can also be used to assess real driver behavior in ways that depart from older methods—and in ways some people may want measured. Many people sign up voluntarily for programs that give insurance companies information on their driving habits. By enrolling in these telematics programs, they confirm they are open to sharing this information—but only for the express purpose of setting insurance rates based on those habits.

New tools and resources that show a potential to improve fairness and equity without compromising privacy should not be ignored. Far more research on this subject is needed, and regulators—both those allowing this system now and those who may be considering it—should consider the comparative effects of both kinds of systems.

Rules of the Road Will Help Everyone

Given the sensitivity of this data and what it can reveal about individuals, companies should clearly spell out which data they collect and how that data is directly relevant to determining a driver’s safety.

Any consideration of telematics data must be accompanied by strong, strict data collection, use, and privacy principles to ensure consumer protection, safety, and equity. The telematics industry should reject the approach of so many other companies—collecting broad amounts of data and trying to justify that collection later. Instead, companies should only hold on to this data for as short a time as is practicable, to avoid data breach or other unanticipated sharing. They should also ensure that information collected to protect driver safety does not end up being sold, shared, or accessed by others who wish to use it for other purposes. And any telematics scheme must be introduced on an opt-in-only basis that does not penalize those who wish to protect their privacy, and must have strong consumer protections in place.

We call on regulators and insurance companies to consider the following principles at a minimum.

- Data Minimization and Informed Consent. Insurance companies may not collect, process, or use any data before a policyholder accepts the terms and conditions of a telematics program directly from an insurer. Insurance companies also cannot do these things after a policyholder revokes their consent.

- Transparency about Data Use. To use telematics data, insurers must tell their customers, either before or at the time they enroll in a telematics program, that the insurer will abide by data use and collection rules. These should include an explanation of how companies capture data; a full description of what data companies collect and use; what data will be used to determine rates; and how people can request access to their information. People must also be told how to dispute any information they think is inaccurate. Companies should also explain which outside parties can access data and when, and give people clear instructions on how to inquire about a program, how to file complaints about it, and how to end their participation.

- Purpose Limitation and Opt-in Consent. A company that operates a telematics program must obtain consent from a consumer before sharing, selling, or disclosing their data. They must also get consent if they want to use a person's information for marketing or for any other purpose.

- Notice and Transparency about Data Sharing.  Insurers that use telematics must give policyholders notice when they share information. This notice must include the name of the company that received the information.

- Non-Discrimination. All insurers that offer a telematics rating program must also offer an option to be rated without telematics. 

- Location Data Retention and Use. If insurers collect precise geo-locational data, they can only retain it and any information from which precise location may be derived for 18 months after a policy expires, unless required for a claim, litigation hold, or for compliance with a Department of Insurance audit.
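The retention principle above is mechanical enough to sketch in code. This is a minimal illustration of the proposed 18-month rule, not any insurer's actual policy engine; the function name, the approximation of a month as 30 days, and the single `legal_hold` flag are all assumptions for the sake of the example:

```python
from datetime import date, timedelta

# Hypothetical encoding of the proposed rule: precise location data may be
# kept for ~18 months after a policy expires, unless a listed exception
# (claim, litigation hold, or Department of Insurance audit) applies.
RETENTION_DAYS = 18 * 30  # rough 18-month window; a regulation would define this exactly

def may_retain_location(policy_expired_on, today, legal_hold=False):
    """Return True if precise location data may still be retained."""
    if legal_hold:  # claim, litigation hold, or DOI audit exception
        return True
    return today <= policy_expired_on + timedelta(days=RETENTION_DAYS)

print(may_retain_location(date(2022, 1, 1), date(2023, 6, 1)))  # inside the window
print(may_retain_location(date(2021, 1, 1), date(2023, 6, 1)))  # past the window
```

Encoding retention rules as an explicit, testable check like this—rather than as an afterthought—is one way companies could demonstrate compliance with such a principle.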

We propose these principles because, without appropriate limits and privacy practices regarding the collection and use of personal data, even innovative uses of data can pose enormous harm to consumers and perpetuate structural discrimination and inequity.

People should know what information is being collected about them and have meaningful choices about how and whether that information is shared. Insurers should recognize this; not only because it is right but also because it creates trust with their customers. Privacy is as important behind the wheel as it is for the phone in your pocket—and regulators should give drivers control over how companies collect and use this data.

Hayley Tsukayama

Student Monitoring Tools Should Not Flag LGBTQ+ Keywords

3 months ago

One of the more dangerous features of student monitoring tools like GoGuardian, Gaggle, and Bark is their “flagging” functionality. The tools can scan web pages, documents in students’ cloud drives, emails, video content, and more for keywords about topics like sex, drugs, and violence. They then either block or flag this content for review by school administrators. 

But in practice, these flags don’t work very well—many of the terms flagged by these student monitoring applications are often ambiguous, implicating whole swathes of the net that contain benign content. Worse still, these tools can alert teachers or parents to content that indicates something highly personal about the student—like the phrase “am I gay”—in a context that implies such a phrase is dangerous. Numerous reports show that the regular flagging of LGBTQ+ content creates a harmful atmosphere for students, some of whom have been outed because of it. This is particularly problematic when school personnel or family members do not understand or support a queer identity.
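The ambiguity problem is easy to demonstrate. The sketch below uses naive substring matching—an assumption about how over-broad filters behave, not any vendor's actual implementation—and the keyword list is hypothetical:

```python
# Hypothetical flag list, not any monitoring vendor's actual keywords.
FLAGGED_KEYWORDS = {"sex", "drugs", "violence"}

def flag(text):
    """Naive substring matching, roughly how over-broad filters behave."""
    lowered = text.lower()
    return sorted(kw for kw in FLAGGED_KEYWORDS if kw in lowered)

# Benign content trips the filter because substrings carry no context:
print(flag("Essex county history of Middlesex"))            # flags 'sex'
print(flag("Report on domestic violence support resources")) # flags 'violence'
```

Both examples are plainly benign—one is geography, the other a support resource—yet both would be surfaced to an administrator, which is exactly the over-flagging dynamic described above.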

We call on all student monitoring tools to remove LGBTQ+ terms from their blocking and flagging lists

Thankfully, some student monitoring software companies have heard the concerns of students and civil liberties groups. Gaggle recently removed LGBTQ+ terms from their keyword list, and GoGuardian has done the same, per our correspondence with the company. We commend these companies for improving their keyword lists—it’s a good step forward, though not nearly enough to solve the general problem of over-flagging by the apps. In our research, LGBTQ+ resources are still commonly flagged for containing words like ‘sex,’ ‘breasts,’ or ‘vagina.’ 

Though these tools are intended to detect signs that a student may be at risk of depression, self-harm, suicide, bullying, or violence, their primary use is disciplinary. Eighty-nine percent of teachers say their schools use some form of student monitoring software, which has a greater negative impact on students from low-income families, Black students, Hispanic students, and those with learning differences. And as we’ve written before, combined with the criminalization of reproductive rights and an increasing number of anti-trans laws, software that produces and forwards data—sometimes to police—about what young people type into their laptops is a perfect storm of human rights abuses.

The improvements by Gaggle and GoGuardian don’t solve all the problems with student monitoring apps. But these companies have rightly recognized that flagging young people for online activity related to sexual preference or gender identity creates more danger for students than allowing them to research such topics. We call on all student monitoring tools to remove LGBTQ+ terms from their blocking and flagging lists to ensure that young people’s privacy isn't violated, and to ensure that sexual and gender identity is not penalized. 

This article is part of our EFF Pride series. Read other articles highlighting this year's work at the intersection of digital rights and LGBTQ+ issues on our issue page.

Jason Kelley

Remembering Daniel Ellsberg

3 months 1 week ago

“Popular government without popular information is but the prologue to a farce or tragedy.”  - James Madison

The world lost an unmistakable voice this week, as Daniel Ellsberg passed away at 92.  

Dan will be remembered for many things, of course most prominently providing the Pentagon Papers to the New York Times in 1971. Although he hated being called one, he was rightly a hero to anyone who believes that we must be in a position to evaluate our governments and cast our votes based upon truth rather than lies.  

The biggest lesson Dan taught me was to see the dangers arising from governmental secrecy from the position of those keeping the secrets. Dan talked about how the government was too often driven by what he called “smart dumb” people. He talked about how governmental officials' proximity to power and insider knowledge led them to do stupid things—like continuing a war that was clearly lost, or lying about weapons of mass destruction—and how these kinds of terrible misjudgments and mistakes are as inevitable as they are insidious. Dan was as steadfast in debunking the myths surrounding governmental secrecy as he was in giving unwavering public support to others who took courageous steps to tell the truth about illegal, immoral, and improper governmental actions, especially around matters of national security.

I first met Dan when I was helping with the creation of the Freedom of the Press Foundation. FPF was started in 2012 by Trevor Timm with help from Rainey Reitman and Micah Lee, who were all EFF staffers at the time, along with Laura Poitras, Glenn Greenwald and a few others. One of EFF's founders and Board members, John Perry Barlow, was also a driving force for the creation of FPF. I believe it was Barlow who brought Dan into the founding conversations. EFF served as legal counsel for the fledgling organization, and we still advise it at times.  

I’ll never forget one of the first organizing meetings we held at EFF’s brick Mission District offices. Upon seeing Dan unceremoniously walk into our little conference room, I was both tongue-tied and star-struck. But he didn’t seem to notice. He sat down and quickly helped us think through what the organization should be and how it should function. He was steadfast that the organization should stand up unapologetically for Wikileaks and Julian Assange, which had just published evidence of war crimes by the U.S. government in Iraq provided by Chelsea Manning. Wikileaks was subject to a financial blockade in which no payment processor would handle contributions to it. Dan went on to stand up for—and attend the trial of—Chelsea Manning, and much later he stood firm in support of Ed Snowden. Dan’s certainty and conviction were contagious, as was his courage.

EFF later held a public event in Berkeley discussing NSA spying, where Dan spoke. He explained how those who are charged with keeping secrets become convinced that they are smarter and more capable than those who don’t have that information. He discussed how secrecy creates a feedback loop in which officials inside a secrecy bubble start to believe that they are invincible. They become subject to groupthink and so are increasingly unwilling to recognize legitimate criticism or concerns from those outside the bubble. He made clear the dangerous and corrosive power of governmental secrecy, something he had experienced, and then rejected, in himself. 

Dan also was unflinching in asserting that most governmental secrecy is not necessary.  “Most secrecy is not directed at keeping secrets from external nations, enemies, allies, or otherwise. It’s to keep secrets from Americans, Congress, and public courts. They’re the ones that have the votes and write the budgets,” he said at a Harvard Law School Human Rights Clinic event in 2011. “They’re the ones whose blame is to be feared.” 

I'm also happy that EFF helped facilitate the first meeting between Dan and Chelsea Manning, at our November 2018 Pioneer Awards ceremony. Dan was lit up and exuberant that night: “I waited 39 years for her to appear in this world,” he said before detailing the significance of the documents Manning had leaked. He praised both her and Edward Snowden, saying, “I have often said that I identify more with them as 'revelationaries' than with any other people in the world."

The world will never see another Dan Ellsberg, but the legacy he leaves is bigger than his family and friends.  His legacy is in all of the other whistleblowers and truth tellers out there today, and in those who will bravely step forward in the years ahead. May his memory be a blessing to us all. 

Cindy Cohn

There is Nothing Fair About the European Commission’s “Fair Share” Proposal

3 months 1 week ago

In a fight between the big tech companies and the internet provider giants, it can be very tempting to not care who wins and loses. However, in the case of the ISPs' "fair share" proposals, ISP victory would mean undermining one of the very foundations of the internet—net neutrality.

After the European Commission held a public consultation on whether they should adopt what they call a “fair share” proposal, they unfortunately voted to move forward with this dangerous plan. This proposal is nothing but a network usage fees regime, which would force certain companies to pay internet service providers (ISPs) for their ability to deliver content to consumers. This idea not only hurts consumers, but also breaks a status quo that facilitated and continues to facilitate the rapid spread of the global internet. Accordingly,  we filed comments that called for the European Commission to abandon this completely unfair idea altogether.

The ISP Argument for “Fair Share”

The misguided idea behind the consultation is that large ISPs are suffering mightily because the companies that create and/or deliver information and content online, called content and applications providers (CAPs), are freeriding off the ISPs' physical infrastructure networks. The CAPs you may be most familiar with go by another acronym—FAANG (Facebook, Amazon, Apple, Netflix, and Google)—but the category also encompasses companies that provide many other services.

The ISPs claim they incur costs for delivering this content and that as CAPs push more and more content, those costs increase. They also claim that the increase in internet traffic that has driven these rising costs is in fact caused by the CAPs. Taken together, the argument goes, because the CAPs both cause the traffic and don’t pay for the delivery of their services, CAPs should pay ISPs their “fair share” for using the network.

ISPs then claim that the money they receive from this “fair share” will go toward building infrastructure and expanding the reach of their networks.

The ISP Argument Mischaracterizes the Nature of the Internet

The ISP argument completely mischaracterizes the relationship between CAPs and ISPs. As EFF has written about before, CAPs do not freeload; they have invested almost $900bn into the physical infrastructure of the internet themselves. Their investments have saved ISPs billions of dollars annually. Furthermore, the costs ISPs incur for delivering traffic have not been drastically rising despite increases in traffic, because their investments in fiber-based infrastructure have allowed them to deliver gigabit-and-beyond speeds at ever-lower operating costs.

Their argument also mischaracterizes the nature of the growth of the modern internet. Traffic is not generated by CAPs, but by consumers and end users requesting services (data) from CAPs. If no one used the internet, there would be no traffic—Netflix would not be sending data anywhere. Further, if there were nothing worth doing on the internet, then people would not use it and, once again, there would be no traffic. Which is to say, people use the internet because it is worth using, and they continue to use it at an increasing rate, which means more traffic is demanded and must be generated to meet that demand.

To fulfill, and compete to fulfill, increasing consumer demand, CAPs make investments like the aforementioned $900bn to create higher quality content as well as to bring that content closer to the consumer via infrastructure that hosts, transports, and delivers their services. Most consumers no longer want to wait minutes for low-quality videos and photos to load. CAPs' investments are in part what turned minutes into seconds into milliseconds, and low quality into ever-increasing quality, both in terms of content and user experience.

The ISPs have greatly benefited from this arrangement as well. Increasing user demand and a greater range of available services mean users are willing to pay more for better internet service. If there were nothing worth doing on the internet, consumers would not pay more for greater service. It is because users want to do more that they pay for more. And it is because of users' willingness to pay more that ISPs are motivated to make their own investments in their network infrastructure. Setting aside for now the fundamental flaws in many ISP investment strategies, ISPs profit on user demand.

Taken together, consumers pay for greater internet service to get more content, so they can enjoy more content. This increase in demand prompts CAPs to create more content and make further investments into the network. Consumers, seeing more and better content, demand more content which prompts them to be willing to pay for greater internet service from their ISPs as well as once again signaling to CAPs to further improve and invest. ISPs, seeing consumers willing to pay for greater internet service, invest in their networks to provide that greater service. Consumers get a better experience on the internet, CAPs and the digital economy flourish, and ISPs are rewarded handsome profits. This virtuous cycle built the modern internet and continues to drive its growth and expansion today.

There is Nothing Fair About “Fair Share”

What the European Commission is calling “fair share,” therefore, isn’t fair at all. It is blatant double-dipping into a virtuous market cycle that benefited all participants for the last 40 years. More than that, it threatens to break this cycle to the detriment of everyone except the largest ISPs who, by virtue of their size, would receive the lion’s share of the payment. All players in the digital economy from the smallest digital storefront to the largest platforms will have their operating costs increased. Competition in the ISP market will suffer as small and medium-sized ISPs will be forced to pay the largest ISPs for moving their traffic. And as a result of all this, the consumer will suffer a worse and more expensive internet.

The “fair share” proposal also directly threatens the principles of net neutrality in Europe. Currently, European ISPs have an obligation to provide connectivity to virtually everyone and not to degrade service quality based on commercial considerations or on whose traffic they are carrying. What this means for consumers is that they are able to decide for themselves what their online experience is without worrying about it being slower, blocked, or more expensive. If, under “fair share,” ISPs are allowed to charge CAPs differential pricing for the traffic they transmit, that would directly violate net neutrality principles. Indeed, any price regulation on data transmission or any sort of penalty on CAPs who refuse to pay ISPs would violate net neutrality principles. Without net neutrality, ISPs will have control over consumers’ online experience. Further, CAPs will pass the cost of the fees on to consumers, meaning higher subscription fees and worse services. Once again, the consumer pays more for less.

At the end of the day, "fair share" is a solution in search of a problem. The real issue is that ISPs feel they should not act as neutral infrastructure between their customers and the services they want, but should squeeze money out of every part of the internet. That's not a problem. That's just greed.

The European Commission’s public consultation and ill-advised decision is just one step in a long process. We are prepared to point out the fundamental flaws of “fair share” and fight against it every step of the way. We urge the European Commission to once again reject the adoption of network usage fees in all but name. 

Chao Liu

What Reddit Got Wrong

3 months 1 week ago

After weeks of burning through users’ goodwill, Reddit is facing a moderator strike and an exodus of its most important users. It’s the latest example of a social media site making a critical mistake: users aren’t there for the service; they’re there for the community. Building barriers to access is a war of attrition.

Reddit has an admirable record when it comes to defending an open and free internet. While not always perfect, the success of the site is owed to its model of empowering moderators and users to engage with the site in a way that makes sense for them. This freedom for communities to experiment with and extend the platform let it continue to thrive while similar sites, like Fark and Digg, lost major chunks of their user base after making controversial and restrictive design choices to raise profitability.

Reddit maintained openness in two notable ways throughout its history. It supported community-led moderation from volunteer workers, and it embraced developers looking for automated access to the site, through open protocols (e.g. RSS) and a free API.

What Reddit got right

Content moderation doesn’t work at scale. Any scheme which attempts it is bound to fail. For sites which need continuous user growth, that is a problem. So what can they do? Well, we know what doesn’t work:

  • Simply having minimal or no moderation results in a trash fire of bigotry and illegal content, quickly hemorrhaging any potential revenue and potentially landing a platform in legal trouble. 
  • Automating moderation inevitably blocks legitimate content that wasn’t targeted, and is gamed by bad actors who get around it.

Every approach comes to the same conclusion: a platform needs workers, lots of them, around the clock. Sites are then stuck trying to minimize this labor cost somehow. The worst version of this is a system of poorly paid workers, typically outsourced, merely reviewing user reports and automated moderation decisions. These mods invisibly compare out-of-context posts to a set of ever-changing and arbitrary rules. It’s grueling work, where one only views the worst the internet has to offer while remaining totally alienated from the community.

When a platform turns its back on the community, it doesn’t end well

A better model, which Reddit’s success is built on, is empowering moderators from within a community. Fortunately for platforms, users care so deeply for these digital commons that they will volunteer to do it: a convenient source of free labor.

It’s a pattern we see in the other component of Reddit's success: empowering motivated users to build useful tools for the site for free. The whole open source ecosystem is built around this truth—that people’s passion for the communal good is enough incentive for them to create and innovate.

Unfortunately, the communal good doesn’t keep the lights on. Unlike moderators on Reddit, who have no established way to seek support from the platform or its users, developers can be compensated by a grateful community in a few ways: publishing to app stores, offering freemium features, or simply requesting optional donations to support the project. While these schemes are often not enough to repay the hours spent on a project, they can make the work of maintaining and improving it more sustainable.

This sustained commitment from external developers directly benefits a platform like Reddit in the long run. This ecosystem is why Reddit has a wide array of tools for moderation, accessibility, and content creation, without having to directly employ (and pay) these developers. These tools range from simple bots to fully developed apps and services. Regardless of size, all of these contributions, in some way, drive user engagement, since everyone can meet with their community and shape their experience on the platform.

Chasing profitability

Reddit is transparent about the fact that the company is not profitable. But heading into their IPO later this year, with a potential recession looming, they are desperate to show that the platform can make money. This appears to have kicked off the second stage of “enshittification”, in which users are squeezed to appeal to business customers.

The monetization creep has been evident for a while. Reddit has added a “Reddit Premium” subscription; offered “community rewards” as a paid super-vote; embraced an NFT marketplace; changed the site's design to surface more recommended content; and started nudging users toward the official mobile app. The site has also been adding more restrictions on uploading and viewing “not safe for work” (NSFW) content. All this while community requests for improvements to moderation tools and accessibility features have gone unaddressed on mobile, driving many users to third-party applications.

Perhaps the worst development came on April 18th, when Reddit announced changes to its Data API that would take effect on July 1st, including new “premium access” pricing for users of the API. While this wouldn’t affect projects on the free tier, such as moderator bots or tools used by researchers, the new pricing seems to be an existential threat to third-party applications for the site. It also bears a striking resemblance to a similar bad decision Twitter made this year under Elon Musk.

At the center of this controversy has been Apollo, an alternate Reddit client app on iOS with 1.5 million monthly users. Facing potential API fees allegedly amounting to $20M per year, the app may be forced to shut down entirely. While several clients will shut down, others will need to adopt a monthly subscription model and suspend their free tier to stay viable. Non-commercial and accessibility-focused clients, such as the open source RedReader app, however, were recently offered an exemption for the time being.

Complicating the issue further is a new restriction on API access to NSFW content, putting any third-party app at a disadvantage against the official app or even the web version of the site. Even if a developer can afford API access, they may be left with inferior access to the site, and such restrictions create a disincentive for using the NSFW tag, undermining its utility.

The writing is on the wall: Reddit’s actions seem designed to corral users into its official app by limiting third-party competition on mobile, alongside tests of new limitations on the mobile version of the site.

Moderators Strike

Outraged by these changes and the hostile treatment of third-party developers, thousands of moderators on the site have blacked out over 8,000 subreddits in solidarity with developers. (You may have noticed this if you tried to view almost anything on Reddit in the past 24 hours, and couldn't.) Many have vowed to remain locked to new submissions until accessibility features for blind users are implemented and the API is revised to accommodate third-party apps. Some have even doubled down and set their communities to private, making all content inaccessible to non-members. Moderators are putting a lot on the line here, risking the communities they spent countless hours maintaining on the platform.

Since it started, the blackout has kicked off a flurry of news coverage and temporarily crashed the website. The response from Reddit has only stoked the flames of this controversy. 

After a disastrous AMA (i.e. “ask me anything” forum) that breathed new life into the blackout protest, Reddit’s CEO continues to defend the company’s decision. Publicly, Reddit has claimed that these changes are necessary due to operating costs and privacy concerns. “Privacy-washing” has been used as an excuse to limit automated access to a site before, but is undercut here most directly by the availability of a free API tier, which can access the same information as the paid tier for lower-volume uses.

It’s this labor and worker solidarity which gives users unique leverage over the platform

Details about Reddit’s API-specific costs were not shared, but it is worth noting that an API request is commonly no more burdensome to a server than an HTML request, i.e. visiting or scraping a web page. Having an API just makes it easier for developers to maintain their automated requests. It is true that most third-party apps tend to not show Reddit’s advertisements, and AI developers may make heavy use of the API for training data, but these applications could still (with more effort) access the same information over HTML.
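To make that point concrete, here is a minimal sketch using a toy local server with made-up data (none of these paths or posts are Reddit's actual endpoints or content): serving the same content as JSON for an "API" request or as HTML for an ordinary page request costs the server essentially the same work per GET.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical user-generated content standing in for a site's data.
POSTS = [{"title": "hello", "score": 42}]

class Handler(BaseHTTPRequestHandler):
    """Serves the same data as JSON ('API') or HTML (web page).
    Either way, one GET means one lookup and one serialization."""

    def do_GET(self):
        if self.path == "/posts.json":   # API-style request
            body = json.dumps(POSTS).encode()
            ctype = "application/json"
        else:                            # ordinary page request
            items = "".join(
                f"<li>{p['title']} ({p['score']})</li>" for p in POSTS
            )
            body = f"<html><body><ul>{items}</ul></body></html>".encode()
            ctype = "text/html"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):        # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Both are plain HTTP GETs; the JSON one is just easier to parse,
# which is the main convenience an API offers a developer.
api_body = urlopen(f"http://127.0.0.1:{port}/posts.json").read()
html_body = urlopen(f"http://127.0.0.1:{port}/posts.html").read()
server.shutdown()

print(json.loads(api_body)[0]["title"])  # prints "hello" — same data either way
```

The sketch illustrates why cutting off an API does not stop determined scrapers: the HTML endpoint exposes the same information, just in a form that takes more effort to parse.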

The heart of this fight is over what Reddit’s CEO calls their “valuable corpus of data,” i.e. the user-made content on the company’s servers, and over who gets to live off this digital commons. While Reddit provides essential infrastructural support, these community developers and moderators make the site worth visiting, and any worthwhile content is the fruit of their volunteer labor. It’s this labor and worker solidarity which gives users unique leverage over the platform, in contrast to past backlash against other platforms.

Moving to the Fediverse

This tension between these communities and their host has, again, fueled more interest in the Fediverse as a decentralized refuge. A social network built on an open protocol can afford some host-agnosticism, and allow communities to persist even if individual hosts fail or start to abuse their power. Unfortunately, discussions of Reddit-like fediverse services Lemmy and Kbin on Reddit were colored by paranoia after the company banned users and subreddits related to these projects (reportedly due to “spam”). While these accounts and subreddits have been reinstated, the potential for censorship around such projects has made a Reddit exodus feel more urgently necessary, as we saw last fall when Twitter cracked down on discussions of its Fediverse alternative, Mastodon.

Reddit’s future is still uncertain, as the company doubles down on their changes and communities commit to a stricter and more indefinite blackout. These new API schemes are bad for platforms, and bad for the communities who use them. What we see time and time again, though, is that when a platform turns its back on the community, it doesn’t end well. They’ll revolt and they’ll flee, and the platform will be left trying to squeeze dwindling profits from a colossal wreck.

Rory Mir