EFF to Tenth Circuit: Protest-Related Arrests Do Not Justify Dragnet Device and Digital Data Searches

1 month ago

The Constitution prohibits dragnet device searches, especially when those searches are designed to uncover political speech, EFF explained in a friend-of-the-court brief filed in the U.S. Court of Appeals for the Tenth Circuit.

The case, Armendariz v. City of Colorado Springs, challenges device and data seizures and searches conducted by the Colorado Springs police after a 2021 housing rights march that the police deemed “illegal.” The plaintiffs in the case, Jacqueline Armendariz and a local organization called the Chinook Center, argue these searches violated their civil rights.

The case details repeated actions by the police to target and try to intimidate plaintiffs and other local civil rights activists solely for their political speech. After the 2021 march, police arrested several protesters, including Ms. Armendariz. Police alleged Ms. Armendariz “threw” her bike at an officer as he was running, and although the bike never touched the officer, they charged her with attempted simple assault. Police then used that charge to support warrants to seize and search six of her electronic devices—including several phones and laptops. The search warrant authorized police to comb through these devices for all photos, videos, messages, emails, and location data sent or received over a two-month period and to conduct a time-unlimited search of 26 keywords—including terms as broad and sweeping as “officer,” “housing,” “human,” “right,” “celebration,” “protest,” and several common names. Separately, police obtained a warrant to search all of the Chinook Center’s Facebook information and private messages sent and received by the organization over a week, even though the Center was not accused of any crime.

After Ms. Armendariz and the Chinook Center filed their civil rights suit, represented by the ACLU of Colorado, the defendants filed a motion to dismiss the case, arguing the searches were justified and, in any case, officers were entitled to qualified immunity. The district court agreed and dismissed the case. Ms. Armendariz and the Center appealed to the Tenth Circuit.

As explained in our amicus brief—which was joined by the Center for Democracy & Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—the devices searched contain a wealth of personal information. For that reason, and especially where, as here, political speech is implicated, it is imperative that warrants comply with the Fourth Amendment.

The U.S. Supreme Court recognized in Riley v. California that electronic devices such as smartphones “differ in both a quantitative and a qualitative sense” from other objects. Because our electronic devices have immense storage capacities, just one type of data can reveal more than was previously possible, since it can span years’ worth of information. For example, location data can reveal a person’s “familial, political, professional, religious, and sexual associations.” And combined with all of the other available data—including photos, video, and communications—a device such as a smartphone or laptop can store a “digital record of nearly every aspect” of a person’s life, “from the mundane to the intimate.” Social media data can also reveal sensitive, private information, especially with respect to users’ private messages.

It’s because our devices and the data they contain can be so revealing that warrants for this information must rigorously adhere to the Fourth Amendment’s requirements of probable cause and particularity.

Those requirements weren’t met here. The police’s warrants failed to establish probable cause that any evidence of the crime they charged Ms. Armendariz with—throwing her bike at an officer—would be found on her devices. And the search warrant, which allowed officers to rifle through months of her private records, was so overbroad and lacking in particularity as to constitute an unconstitutional “general warrant.” Similarly, the warrant for the Chinook Center’s Facebook messages lacked probable cause and was especially invasive given that access to these messages may well have allowed police to map the activists who communicated with the Center about social and political advocacy.

The warrants in this case were especially egregious because they appear designed to uncover First Amendment-protected activity. Where speech is targeted, the Supreme Court has recognized that it’s all the more crucial for warrants to apply the Fourth Amendment’s requirements with “scrupulous exactitude” to limit an officer’s discretion in conducting a search. That did not happen here, and the searches thus implicated several of Ms. Armendariz’s and the Chinook Center’s First Amendment rights—including the right to free speech, the right to free association, and the right to receive information.

Warrants that fail to meet the Fourth Amendment’s requirements disproportionately burden disfavored groups. In fact, the Framers adopted the Fourth Amendment to prevent the “use of general warrants as instruments of oppression”—but as legal scholars have noted, law enforcement routinely uses low-level, highly discretionary criminal offenses to impose order on protests. Once arrests are made, they are often later dropped or dismissed—but the damage is done, because protesters are off the streets, and many may be chilled from returning. Protesters undoubtedly will be further chilled if an arrest for a low-level offense then allows police to rifle through their devices and digital data, as happened in this case.

The Tenth Circuit should let this case proceed. Allowing police to conduct a virtual fishing expedition through a protester’s devices, especially when the justification for that search is an arrest for a crime with no digital nexus, contravenes the Fourth Amendment’s purposes and chills speech. It is unconstitutional and should not be tolerated.

Brendan Gilligan

Americans Are Uncomfortable with Automated Decision-Making

1 month ago

Imagine a company you recently applied to work at used an artificial intelligence program to analyze your application to help expedite the review process. Does that creep you out? Well, you’re not alone.

Consumer Reports recently released a national survey finding that Americans are uncomfortable with the use of artificial intelligence (AI) and algorithmic decision-making in their day-to-day lives. The survey of 2,022 U.S. adults was administered by NORC at the University of Chicago and examined public attitudes on a variety of issues. Consumer Reports found:

  • Nearly three-quarters of respondents (72%) said they would be “uncomfortable”—including nearly half (45%) who said they would be “very uncomfortable”—with a job interview process that allowed AI to screen their interview by grading their responses and, in some cases, facial movements.
  • About two-thirds said they would be “uncomfortable”—including about four in ten (39%) who said they would be “very uncomfortable”—allowing banks to use such programs to determine if they were qualified for a loan, or allowing landlords to use such programs to screen them as a potential tenant.
  • More than half said they would be “uncomfortable”—including about a third who said they would be “very uncomfortable”—with video surveillance systems using facial recognition to identify them, and with hospital systems using AI or algorithms to help with diagnosis and treatment planning.

The survey findings indicate that people are feeling disempowered by lost control over their digital footprint, and by corporations and government agencies adopting AI technology to make life-altering decisions about them. Yet states are moving at breakneck speed to implement AI “solutions” without first creating meaningful guidelines to address these reasonable concerns. In California, Governor Newsom issued an executive order to address government use of AI, and recently granted five vendors approval to test AI for a myriad of state agencies. The administration hopes to apply AI to such tasks as health-care facility inspections, assisting residents who are not fluent in English, and customer service.

The vast majority of Consumer Reports’ respondents (83%) said they would want to know what information was used to instruct AI or a computer algorithm to make a decision about them. Another supermajority (91%) said they would want a way to correct that data when a computer algorithm was used to make a decision about them.

As states explore how to best protect consumers as corporations and government agencies deploy algorithmic decision-making, EFF urges strict standards of transparency and accountability. Laws should have a “privacy first” approach that ensures people have a say in how their private data is used. At a minimum, people should have a right to access what data is being used to make decisions about them and have the opportunity to correct it. Likewise, agencies and businesses using automated decision-making should offer an appeal process. Governments should ensure that consumers have protections from discrimination in algorithmic decision-making by both corporations and the public sector. Another priority should be a complete ban on many government uses of automated decision-making, including predictive policing.

From deciding who gets housing or the best mortgages, to who gets an interview or a job, to whom law enforcement or ICE investigates, people are uncomfortable with algorithmic decision-making that affects their freedoms. Now is the time for strong legal protections.

Catalina Sanchez

The French Detention: Why We're Watching the Telegram Situation Closely

1 month 1 week ago

EFF is closely monitoring the situation in France in which Telegram’s CEO Pavel Durov was charged with having committed criminal offenses, most of them seemingly related to the operation of Telegram. This situation has the potential to pose a serious danger to security, privacy, and freedom of expression for Telegram’s 950 million users.  

On August 24th, French authorities detained Durov when his private plane landed in France. Since then, the French prosecutor has revealed that Durov’s detention was related to an ongoing investigation, begun in July, of an “unnamed person.” The investigation involves complicity in crimes presumably taking place on the Telegram platform, failure to cooperate with law enforcement requests for the interception of communications on the platform, and a variety of charges having to do with failure to comply with  French cryptography import regulations. On August 28, Durov was charged with each of those offenses, among others not related to Telegram, and then released on the condition that he check in regularly with French authorities and not leave France.  

We know very little about the Telegram-related charges, making it difficult to draw conclusions about how serious a threat this investigation poses to privacy, security, or freedom of expression on Telegram, or on online services more broadly. But it has the potential to be quite serious. EFF is monitoring the situation closely.  

There appear to be three categories of Telegram-related charges:  

  • First is the charge based on “the refusal to communicate upon request from authorized authorities, the information or documents necessary for the implementation and operation of legally authorized interceptions.” This seems to indicate that the French authorities sought Telegram’s assistance to intercept communications on Telegram.  
  • The second set of charges relate to “complicité” with crimes that were committed in some respect on or through Telegram. These charges specify “organized distribution of images of minors with a pedopornographic nature, drug trafficking, organized fraud, and conspiracy to commit crimes or offenses,” and “money laundering of crimes or offenses in an organized group.”  
  • The third set of charges all relate to Telegram’s failure to file a declaration required of those who import a cryptographic system into France.  

Now we are left to speculate. 

It is possible that all of the charges derive from “the failure to communicate.” French authorities may be claiming that Durov is complicit with criminals because Telegram refused to facilitate the “legally authorized interceptions.” Similarly, the charges connected to the failure to file the encryption declaration likely also derive from those “legally authorized interceptions” being encrypted. France very likely knew for many years that Telegram had not filed the required declarations regarding its encryption, yet the company was never charged for that omission.

Refusal to cooperate with a valid legal order for assistance with an interception could be similarly prosecuted in most international legal systems, including the United States. EFF has frequently contested the validity of such orders and the gag orders associated with them, and has urged services to contest them in courts and pursue all appeals. But once such orders have been finally validated by courts, they must be complied with. The situation is more difficult where a nation lacks a properly functioning judiciary or due process, as in China or Saudi Arabia.

In addition to the refusal to cooperate with the interception, it seems likely that the complicité charges also, or instead, relate to Telegram’s failure to remove posts advancing crimes upon request or with knowledge of them. Specifically, the charges of complicity in “the administration of an online platform to facilitate an illegal transaction” and “organized distribution of images of minors with a pedopornographic nature, drug trafficking, [and] organized fraud” could well be based on a failure to take down posts. An initial statement by Ofmin, the French agency established to investigate threats to child safety online, referred to “lack of moderation” as being at the heart of its investigation. Under French law, Article 323-3-2, it is a crime to knowingly allow the distribution of illegal content or provision of illegal services, or to facilitate payments for either.

In particular, this potential “lack of moderation” liability bears watching. If Durov is prosecuted simply because Telegram inadequately removed offending content it was generally aware of, that could expose nearly every other online platform to similar liability. It would also be concerning, though more in line with existing law, if the charges relate to an affirmative refusal to address specific posts or accounts, rather than a generalized awareness. Both of these situations are much different from one in which France has evidence that Durov was more directly involved with those using Telegram for criminal purposes. Moreover, France will likely have to prove that Durov himself committed each of these offenses, not Telegram itself or others at the company.

EFF has raised serious concerns about Telegram’s behavior both as a social media platform and as a messaging app. In spite of its reputation as a “secure messenger,” only a very small subset of messages on Telegram are encrypted in such a way that prevents the company from reading the contents of communications—end-to-end encryption. (Only one-to-one messages with the “secret chats” option enabled are end-to-end encrypted.) And even so, cryptographers have questioned the effectiveness of Telegram’s homebrewed cryptography. If the French government’s charges have to do with Telegram’s refusal to moderate or intercept these messages, EFF will oppose this case in the strongest terms possible, just as we have opposed all government threats to end-to-end encryption all over the world.

It is not yet clear whether Telegram users themselves, or those offering similar services to Telegram, should be concerned. French authorities may ask for technical measures that endanger the security and privacy of those users. Durov and Telegram may or may not comply. Those running similar services may have nothing to fear, or these charges may be the canary in the coal mine warning us all that French authorities intend to expand their inspection of messaging and social media platforms. It is simply too soon, and there is too little information, for us to know for sure.

It is not the first time Telegram’s laissez-faire attitude toward content moderation has led to government reprisals. In 2022, the company was forced to pay a fine in Germany for failing to establish a lawful way to report illegal content or to name an entity in Germany to receive official communications. Brazil fined the company in 2023 for failing to suspend accounts of supporters of former President Jair Bolsonaro. Nevertheless, this arrest marks an alarming escalation by a state’s authorities. We are monitoring the situation closely and will continue to do so.

David Greene

The California Supreme Court Should Help Protect Your Stored Communications

1 month 1 week ago

When you talk to your friends and family on Snapchat or Facebook, you should be assured that those services will not freely disclose your communications to the government or other private parties.

That is why the California Supreme Court must take up and reverse the appellate opinion in the case of Snap v. The Superior Court of San Diego County. This opinion dangerously weakens the Stored Communications Act (SCA), one of the few federal privacy laws on the books. The SCA prevents certain communications providers from disclosing the content of your communications to private parties, or to the government without a warrant, subject to a few narrow exceptions.

EFF submitted an amicus letter to the court, along with the Center for Democracy & Technology.

The lower court incorrectly ruled that modern services like Snapchat and Facebook largely do not have to comply with the 1986 law. Since those companies already access the content of your communications for their own business purposes—including to target their behavioral advertising—the lower court held that they can also freely disclose the content of your communications to anyone.

The ruling came in the context of a criminal defendant who sought access to the communications of a deceased victim with a subpoena. In compliance with the law, both Meta and Snap resisted disclosing the information.

The lower court’s opinion conflicts with nearly 40 years of interpretation by Congress and other courts. It ignores the SCA’s primary purpose of protecting your communications from disclosure. And the opinion gives too much weight to companies’ terms of service. Those terms, which almost no one reads, are where most companies bury their own right to access your communications.

There is no doubt that companies should also be restricted in how they access and use your data, and we need stronger laws to make that happen. For years, EFF has advocated for comprehensive data privacy legislation, including data minimization and a ban on online behavioral advertising. But that does not affect the current analysis of the SCA, which protects against disclosure now.

If the California Supreme Court does not take this up, Meta, Snap, and other providers would be allowed to voluntarily disclose the content of their users’ communications to any other corporations for any reason, to parties in civil litigation, and to the government without a warrant. Private parties could also compel disclosure with a mere subpoena.

Mario Trujillo

Copyright Is Not a Tool to Silence Critics of Religious Education

1 month 1 week ago

Copyright law is not a tool to punish or silence critics. This is a principle so fundamental that it is the ur-example of fair use, which typically allows copying another’s creative work when necessary for criticism. But sometimes, unscrupulous rightsholders misuse copyright law to bully critics into silence by filing meritless lawsuits, threatening potentially enormous personal liability unless they cease speaking out. That’s why EFF is defending Zachary Parrish, a parent in Indiana, against a copyright infringement suit by LifeWise, Inc.

LifeWise produces controversial “released time” religious education programs for public elementary school students during school hours. After encountering the program at his daughter’s public school, Mr. Parrish co-founded “Parents Against LifeWise,” a group that strives to educate and warn others about the harms they believe LifeWise’s programs cause. To help other parents make fully informed decisions about signing their children up for a LifeWise program, Mr. Parrish obtained a copy of LifeWise’s elementary school curriculum—which the organization kept secret from everyone except instructors and enrolled students—and posted it to the Parents Against LifeWise website. LifeWise sent a copyright takedown to the website’s hosting provider to get the curriculum taken down, and followed up with an infringement lawsuit against Mr. Parrish.

EFF filed a motion to dismiss LifeWise’s baseless attempt to silence Mr. Parrish. As we explained to the court, Mr. Parrish’s posting of the curriculum was a paradigmatic example of fair use, an important doctrine that allows critics like Mr. Parrish to comment on, criticize, and educate others on the contents of a copyrighted work. LifeWise’s own legal complaint shows why Mr. Parrish’s use was fair: “his goal was to gather information and internal documents with the hope of publishing information online which might harm LifeWise’s reputation and galvanize parents to oppose local LifeWise Academy chapters in their communities.” This is a mission of public advocacy and education that copyright law protects. In addition, Mr. Parrish’s purpose was noncommercial: far from seeking to replace or compete with LifeWise, he posted the curriculum to encourage others to think carefully before signing their children up for the program. And posting the curriculum doesn’t harm LifeWise—at least not in any way that copyright law was meant to address. Just like copyright doesn’t stop a film critic from using scenes from a movie as part of a devastating review, it doesn’t stop a concerned parent from educating other parents about a controversial religious school program by showing them the actual content of that program.

Early dismissals in copyright cases against fair users are crucial because, although fair use protects lots of important free expression like the commentary and advocacy of Mr. Parrish, fighting for those protections can be ruinously expensive and chilling. The high cost of civil discovery and the risk of astronomical statutory damages—which can reach as high as $150,000 per work in certain cases—can lead would-be fair users to self-censor for fear of invasive legal process and financial ruin.

Early dismissal helps prevent copyright holders from using the threat of expensive, risky lawsuits to silence critics and control public conversations about their works. It also sends a message to others that their right to free expression doesn’t depend on having enough money to defend it in court or having access to help from organizations like EFF. While we are happy to help, we would be even happier if no one needed our help for a problem like this ever again.

When society loses access to critical commentary and the public dialogue it enables, we all suffer. That’s why it is so important that courts prevent copyright law from being used to silence criticism and commentary. We hope the court will do so here, and dismiss LifeWise’s baseless complaint against Mr. Parrish.

Mitch Stoltz

Backyard Privacy in the Age of Drones

1 month 1 week ago

This article was originally published by The Legal Aid Society's Decrypting a Defense Newsletter on August 5, 2024 and is reprinted here with permission.

Police departments and law enforcement agencies are increasingly collecting personal information using drones, also known as unmanned aerial vehicles. In addition to high-resolution photographic and video cameras, police drones may be equipped with myriad spying payloads, such as live-video transmitters, thermal imaging, heat sensors, mapping technology, automated license plate readers, cell site simulators, cell phone signal interceptors and other technologies. Captured data can later be scrutinized with backend software tools like license plate readers and face recognition technology. There have even been proposals for law enforcement to attach lethal and less-lethal weapons to drones and robots. 

Over the past decade or so, police drone use has dramatically expanded. The Electronic Frontier Foundation’s Atlas of Surveillance lists more than 1,500 law enforcement agencies across the US that have been reported to employ drones. The result is that backyards, which are part of the constitutionally protected curtilage of a home, are frequently being captured, either intentionally or incidentally. In grappling with the legal implications of this phenomenon, we are confronted by a pair of U.S. Supreme Court cases from the 1980s: California v. Ciraolo and Florida v. Riley. There, the Supreme Court ruled that warrantless aerial surveillance conducted by law enforcement in low-flying manned aircraft did not violate the Fourth Amendment because there was no reasonable expectation of privacy from what was visible from the sky. Although there are fundamental differences between surveillance by manned aircraft and by drones, some courts have extended the analysis to situations involving drones, shutting the door to federal constitutional challenges.

Yet Americans, legislators, and even judges have long voiced serious worries about the threat of rampant and unchecked aerial surveillance. A couple of years ago, the Fourth Circuit found in Leaders of a Beautiful Struggle v. Baltimore Police Department that a mass aerial surveillance program (using manned aircraft) covering most of the city violated the Fourth Amendment. The exponential surge in police drone use has only heightened the privacy concerns underpinning that and similar decisions. Unlike the manned aircraft in Ciraolo and Riley, drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. Additionally, drones are smaller and easier to operate and can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters effectively functions as notice to those being watched, whereas drones can easily record information surreptitiously.

In response to the concerns regarding drone surveillance voiced by civil liberties groups and others, some law enforcement agencies, like the NYPD, have pledged to abide by internal policies to refrain from warrantless use over private property. But without enforcement mechanisms, those empty promises are easily discarded by officials when they consider them inconvenient, as NYC Mayor Eric Adams did in announcing that drones would, in fact, be deployed to indiscriminately spy on backyard parties over Labor Day.

Barring a seismic shift away from Ciraolo and Riley by the U.S. Supreme Court (which seems nigh impossible given the Fourth Amendment approach of the current members of the bench), protection from warrantless aerial surveillance—and successful legal challenges—will have to come from the states. Indeed, six months after Ciraolo was decided, the California Supreme Court held in People v. Cook that, under the state’s constitution, an individual has a reasonable expectation that police will not conduct warrantless surveillance of their backyard from the air. More recently, other states, such as Hawai’i, Vermont, and Alaska, have similarly relied on their state constitutions’ Fourth Amendment corollaries to find warrantless aerial surveillance improper. Some states have also passed new laws regulating governmental drone use. And at least half a dozen states, including Florida, Maine, Minnesota, Nevada, North Dakota, and Virginia, have statutes requiring warrants (with exceptions) for police drone use.

Law enforcement’s use of drones will only proliferate in the coming years, and drone capabilities continue to evolve rapidly. Courts and legislatures must keep pace to ensure that privacy rights do not fall victim to the advancement of technology.

For more information on drones and other surveillance technologies, please visit EFF’s Street Level Surveillance guide at https://sls.eff.org/.

Hannah Zhao

Geofence Warrants Are 'Categorically' Unconstitutional | EFFector 36.11

1 month 2 weeks ago

School is back in session, so prepare for your first lesson from EFF! Today you'll learn about the latest court ruling on the dangers of geofence warrants, our letter urging Bumble to require opt-in consent to sell user data, and the continued fight against the UN Cybercrime Treaty.

If you'd like future lessons about the fight for digital freedoms, you're in luck! We've got you covered with our EFFector newsletter. You can read the full issue here, or subscribe to get the next one in your inbox automatically. You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube.

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

1 month 2 weeks ago

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections—a crucial liability shield for platforms and anyone else that hosts or shares user-generated content—will not apply. And that legal risk begins the moment a person receives a notice that the content is unlawful, even if they didn’t create the replica and have no way to confirm whether it was authorized or to otherwise verify the claim. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

The bill also includes a safe harbor scheme modeled on the DMCA notice and takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content—a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content I.D. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty – which will add up fast. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, they only have to cough up $1 million if they guess wrong.

All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.  

What is worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.  

NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable. But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusion about the limits of IP protection that we’ve already seen), that’s pretty cold comfort.

These significant flaws should doom the bill, and that’s a shame. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this, but Congress should be considering narrowly-targeted and proportionate proposals to fill in the gaps.  

The NO FAKES Act is neither targeted nor proportionate. It’s also a significant Congressional overreach—the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.  

The best we can say about NO FAKES is that it has provisions protecting individuals with unequal bargaining power in negotiations around use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors have some additional protections, such as a limit on how long their rights can be licensed before they are adults.   

TAKE ACTION

Throw Out the NO FAKES Act and Start Over

But the costs of the bill far outweigh the benefits. NO FAKES creates an expansive and confusing new intellectual property right that lasts far longer than is reasonable or prudent, and has far too few safeguards for lawful speech. The Senate should throw it out and start over. 

Corynne McSherry

Court to California: Try a Privacy Law, Not Online Censorship

1 month 2 weeks ago

In a victory for free speech and privacy, a federal appellate court confirmed last week that parts of the California Age-Appropriate Design Code Act likely violate the First Amendment, and that other parts require further review by the lower court.

The U.S. Court of Appeals for the Ninth Circuit correctly rejected rules requiring online businesses to opine on whether the content they host is “harmful” to children, and then to mitigate such harms. EFF and CDT filed a friend-of-the-court brief in the case earlier this year arguing for this point.

The court also provided a helpful roadmap to legislatures for how to write privacy first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the Act’s age-verification provision. We will continue to argue, in this case and others, that this provision violates the First Amendment rights of children and adults.

The Act, The Rulings, and Our Amicus Brief

In 2022, California enacted its Age-Appropriate Design Code Act (AADC). Three of the law’s provisions are crucial for understanding the court’s ruling.

  1. The Act requires an online business to write a “Data Protection Impact Assessment” for each of its features that children are likely to access. It must also address whether the feature’s design could, among other things, “expos[e] children to harmful, or potentially harmful, content.” Then the business must create a “plan to mitigate” that risk.
  2. The Act requires online businesses to follow enumerated data privacy rules specific to children. These include data minimization, and limits on processing precise geolocation data.
  3. The Act requires online businesses to “estimate the age of child users,” to an extent proportionate to the risks arising from the business’s data practices, or to apply child data privacy rules to all consumers.

In 2023, a federal district court blocked the law, ruling that it likely violates the First Amendment. The state appealed.

EFF’s brief in support of the district court’s ruling argued that the Act’s age-verification provision and vague “harmful” standard are unconstitutional; that these provisions cannot be severed from the rest of the Act; and thus that the entire Act should be struck down. We conditionally argued that if the court rejected our initial severability argument, the privacy principles in the Act could survive the reduced judicial scrutiny applied to such laws and still safeguard people’s personal information. This is especially true given the government’s many substantial interests in protecting data privacy.

The Ninth Circuit affirmed the preliminary injunction as to the Act’s Impact Assessment provisions, explaining that they likely violate the First Amendment on their face. The appeals court vacated the preliminary injunction as to the Act’s other provisions, reasoning that the lower court had not applied the correct legal tests. The appeals court sent the case back to the lower court to do so.

Good News: No Online Censorship

The Ninth Circuit’s decision to prevent enforcement of the AADC’s impact assessments on First Amendment grounds is a victory for internet users of all ages because it ensures everyone can continue to access and disseminate lawful speech online.

The AADC’s central provisions would have required a diverse array of online services—from social media to news sites—to review the content on their sites and consider whether children might view or receive harmful information. EFF argued that this provision imposed content-based restrictions on what speech services could host online and was so vague that it could reach lawful speech that is upsetting, including news about current events.

The Ninth Circuit agreed with EFF that the AADC’s “harmful to minors” standard was vague and likely violated the First Amendment for several reasons, including because it “deputizes covered businesses into serving as censors for the State.”

The court ruled that these AADC censorship provisions were subject to the highest form of First Amendment scrutiny because they restricted content online, a point EFF argued. The court rejected California’s argument that the provisions should be subjected to reduced scrutiny under the First Amendment because they sought to regulate commercial transactions.

“There should be no doubt that the speech children might encounter online while using covered businesses’ services is not mere commercial speech,” the court wrote.

Finally, the court ruled that the AADC’s censorship provisions likely failed under the First Amendment because they are not narrowly tailored and California has less speech-restrictive ways to protect children online.

EFF is pleased that the court saw AADC’s impact assessment requirements for the speech restrictions that they are. With those provisions preliminarily enjoined, everyone can continue to access important, lawful speech online.

More Good News: A Roadmap for Privacy-First Laws

The appeals court did not rule on whether the Act’s data privacy provisions could survive First Amendment review. Instead, it directed the lower court in the first instance to apply the correct tests.

In doing so, the appeals court provided guideposts for how legislatures can write data privacy laws that survive First Amendment review. Spoiler alert: enact a “privacy first” law, without unlawful censorship provisions.

Dark patterns. Some privacy laws prohibit user interfaces that have the intent or substantial effect of impairing autonomy and choice. The appeals court reversed the preliminary injunction against the Act’s dark patterns provision, because it is unclear whether dark patterns are even protected speech, and if so, what level of scrutiny they would face.

Clarity. Some privacy laws require businesses to use clear language in their published privacy policies. The appeals court reversed the preliminary injunction against the Act’s clarity provision, because there wasn’t enough evidence to say whether the provision would run afoul of the First Amendment. Indeed, “many” applications will involve “purely factual and non-controversial” speech that could survive review.

Transparency. Some privacy laws require businesses to disclose information about their data processing practices. In rejecting the Act’s Impact Assessments, the appeals court rejected an analogy to the California Consumer Privacy Act’s unproblematic requirement that large data processors annually report metrics about consumer requests to access, correct, and delete their data. Likewise, the court reserved judgment on the constitutionality of two of the Act’s own “more limited” reporting requirements, which did not require businesses to opine on whether third-party content is “harmful” to children.

Social media. Many privacy laws apply to social media companies. While EFF is second to none in defending the First Amendment right to moderate content, we nonetheless welcome the appeals court’s rejection of the lower court’s “speculat[ion]” that the Act’s privacy provisions “would ultimately curtail the editorial decisions of social media companies.” Some right-to-curate objections to privacy laws might best be resolved through “as-applied claims” in specific contexts, instead of facial challenges.

Ninth Circuit Punts on the AADC’s Age-Verification Provision

The appellate court left open an important issue for the trial court to take up: whether the AADC’s age-verification provision violates the First Amendment rights of adults and children by blocking them from lawful speech, frustrating their ability to remain anonymous online, and chilling their speech to avoid danger of losing their online privacy.

EFF also argued in our Ninth Circuit brief that the AADC’s age-verification provision was similar to many other laws that courts have repeatedly found to violate the First Amendment.

The Ninth Circuit missed a great opportunity to confirm that the AADC’s age-verification provision violated the First Amendment. The court didn’t pass judgment on the provision, but rather ruled that the district court had failed to adequately assess the provision to determine whether it violated the First Amendment on its face.

As EFF’s brief argued, the AADC’s age-estimation provision is pernicious because it restricts everyone’s access to lawful speech online, by requiring adults to show proof that they are old enough to access lawful content the AADC deems harmful.

We look forward to the district court recognizing the constitutional flaws of the AADC’s age-verification provision once the issue is back before it.

Adam Schwartz

EFF and Partners to EU Commissioner: Prioritize User Rights, Avoid Politicized Enforcement of DSA Rules

1 month 3 weeks ago

EFF, Access Now, and Article 19 have written to EU Commissioner for Internal Market Thierry Breton calling on him to clarify his understanding of “systemic risks” under the Digital Services Act, and to set a high standard for the protection of fundamental rights, including freedom of expression and of information. The letter was in response to Breton’s own letter addressed to X, in which he urged the platform to take action to ensure compliance with the DSA in the context of far-right riots in the UK as well as the conversation between US presidential candidate Donald Trump and X CEO Elon Musk, which was scheduled to be, and was in fact, live-streamed hours after his letter was posted on X. 

Clarification is necessary because Breton’s letter otherwise reads as a serious overreach of EU authority, and transforms the systemic risks-based approach into a generalized tool for censoring disfavored speech around the world. By specifically referencing the streaming event between Trump and Musk on X, Breton’s letter undermines one of the core principles of the DSA: to ensure fundamental rights protections, including freedom of expression and of information, a principle noted in Breton’s letter itself.

The DSA Must Not Become A Tool For Global Censorship

The letter plays into some of the worst fears of critics of the DSA that it would be used by EU regulators as a global censorship tool rather than addressing societal risks in the EU. 

The DSA requires very large online platforms (VLOPs) to assess the systemic risks that stem from “the functioning and use made of their services in the [European] Union.” VLOPs are then also required to adopt “reasonable, proportionate and effective mitigation measures,” “tailored to the systemic risks identified.” The emphasis on systemic risks was intended, at least in part, to alleviate concerns that the DSA would be used to address individual incidents of dissemination of legal, but concerning, online speech. It was one of the limitations that civil society groups concerned with preserving a free and open internet worked hard to incorporate.

Breton’s letter troublingly states that he is currently monitoring “debates and interviews in the context of elections” for the “potential risks” they may pose in the EU. But such debates and interviews with electoral candidates, including the Trump-Musk interview, are clearly matters of public concern—the types of publication that are deserving of the highest levels of protection under the law. Even if one has concerns about a specific event, dissemination of information that is highly newsworthy, timely, and relevant to public discourse is not in itself a systemic risk.

People seeking information online about elections have a protected right to view it, even through VLOPs. The dissemination of this content should not be within the EU’s enforcement focus under the threat of non-compliance procedures, and risks associated with such events should be analyzed with care. Yet Breton’s letter asserts that such publications are actually under EU scrutiny. And it is entirely unclear what proactive measures a VLOP should take to address a future speech event without resorting to general monitoring and disproportionate content restrictions. 

Moreover, Breton’s letter fails to distinguish between “illegal” and “harmful content” and implies that the Commission favors content-specific restrictions of lawful speech. The European Commission has itself recognized that “harmful content should not be treated in the same way as illegal content.” Breton’s tweet that accompanies his letter refers to the “risk of amplification of potentially harmful content.” His letter seems to use the terms interchangeably. Importantly, this is not just a matter of differences in the legal protections for speech between the EU, the UK, the US, and other legal systems. The distinction, and the protection for legal but harmful speech, is a well-established global freedom of expression principle. 

Lastly, we are concerned that the Commission is reaching beyond its geographic mandate. It is not clear how events that occur outside the EU are linked to risks and societal harms to people who live and reside within the EU, nor what actions the EU Commission expects VLOPs to take to address those risks. The letter itself admits that the assessment is still in process, and the harm merely a possibility. EFF and partners within the DSA Human Rights Alliance have long advocated for a human rights-centered enforcement of the DSA that also considers the DSA’s global effects. It is time for the Commission to prioritize its enforcement actions accordingly.

Read the full letter here.

Christoph Schmon

EFF Benefit Poker Tournament at DEF CON 32

1 month 3 weeks ago

“Shuffle up and deal!” announced Cory Doctorow and the sound of playing cards and poker chips filled the room.

The sci-fi author and EFF special advisor was this year’s Celebrity Emcee for the 3rd annual EFF Benefit Poker Tournament, an official contest at the DEF CON computer hacking conference hosted by Red Queen Dynamics CEO and EFF board member Tarah Wheeler. Celebrity Knockout Guests were Runa Sandvik, MalwareJake Williams, and Deviant Ollam.

Forty-six EFF supporters and friends played in the charity tournament on Friday, August 9 in the Horseshoe Poker Room at the heart of the Las Vegas Strip.

Every entrant received a special deck of cybersecurity playing cards. The original concept was suggested by information security attorney Kendra Albert, designed by Melanie “1dark1” Warner of Hotiron Collective, and made by Tarah Wheeler.

The day started with a poker clinic run by Tarah’s father, professional poker player Mike Wheeler. Mike shared how he taught Tarah to play poker with jelly beans, then offered tips to all listeners on when to check, when to raise, and how that flush draw might not be as solid as you think.

At noon, Cory Doctorow kicked off the tournament and the fun began. This year’s tournament featured a number of celebrity guests, each with a price on their head. Whichever player knocked one of them out of the tournament would win a special prize.

J.D. Sterling knocked out poker pro Mike Wheeler, collecting the bounty on his head posted by Tarah and winning a $250 donation to EFF in his name. Mike swears he was ahead until the river.

MalwareJake was the next celebrity to fall, with fish knocking him out just before the break to win a scrolling-message LED hat.

Runa Sandvik was knocked out by Jacen Kohler, who won an assortment of Norwegian milk chocolate.

And Tarah Wheeler showed the skills and generosity that led her to start this charity tournament back in 2022. She knocked out Deviant, winning the literal shirt off his back, a one-of-a-kind Sci-Hub t-shirt that she later auctioned off for EFF. She also knocked out Cory Doctorow, winning a collection of his signed books that she also gave away. She bubbled and finished 9th in the tournament, just missing the final table of 8.

After Tarah fell, the final table assembled for one more hour of poker. The final five players were J.D. Sterling, Eric Hammersmark, n0v, Sid, and Ed Bailey, with Ed as the big stack.

J.D. was the first to fall when his K8 lost to Ed Bailey’s QT after a Queen flopped early.

Eric busted out next, leaving the final three. They traded blinds back and forth for a while until Ed, with the big stack, started pushing, joking that he had a plane to catch.

One of his raises was eventually called by Sid. Ed flopped top pair and tried to keep Sid on the line, but Sid wriggled out, landing a King on the river to beat Ed’s pair, double up, and eat into Ed’s chip lead.

Ed kept pushing, but ran into n0v’s pocket queens, taking another third of his chips, and then eventually busting out to pocket aces from Sid. Ed was satisfied with coming in third, and after shaking hands quickly took off. It turns out he actually did have a plane to catch.

The final two, n0v and Sid, traded blinds until n0v went all-in pre-flop with A5. Sid called and showed pocket 9s. The crowd cheered as the board showed no aces, giving Sid the hand and the tournament.

Cory presented Sid with the tournament’s traditional jellybean trophy and a treasure chest of precious stones from Tarah’s personal collection.

It was an exciting afternoon of competition raising over $16,000 to support civil liberties and human rights online. We hope you join us next year as we continue to grow the tournament. Follow Tarah and EFF to make sure we have chips and a chair for you at DEF CON 33.

Daniel de Zeeuw

Digital License Plates and the Deal That Never Had a Chance

1 month 3 weeks ago

Location and surveillance technology permeates the driving experience. Setting aside external technology like license plate readers, there is some form of internet-connected service or surveillance capability built into or on many cars, from GPS tracking to oil-change notices. This is already a dangerous situation for many drivers and passengers, and a bill in California requiring GPS-tracking in digital license plates would put us further down this troubling path. 

In 2022, EFF fought along with other privacy groups, domestic violence organizations, and LGBTQ+ rights organizations to prevent the use of GPS-enabled technology in digital license plates. A.B. 984, authored by State Assemblymember Lori Wilson and sponsored by digital license plate company Reviver, originally would have allowed for GPS trackers to be placed in the digital license plates of personal vehicles. As we have said many times, location data is very sensitive information, because where we go can also reveal things we'd rather keep private even from others in our household. Ultimately, advocates struck a deal with the author to prohibit location tracking in passenger cars, and this troubling flaw was removed. Governor Newsom signed A.B. 984 into law. 

Now, not even two years later, the state's digital license plate vendor, Reviver, and Assemblymember Wilson have filed A.B. 3138, which directly undoes the deal from 2022 and explicitly calls for location tracking in digital license plates for passenger cars. 

To best protect consumers, EFF urges the legislature to not approve A.B. 3138. 

Consumers Could Face Serious Concerns If A.B. 3138 Becomes Law

In fact, our concerns about trackers in digital plates are stronger than ever. Recent developments have made location data even more ripe for misuse.

  • People traveling to California from a state that criminalizes abortions may be unaware that the rideshare car they are in is tracking their trip to a Planned Parenthood via its digital license plate. This trip may generate location data that can be used against them in a state where abortion is criminalized.
  • Unsupportive parents of queer youth could use GPS-loaded plates to monitor or track whether teens are going to local support centers or events.
  • U.S. Immigration and Customs Enforcement (ICE) could use GPS surveillance technology to locate immigrants, as it has done by exploiting ALPR location data exchanged between local police departments and ICE to track immigrants’ movements. The invasiveness of vehicle location technology is part of a broad range of surveillance technology in the hands of ICE to fortify its ever-growing “virtual wall.”
  • There are also serious implications in domestic violence situations, where GPS tracking has been investigated and found to be used as a tool of abuse and coercion by abusive partners. Most recently, two Kansas City families are jointly suing the company Spytec GPS after its technology was used in a double-murder suicide, in which a man used GPS trackers to find and kill his ex-girlfriend, her current boyfriend, and then himself. The families say the lawsuit is, in part, to raise awareness about the danger of making this technology and location information more easily available. There's no reason to make tracking any easier by embedding it in state-issued plates. 
We Urge the Legislature to Reject A.B. 3138  

Shortly after California approved Reviver to provide digital license plates for commercial vehicles under A.B. 984, the company suffered a security breach that let hackers track the real-time GPS location of vehicles with Reviver digital plates. Privacy issues aside, this summer the state of Michigan also terminated its two-year-old contract with Reviver over the company’s failure to follow state law and its contractual obligations, forcing 1,700 Michigan drivers to go back to traditional metal license plates.

Reviver is the only company currently authorized by the state to sell digital plates in California, and it is the primary advocate for allowing tracking in passenger vehicle plates. The company says its goal is to modernize personalization and safety through digital license plate technology for passenger vehicles. But it has not proven itself up to the responsibility of protecting this data.

A.B. 3138 functionally gives drivers one choice of digital license plate vendor: a vendor that has already failed once to competently secure the location data its products collect, and has since failed to meet basic contractual obligations with a state agency. California lawmakers should think carefully about the clear dangers of vehicle location tracking, and about whether this company can be trusted to protect the sensitive location information of vulnerable populations, or of any Californian.

Hayley Tsukayama

2 Fast 2 Legal: How EFF Helped a Security Researcher During DEF CON 32

1 month 3 weeks ago

This year, like every year, EFF sent a variety of lawyers, technologists, and activists to the summer security conferences in Las Vegas to help foster support for the security research community. While we were at DEF CON 32, security researcher Dennis Giese received a cease-and-desist letter on Thursday afternoon targeting his talk scheduled for the next morning. EFF lawyers met with Dennis almost immediately, and by Sunday he was able to give his talk. Here’s what happened, and why the fight for coders’ rights matters.

Throughout the year, we receive a number of inquiries from security researchers who seek to report vulnerabilities or present on technical exploits and want to understand the legal risks involved. Enter the EFF Coders’ Rights Project, designed to help programmers, tinkerers, and innovators who wish to responsibly explore technologies and report on those findings. Our Coders’ Rights lawyers counsel many of those who reach out to us on everything from mitigating legal risk in their talks, to reporting vulnerabilities they’ve found, to responding to legal threats. The number of inquiries often ramps up in the months leading up to “hacker summer camp,” but we usually have at least a couple of weeks to help and advise the researcher.

In this case, however, we did our work on an extremely short schedule.

Dennis is a prolific researcher who has presented his work at conferences around the world. At DEF CON, one of the talks he planned with a co-presenter involved digital locks, including products from the vendor Digilock. In the months leading up to the presentation, Dennis shared his findings with Digilock and sought to discuss potential remediations. Digilock expressed interest in those conversations, so it came as a surprise when, on the eve of the presentation, the company sent him a cease-and-desist letter raising a number of baseless legal claims.

Because we had lawyers on the ground at DEF CON, Dennis was able to connect with EFF soon after receiving the cease-and-desist letter, and, along with former EFF attorney and current Special Counsel to EFF Kurt Opsahl, we agreed to represent him in responding to Digilock. Over the course of forty-eight hours, we met with Digilock’s lawyers and ultimately facilitated a productive conversation between Dennis and Digilock’s CEO.

Good-faith security researchers increase security for all of us.

To its credit, Digilock agreed to rescind the cease-and-desist letter and also provided Dennis with useful information about its plans to address vulnerabilities discussed in his research.

Dennis was able to give the talk, with this additional information, on Sunday, the last day of DEF CON.

We are proud we could help Dennis navigate what can be a scary situation of receiving last-minute legal threats, and are happy that he was ultimately able to give his talk. Good-faith security researchers like Dennis increase security for all of us who use digital devices. By identifying and disclosing vulnerabilities, hackers improve security for every user who depends on information systems for their daily life and work. If we do not know about security vulnerabilities, we cannot fix them, and we cannot build better computer systems in the future. Dennis’s research was not only legal; it demonstrated real-world problems that the companies involved need to address.

Just as important as discovering security vulnerabilities is reporting the findings, so that users can protect themselves, vendors can avoid introducing vulnerabilities in the future, and other security researchers can build on that information. When researchers publicly explain these sorts of attacks and propose remedies, other companies that make similar devices can also benefit by fixing the same vulnerabilities. In discovering and reporting their findings, security researchers like Dennis help build a safer future for all of us.

However, this incident reminds us that even good-faith hackers often face legal challenges meant to silence them from publicly sharing the legitimate fruits of their labor. The Coders’ Rights Project is part of our longstanding work to protect researchers through legal defense, education, amicus briefs, and involvement in the community. Through it, we hope to promote innovation and safeguard the rights of curious tinkerers and hackers everywhere.

We must continue to fight for the right to share this research, which leads to better security for us all. If you are a security researcher in need of legal assistance or have concerns before giving a talk, do not hesitate to reach out to us. If you'd like to support more of this work, please consider donating to EFF.

Hannah Zhao

EFF Honored as DEF CON 32 Uber Contributor

1 month 3 weeks ago

At DEF CON 32 this year, the Electronic Frontier Foundation became the first organization to be given the Uber Contributor award. This award recognizes EFF’s work in education and litigation, naming us “Defenders of the Hacker Spirit.”

DEF CON Uber Contributor Award

EFF Staff Attorney Hannah Zhao and Staff Technologist Cooper Quintin accepting the Uber Contributor Award from DEF CON founder Jeff Moss

The Uber Contributor Award is an honor created three years ago to recognize people and groups who have made exceptional contributions to the infosec and hacker community at DEF CON. Our connection with DEF CON runs deep, dating back over 20 years. The conference has become a vital part of keeping EFF’s work grounded in the ongoing issues faced by the creative builders and experimenters who keep tech secure (and fun).

EFF Staff Attorney Hannah Zhao (left) and Staff Technologist Cooper Quintin (right) with the Uber Contributor Award (center)

Every year attendees and organizers show immense support and generosity in return, but this year exceeded all expectations. EFF raised more funds than in any previous year at hacker summer camp—the three annual Las Vegas hacker conferences, BSidesLV, Black Hat USA, and DEF CON. We also gained over 1,000 new and renewing members who support us year-round. This community’s generosity fuels our work to protect encrypted messaging, fight back against illegal surveillance, and defend your right to hack and experiment. We’re honored to be welcomed so warmly year after year.

Just this year, we saw another last-minute cease-and-desist letter sent to a security researcher over their DEF CON talk. EFF attorneys from our Coders’ Rights Project attend every year, and they were able to jump into action to protect the speaker. While the team puts out fires at DEF CON for one week in August, its year-round support of coders is made possible by the continued support of the wider community. Anyone facing intimidation or spurious legal threats can always reach out for support at info@eff.org.

We are deeply grateful for this honor and the unwavering support from DEF CON. Thank you to everyone who supported EFF at the membership booth, participated in our Poker Tournament and Tech Trivia, or checked out our talks. 

We remain committed to meeting the needs of coders and will continue to live up to this award, ensuring the hacker spirit thrives despite an increasingly hostile landscape. We look forward to seeing you again next year!

Rory Mir

In These Five Social Media Speech Cases, Supreme Court Set Foundational Rules for the Future

1 month 3 weeks ago

The U.S. Supreme Court addressed government’s various roles with respect to speech on social media in five cases reviewed in its recently completed term. The through-line of these cases is a critically important principle that sets limits on government’s ability to control the online speech of people who use social media, as well as the social media sites themselves: internet users’ First Amendment rights to speak on social media—whether by posting or commenting—may be infringed by the government if it interferes with content moderation, but will not be infringed by the independent decisions of the platforms themselves.

As a general overview, the NetChoice cases, Moody v. NetChoice and NetChoice v. Paxton, looked at government’s role as a regulator of social media platforms. The issue was whether state laws in Texas and Florida that prevented certain online services from moderating content were constitutional in most of their possible applications. The Supreme Court did not rule on that question and instead sent the cases back to the lower courts to reexamine NetChoice’s claim that the statutes had few possible constitutional applications.

The court did, importantly and correctly, explain that at least Facebook’s Newsfeed and YouTube’s Homepage are examples of platforms exercising their own First Amendment rights in deciding how to display and organize content, and that the laws could not constitutionally be applied to Newsfeed, Homepage, and similar services, a preliminary step in determining whether the laws were facially unconstitutional.

Lindke v. Freed and Garnier v. O’Connor-Ratcliffe looked at the government’s role as a social media user who has an account and wants to use its full features, including blocking other users and deleting comments. The Supreme Court instructed the lower courts to first look to whether a government official has the authority to speak on behalf of the government, before looking at whether the official used their social media page for governmental purposes, conduct that would trigger First Amendment protections for the commenters.

Murthy v. Missouri, the jawboning case, looked at the government’s mixed role as a regulator and user, in which the government may be seeking to coerce platforms to engage in unconstitutional censorship or may also be a user simply flagging objectionable posts as any user might. The Supreme Court found that none of the plaintiffs had standing to bring the claims because they could not show that their harms were traceable to any action by the federal government defendants.

We’ve analyzed each of the Supreme Court decisions, Moody v. NetChoice (decided with NetChoice v. Paxton), Murthy v. Missouri, and Lindke v. Freed (decided with Garnier v. O’Connor-Ratcliffe), in depth.

But some common themes emerge when all five cases are considered together.

  • Internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves. This principle, which EFF has advocated for many years, is evident in each of the rulings. In Lindke, the Supreme Court recognized that government officials, if vested with and exercising official authority, could violate the First Amendment by deleting a user’s comments or blocking them from commenting altogether. In Murthy, the Supreme Court found that users could not sue the government for violating their First Amendment rights unless they could show that government coercion, rather than the social media platform’s own editorial decision, led to their content being taken down or obscured. And in the NetChoice cases, the Supreme Court explained that social media platforms typically exercise their own protected First Amendment rights when they edit and curate which posts they show to their users, and that the government may violate the First Amendment when it requires them to publish or amplify posts.

  • Underlying these rulings is the Supreme Court’s long-awaited recognition that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see them, they decide to amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. This is seen in the Supreme Court’s emphasis in Murthy that jawboning is not actionable if the content moderation was the independent decision of the platform rather than coerced by the government. And a similar recognition of independent decision-making underlies the Supreme Court’s First Amendment analysis in the NetChoice cases. The Supreme Court has now thankfully moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Supreme Court used that language to describe the process in last term’s case, Twitter v. Taamneh.

  • This term’s cases also confirm that traditional First Amendment rules apply to social media. In Lindke, the Supreme Court recognized that when government controls the comments components of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. In the NetChoice cases, the Supreme Court found that platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Plenty of legal issues around social media remain to be decided. But the 2023-24 Supreme Court term has set out important speech-protective rules that will serve as the foundation for many future rulings. 

 

Related Cases: PETA v. Texas A&M; NetChoice Must-Carry Litigation
David Greene

EFF Presses Federal Circuit To Make Patent Case Filings Public

1 month 3 weeks ago

Federal court records belong to everyone. But one federal court in Texas lets patent litigants treat courts like their own private tribunals, effectively shutting out the public.

When EFF tried to intervene and push for greater access to a patent dispute earlier this year, the U.S. District Court for the Eastern District of Texas rejected our effort. EFF appealed, and last week we filed our opening brief with the U.S. Court of Appeals for the Federal Circuit.

EFF is not the only one concerned by the district court’s decision. Several organizations filed friend-of-the-court briefs in support of our appeal. Below, we explain the stakes of this case and why others are concerned about the implications of the district court’s secrecy.  

Courts too often let patent litigants shut out the public

Secrecy in patent litigation is an enduring problem, and EFF has repeatedly pushed for greater transparency by intervening in patent lawsuits to vindicate the public’s right to access judicial records.

But sometimes, courts don’t let us—and instead prioritize corporations’ confidentiality interests over the public’s right to access records filed in the public’s courts.

That’s exactly what happened in Entropic Communications, LLC v. Charter Communications, Inc. Entropic, a semiconductor provider, sued Charter, one of the nation’s largest media companies, for allegedly infringing six Entropic patents covering cable modem technology. Charter argued that it had a license defense because the patents cover technology required to comply with the industry-leading cable data transmission standard, the Data Over Cable Service Interface Specification (DOCSIS). Its argument raises a core patent law question: when is a patent “essential” to a technical standard, and thus encumbered by licensing commitments?

Many of the documents explaining the parties’ positions on this important issue are either completely sealed or heavily redacted, making it difficult for the public to understand their arguments. Worse, the parties themselves decided which documents to prevent the public from viewing.

District court rejects EFF’s effort to make case more transparent

The kind of collusive secrecy in this case is illegal—courts are required to scrutinize every line that a party seeks to redact, to ensure that nothing is kept secret unless it satisfies a rigorous balancing test. Under that test, proponents of secrecy need to articulate a specific reason to seal the document sufficient to outweigh the strong presumption that all filings will be public. The court didn’t do any of that here. Instead, it allowed the parties to seal all documents they deemed “confidential” under a protective order, which applies to documents produced in discovery.

That’s wrong: protective orders do not control whether court filings may be sealed. But unfortunately, the district court’s misuse of these protective orders is extremely common in patent cases in the Eastern District of Texas. In fact, the protective order in this case closely mirrors the “model protective order” created by the court for use in patent cases, which also allows parties to seal court filings free from judicial scrutiny or even the need to explain why they did so.

Those concerns prompted EFF in March to ask the court to allow it to intervene and challenge the sealing practices. The court ruled in May that EFF could not intervene in the case, leaving no one to advocate for the public’s right of access. It further ruled that the sealing practices were legal because local rules and the protective order authorized the parties to broadly make these records secret. The end result? Excessive secrecy that wrongfully precludes public scrutiny over patent cases and decisions in this district.

The district court’s errors in this case create a bad precedent that undermines a cornerstone of the American justice system: judicial transparency. Without transparency, the public cannot ensure that its courts are acting fairly, which erodes public trust in the judiciary.

EFF’s opening brief explains the district court’s errors

EFF disagreed with the district court’s ruling and last week filed its opening brief challenging the decision. As we explained:

The public has presumptive rights under the common law and First Amendment to access summary judgment briefs and related materials filed by Charter and Entropic. Rather than protect public access, the district court permitted the parties to file vast swaths of material under seal, some of which remains completely secret or is so heavily redacted that EFF cannot understand legal arguments and evidence used in denying Charter’s license defense.

Moreover, the court’s ruling that EFF could not even seek to unseal the documents in the first place sets a dangerous precedent. If the decision is upheld, many court dockets, including those with significant historic and newsworthy materials, could become permanently sealed merely because the public did not try to intervene and unseal records while the case was open.

EFF’s brief argued:

The district court ignored controlling law and held EFF to an arbitrary timeliness standard that the Fifth Circuit has explicitly rejected—including previously reversing the district court here. Neither controlling law nor the record support the district court’s conclusion that Charter and Entropic would be prejudiced by EFF’s intervention. Troublingly, the district court’s reasoning for denying EFF’s intervention could inhibit the public from coming forward to challenge secrecy in all closed cases.

A successful appeal will open this case to the public and help everyone better understand patent disputes that are filed in the Eastern District of Texas. EFF looks forward to vindicating the public’s right to access records on appeal.

Court transparency advocates file briefs supporting EFF’s appeal

The district court’s ruling raised concerns among the broader transparency community, and multiple organizations filed friend-of-the-court briefs in support of EFF’s appeal.

The Reporters Committee for Freedom of the Press and 19 media organizations, including the New York Times and ProPublica, filed a brief arguing that the district court’s decision to reject EFF’s intervention could jeopardize access to court records in long-closed cases, like those that previously revealed Purdue Pharma’s efforts to boost sales of OxyContin and mislead physicians about the drug’s addiction risks. The brief details several other high-profile instances in which sealed court records led to criminal convictions or revealed efforts to cover up the sale of defective products.

“To protect just the sort of journalism described above, the overwhelming weight of authority holds that the press and public may intervene to unseal judicial records months, years, or even decades later—including, as here, where the parties might have hoped a case was over,” the brief argues. “The district court’s contrary ruling was error.”

A group of legal scholars from Stanford Law and the University of California, Berkeley, School of Law filed a brief arguing that the district court inappropriately allowed the parties to decide how to conceal many of the facts in this case via the protective order. The brief, relying on empirical research the scholars undertook to review millions of court dockets, argues that the district court’s secrecy here is part of a larger problem of lax oversight by judges, who too often defer to litigants’ desire to make as much secret as possible.

“Instead of upholding the public’s presumptive right of access to those materials, the court again deferred to the parties’ self-interested desire for secrecy,” the brief argues. “That abdication of judicial duty, both in entering the protective order and in sealing judicial records, not only reflects a stubborn refusal to abide by the rulings of the Fifth Circuit; it represents a stunning disregard for the public’s interest in maintaining an open and transparent court system.”

A third brief filed by Public Citizen and Public Justice underscored the importance of allowing the public to push for greater transparency in sealed court cases. Both organizations actively intervene in court cases to unseal records as part of their broader advocacy to protect the public. Their brief argues that allowing EFF to intervene in the case furthers the public’s longstanding ability to understand and oversee the judicial system. The brief argues:

The public’s right of access to courts is central to the American legal system. Widespread sealing of court records cuts against a storied history of presumptive openness to court proceedings rooted in common law and the First Amendment. It also inhibits transparency in the judicial process, limiting the public’s ability to engage with and trust courts’ decision making.

EFF is grateful for the support these organizations and individuals provided, and we look forward to vindicating the public’s rights of access in this case.

Related Cases: Entropic Communications, LLC v. Charter Communications, Inc.
Tori Noble

EFFecting Change: Reproductive Justice in the Digital Age

1 month 3 weeks ago

Please join EFF for the next segment of EFFecting Change, our newest livestream series, diving into topics near and dear to our hearts. 

August 28: Reproductive Justice in the Digital Age

This summer marks the two-year anniversary of the Dobbs decision overturning Roe v. Wade. Join EFF for a livestream discussion about restrictions on reproductive healthcare and the choices people seeking an abortion must face in a digital age where everything is connected and surveillance is rampant. Learn what’s happening across the United States and how you can get involved with our panel featuring EFF Staff Technologist Daly Barnett, EFF Associate Director of Legislative Activism Hayley Tsukayama, EFF Staff Attorney Jennifer Pinsof, Director of Research and Policy at the Surveillance Resistance Lab Cynthia Conti-Cook, and community organizer Adri Perez.



October 17: How to Protest with Privacy in Mind

Do you know what to do if you’re subjected to a search or arrest at a protest? Join EFF for a livestream discussion about how to protect your electronic devices and digital assets before, during, and after a demonstration. Learn how you can avoid confiscation or forced deletion of media, and keep your movements and associations private.


We hope you and your friends can join us live for both events! Be sure to spread the word, and share our past livestreams. Please note that all future events will be recorded for later viewing.

Check out the first segment of EFFecting Change, “The U.S. Supreme Court Takes on the Internet,” by watching the recording on our YouTube page.

Melissa Srago

Digital Apartheid in Gaza: Big Tech Must Reveal Their Roles in Tech Used in Human Rights Abuses

1 month 3 weeks ago

This is part two of an ongoing series. Part one, on unjust content moderation, is here.

Since the start of the Israeli military response to Hamas’ deadly October 7 attack, U.S.-based companies like Google and Amazon have been under pressure to reveal more about the services they provide and the nature of their relationships with the Israeli forces engaging in the military response. 

We agree. Without greater transparency, the public cannot tell whether these companies are complying with human rights standards—both those set by the United Nations and those they have publicly set for themselves. We know that this conflict has resulted in alleged war crimes and has involved massive, ongoing surveillance of civilians and refugees living under what international law recognizes as an illegal occupation. That kind of surveillance requires significant technical support and it seems unlikely that it could occur without any ongoing involvement by the companies providing the platforms.  

Google’s Human Rights statement claims that “In everything we do, including launching new products and expanding our operations around the globe, we are guided by internationally recognized human rights standards. We are committed to respecting the rights enshrined in the Universal Declaration of Human Rights and its implementing treaties, as well as upholding the standards established in the United Nations Guiding Principles on Business and Human Rights (UNGPs) and in the Global Network Initiative Principles (GNI Principles).” Google goes further in the case of AI technologies, promising not to design or deploy AI in technologies that are likely to facilitate injuries to people, gather or use information for surveillance, or be used in violation of human rights, or even where the use is likely to cause overall harm.

Amazon states that it is “Guided by the United Nations Guiding Principles on Business and Human Rights,” and that its “approach on human rights is informed by international standards; we respect and support the Core Conventions of the International Labour Organization (ILO), the ILO Declaration on Fundamental Principles and Rights at Work, and the UN Universal Declaration of Human Rights.”

It is time for Google and Amazon to tell the truth about the use of their technologies in Gaza so that everyone can see whether their human rights commitments are real or simply empty promises.

Concerns about Google and Amazon Facilitating Human Rights Abuses  

The Israeli government has long procured surveillance technologies from corporations based in the United States. Most recently, an August investigation by +972 and Local Call revealed that the Israeli military has been storing intelligence information on the Amazon Web Services (AWS) cloud because the scale of data collected through mass surveillance of Palestinians in Gaza grew too large for military servers alone. The same article reported that the commander of Israel’s Center of Computing and Information Systems unit—responsible for providing data processing for the military—confirmed in an address to military and industry personnel that the Israeli army had been using cloud storage and AI services provided by civilian tech companies, with the logos of AWS, Google Cloud, and Microsoft Azure appearing in the presentation.

This is not the first time Google and Amazon have been involved in providing civilian tech services to the Israeli military, nor is it the first time that questions have been raised about whether that technology is being used to facilitate human rights abuses. In 2021, Google and Amazon Web Services signed a $1.2 billion joint contract with the Israeli military called Project Nimbus to provide cloud services and machine learning tools located within Israel. In an official announcement for the partnership, the Israeli Finance Ministry said that the project sought to “provide the government, the defense establishment and others with an all-encompassing cloud solution.” Under the contract, Google and Amazon reportedly cannot prevent particular agencies of the Israeli government, including the military, from using its services. 

Not much is known about the specifics of Nimbus. Google has publicly stated that the project is not aimed at military uses, yet the Israeli military publicly credits Nimbus with assisting it in conducting the war. Reports note that the project involves Google establishing a secure instance of the Google Cloud in Israel. According to Google documents from 2022, Google’s Cloud services include object tracking, AI-enabled face recognition and detection, and automated image categorization. Google signed a new consulting deal with the Israeli Ministry of Defense based around the Nimbus platform in March 2024, so Google can’t claim it’s simply caught up in circumstances that changed after 2021.

Alongside Project Nimbus, an anonymous Israeli official reported that the Israeli military deploys face recognition dragnets across the Gaza Strip using two tools that have facial recognition/clustering capabilities: one from Corsight, which is a "facial intelligence company," and the other built into the platform offered through Google Photos. 

Clarity Needed 

Based on the limited information available, there is clearly cause for concern and a need for the companies to clarify their roles.

For instance, Google Photos is a general-purpose service and some of the pieces of Project Nimbus are non-specific cloud computing platforms. EFF has long maintained that the misuse of general-purpose technologies alone should not be a basis for liability. But, as with Cisco’s development of a specific module of China’s Golden Shield aimed at identifying the Falun Gong (currently pending in litigation in the U.S. Court of Appeals for the Ninth Circuit), companies should not intentionally provide specific services that facilitate human rights abuses. They must also not willfully blind themselves to how their technologies are being used. 

In short, if their technologies are being used to facilitate human rights abuses, whether in Gaza or elsewhere, these tech companies need to publicly demonstrate how they are adhering to their own Human Rights and AI Principles, which are based in international standards. 

We (and the whole world) are waiting, Google and Amazon. 

Paige Collings

Federal Appeals Court Finds Geofence Warrants Are “Categorically” Unconstitutional

1 month 3 weeks ago

In a major decision on Friday, the federal Fifth Circuit Court of Appeals held that geofence warrants are “categorically prohibited by the Fourth Amendment.” Closely following arguments EFF has made in a number of cases, the court found that geofence warrants constitute the sort of “general, exploratory rummaging” that the drafters of the Fourth Amendment intended to outlaw. EFF applauds this decision because it is essential that every person feels they can simply take their cell phone out into the world without fear of becoming a criminal suspect because their location data was swept up in an open-ended digital dragnet.

The new Fifth Circuit case, United States v. Smith, involved an armed robbery and assault of a US Postal Service worker at a post office in Mississippi in 2018. After several months of investigation, police had no identifiable suspects, so they obtained a geofence warrant covering a large geographic area around the post office for the hour surrounding the crime. Google responded to the warrant with information on several devices, ultimately leading police to the two defendants.

On appeal, the Fifth Circuit reached several important holdings.

First, it determined that under the Supreme Court’s landmark ruling in Carpenter v. United States, individuals have a reasonable expectation of privacy in the location data implicated by geofence warrants. As a result, the court broke from the Fourth Circuit’s deeply flawed decision last month in United States v. Chatrie, noting that although geofence warrants can be more “limited temporally” than the data sought in Carpenter, geofence location data is still highly invasive because it can expose sensitive information about a person’s associations and allow police to “follow” them into private spaces.

Second, the court found that even though investigators seek warrants for geofence location data, these searches are inherently unconstitutional. As the court noted, geofence warrants require a provider, almost always Google, to search “the entirety” of its reserve of location data “while law enforcement officials have no idea who they are looking for, or whether the search will even turn up a result.” Therefore, “the quintessential problem with these warrants is that they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search. That is constitutionally insufficient.”
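
To make the court’s point concrete, here is a minimal, hypothetical sketch of the kind of query a geofence warrant compels a provider to run. It is not Google’s actual system, and the `LocationRecord` format and `geofence_hits` function are invented for illustration; it simply shows that the inputs are a place and a time window, never a named suspect, and that answering requires scanning every user’s stored location history.

```python
# Hypothetical sketch of a geofence query. The inputs name a place and a
# time window -- never a suspect -- and the only way to answer is to scan
# every record the provider holds, for every user.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: int  # Unix seconds

def km_between(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def geofence_hits(all_records, center_lat, center_lon, radius_km, t_start, t_end):
    """Return the ID of every device seen inside the fence during the window.

    Note the dragnet: the loop must visit the provider's entire location
    store -- everyone's data -- to learn who "may turn up post-search."
    """
    hits = set()
    for rec in all_records:  # every user, not just suspects
        in_window = t_start <= rec.timestamp <= t_end
        in_fence = km_between(rec.lat, rec.lon, center_lat, center_lon) <= radius_km
        if in_window and in_fence:
            hits.add(rec.device_id)
    return hits
```

The probable cause behind such a warrant concerns a place and a time, but what is actually searched is the provider’s entire location database, which is the “general, exploratory rummaging” the court identified.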

Unsurprisingly, however, the court found that in 2018, police could have relied on such a warrant in “good faith,” because geofence technology was novel, and police reached out to other agencies with more experience for guidance. This means that the evidence they obtained will not be suppressed in this case.

Nevertheless, it is gratifying to see an appeals court recognize the fundamental invasions of privacy created by these warrants and uphold our constitutional tradition prohibiting general searches. Police around the country have increasingly relied on geofence warrants and other reverse warrants, and this opinion should act as a warning against narrow applications of Fourth Amendment precedent in these cases.

Related Cases: Carpenter v. United States
Andrew Crocker

Reintroducing the EFA

2 months ago

We're thrilled to share that the Electronic Frontier Alliance (EFA) has a fresh new look and a wealth of new resources for community organizers. EFF can’t be everywhere and in every fight, which is why back in 2016 we committed to building a network with grassroots organizations, and made the EFA a critical part of our work. Local organizers from within the community are better situated to build support and change in the long term. So when civil liberties and digital rights are under threat in your neck of the woods, we hope you find or become a local EFA member.

After eight very eventful years for local organizing, the EFA is going strong with over 70 active groups across the United States. To renew our support of the network, EFF revamped the look of the EFA and made a number of improvements to our online hub for all things EFA: https://efa.eff.org.

But the network is bigger than EFF. EFA is composed of its members, and relies on dedicated local advocates, educators, and hackers to help drive the work forward. If you’re part of a not-for-profit community group, we encourage you to apply.

JOIN EFA

Defend Digital Rights Locally

What Is the EFA?

The Electronic Frontier Alliance (EFA) is an information-sharing network of grassroots groups across the United States, administered by EFF’s team of organizers. All groups are totally independent—meaning no one is obliged to follow EFF’s lead to be supported. The result is a network with incredibly diverse beliefs, focuses, and tactics: from hacker spaces developing open-source software tools, to community ISPs, to student groups hosting surveillance self-defense workshops.

A few things do unify alliance members, though. All groups must be tied to a local community, meaning their work is based in a specific region or institution, with meaningful ways for other community members to get involved. Groups must also be not-for-profit, either unincorporated or registered as a nonprofit. Finally, all member organizations publicly endorse the EFA’s five core principles:

- Free expression: People should be able to speak their minds to whomever will listen.

- Security: Technology should be trustworthy and answer to its users.

- Privacy: Technology should allow private and anonymous speech, and let users set their own parameters about what to share with whom.

- Creativity: Technology should promote progress by allowing people to build on the ideas, creations, and inventions of others.

- Access to knowledge: Curiosity should be rewarded, not stifled.

How EFF Supports EFA Members

EFF is committed to building and strengthening the EFA network. EFF doesn’t bottleneck on-the-ground activists, or parachute into local communities with marching orders. Instead, we aim to build the network in autonomous and decentralized ways, helping build local power through base-building, and fostering more connections between aligned groups.

That’s not to say we stay on the sidelines: EFF’s organizers respond to requests from community groups with hands-on support. This includes helping groups run an effective local campaign, host successful events, write a local op-ed, or tackle the administrative headaches faced by new and growing groups. We also lend EFF’s platform by promoting local work. In short, membership comes with an EFF support line that, capacity permitting, can help make local work more impactful.

EFF’s organizing team also brings groups together through a number of members-only convenings. Exclusive EFA videoconferences are hosted every month, with talks and workshops from digital rights and organizing experts, as well as opportunities to brainstorm and workshop projects with other organizers in the network. Throughout the year, organizers also host in-person EFA meetups and socials across the country, and they will leverage EFF’s network to assist with the local networking necessary for coalition work.

As an added bonus, EFA groups also get discounts on EFF annual memberships, unlocking additional exclusive events, mailings, and member gifts. 

New Look, Site, and Resources

Bringing a new look to the EFA site (and new swag) was the perfect excuse to also extend and update our resources for organizers.

Allies Directory

EFF staff meet an endless stream of people frustrated with constant infringements of our rights, especially when these intrusions start hitting us at home. There are sometimes clear ways to take action when Congress or Big Tech is making bad decisions. But what about when the threat is in your own backyard?

That’s where the EFA allies directory (https://efa.eff.org/allies) comes in. Finding a local group can turn that pent-up frustration into action. Even if your nearest group has a different focus, its members will be like-minded digital rights defenders familiar with other local resources and organizations. Our site offers an easy way to filter groups by state and get a quick introduction to each group and how best to contact it.

Organizer Toolkits

While EFF’s organizing team is always eager to help groups grow, many groups run into the same hurdles. That’s why we’ve prepared several organizer toolkits with evergreen advice for starting and growing a group. These include:

  • Organizing events: Every event, from a regular meeting to hosting a conference, requires clear planning and many logistical considerations.
  • Social media advocacy: It can be challenging to make an impact online, but with some consistency and helpful tips it can be an effective tool for rallying support.
  • Building coalitions with sign-on letters: Approaching decision-makers with a host of groups supporting or opposing a policy in a sign-on letter is a powerful advocacy tool. If approached carefully, it can also serve as a starting point for continued coalition work and support across issues.
  • Traditional media tips: Talking to a reporter about your work can be stressful, but with the right preparation it helps spread your message and raise your group’s prestige.
  • Student organizing: Students must navigate unique and difficult dynamics when starting a group, maintaining it, and even protesting on campus.
  • Community agreements: Local groups benefit from consensus on everything from event expectations to community guidelines, making clear what behavior is encouraged and what is off-limits. This helps groups grow and keeps everyone safe.

These toolkits are designed to be easily printed and shared, and are available at launch in both English and Spanish.

Building a Movement Together

The relaunch of the EFA is not just a cosmetic change; it represents our renewed commitment to supporting grassroots digital rights advocacy. The strength of the EFA network lies in its diversity and the dedication of its members. In an era where civil liberties and digital rights are under constant threat, a coordinated and well-supported grassroots movement is essential for addressing the digital rights challenges of today, and tomorrow.

So EFF calls on you to get involved. Find a local group, discuss joining the EFA with your group, or even get some friends together to start a new one. We need people from all walks of life, with a range of experience and expertise, to be part of the work that will shape the future.

JOIN EFA

Defend Digital Rights Locally

For more information on how the EFA works and to join the fight, please check out our FAQ page or reach out to EFF’s organizing team at organizing[at]eff.org. Join us in building a future where digital rights are upheld and respected.

Rory Mir