U.S. Federal Employees: Plant Your Flag for Digital Freedoms Today!

3 months ago

Like clockwork, September is here—and so is the Combined Federal Campaign (CFC) pledge period!  

The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. You can now make a pledge to support EFF’s lawyers, technologists, and activists in the fight for privacy and free speech online. Last year members of the CFC community raised nearly $34,000 to support digital civil liberties. 

Giving to EFF through the CFC is easy! Just head over to GiveCFC.org and use our ID 10437. Once there, click DONATE to give via payroll deduction, credit/debit card, or e-check. If you have a renewing pledge, you can increase your support as well!

This year's campaign theme—GIVE HAPPY—shows that when U.S. federal employees and retirees give together, they make a meaningful difference to a countless number of individuals throughout the world. They ensure that organizations like EFF can continue working towards our goals even during challenging times. 

With support from those who pledged through the CFC last year, EFF has continued the fight for privacy and free speech online.

Federal employees and retirees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437 when you make a pledge today!

Christian Romero

EFF Calls For Release of Alexey Soldatov, "Father of the Russian Internet"

3 months ago

EFF was deeply disturbed to learn that Alexey Soldatov, known as the “father of the Russian Internet,” was sentenced in July to two years in prison by a Moscow court for alleged “misuse” of IP addresses.

In 1990, Soldatov led the Relcom computer network that made the first Soviet connection to the global internet. He also served as Russia’s Deputy Minister of Communications from 2008 to 2010.

Soldatov was convicted on charges related to an alleged deal to transfer IP addresses to a foreign organization. He and his lawyers have denied the accusations. His family, many supporters, and Netzpolitik suggest that the accusations are politically motivated. Soldatov’s former business partner, Yevgeny Antipov, was also sentenced to eighteen months in prison.

Soldatov was a trained nuclear scientist at the Kurchatov nuclear research institute who, during the Soviet era, built the Russian Institute for Public Networks (RIPN), which was responsible for administering and allocating IP addresses in Russia from the early 1990s onwards. The network RIPN created was called Relcom (RELiable COMmunication). During the 1991 KGB-led coup d’état, Relcom—unlike traditional media—remained uncensored. As his son, journalist Andrei Soldatov, recalls, Alexey Soldatov insisted on keeping the lines open under all circumstances.

Following the collapse of the Soviet Union, Soldatov ran Relcom as the first ISP in Russia and has since helped establish organizations that provide the technical backbone of the Russian Internet. For this long service, he has been dubbed “the father of RuNet” (the term used to describe the Russian-speaking internet). During the time that Soldatov served as Russia’s deputy minister of communications, he was instrumental in getting ICANN to approve the use of Cyrillic in domain names. He also rejected then-preliminary discussions about isolating the Russian internet from the global internet. 

We are deeply concerned that this is a politically motivated prosecution, and multiple reports suggest as much. Soldatov suffers from both prostate cancer and a heart condition, and this sentence would almost certainly further endanger his health.

His son Andrei Soldatov writes, “The Russian state, vindictive and increasingly violent by nature, decided to take his liberty, a perfect illustration of the way Russia treats the people who helped contribute to the modernization and globalization of the country.”

Because of our concerns, EFF calls for his immediate release.

Electronic Frontier Foundation

Victory! California Bill To Impose Mandatory Internet ID Checks Is Dead—It Should Stay That Way

3 months ago

A misguided bill that would have required many people to show ID to get online has died without getting a floor vote in the California legislature, where key deadlines for bill passage lapsed this weekend. Thank you to our supporters for helping us kill this wrongheaded bill, especially those of you who took the time to reach out to your legislators.

EFF opposed this bill from the start. Bills that allow politicians to define what is “sexually explicit” content and then enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. 

A.B. 3080 would have required any website with more than 33% “sexually explicit” content to erect an age-verification system, most likely by making users upload a scan of a government-issued ID. The proposal did not, and could not, differentiate between sites that consist largely of graphic sexual content and the huge array of sites that host some content appropriate for minors alongside other content geared toward adults. Bills like this are akin to state prosecutors insisting on ID uploads to turn on Netflix, regardless of whether the movie you’re seeking is G-rated or R-rated.

Political attempts to use pornography as an excuse to censor and control the internet are now almost 30 years old. These proposals persist despite the fact that imposing government overseers on what Americans read and watch is not only unconstitutional but broadly unpopular. In Reno v. ACLU, the Supreme Court struck down the indecency provisions of the Communications Decency Act, a 1996 law intended to keep “obscene or indecent” material away from minors. In 2004, the Supreme Court again rejected an age-gated internet in Ashcroft v. ACLU, upholding an injunction against a similar federal law of that era.

The right of adults to read and watch what they want online is settled law. It is also a right that the great majority of Americans want to keep. Age-gating systems that analyze and copy our biometric data, our government IDs, or both would be a huge privacy setback for Americans of all ages. Electronically uploading and copying IDs is far from the equivalent of an in-person card check. And such systems won’t be effective at moderating what children see—moderation that can and must be done by individuals and families.

Other states have passed online age-verification bills this year, including a Texas bill that EFF has asked the U.S. Supreme Court to evaluate. Tennessee’s age-verification bill even includes criminal penalties, allowing prosecutors to bring felony charges against anyone who “publishes or distributes”—i.e., links to—sexual material. 

California politicians should let this unconstitutional and censorious proposal fade away, and resist the urge to bring it back next year. Californians do not want mandatory internet ID checks, nor are they interested in fines and incarceration for those who fail to use them. 

Joe Mullin

EFF to Tenth Circuit: Protest-Related Arrests Do Not Justify Dragnet Device and Digital Data Searches

3 months ago

The Constitution prohibits dragnet device searches, especially when those searches are designed to uncover political speech, EFF explained in a friend-of-the-court brief filed in the U.S. Court of Appeals for the Tenth Circuit.

The case, Armendariz v. City of Colorado Springs, challenges device and data seizures and searches conducted by the Colorado Springs police after a 2021 housing rights march that the police deemed “illegal.” The plaintiffs in the case, Jacqueline Armendariz and a local organization called the Chinook Center, argue these searches violated their civil rights.

The case details repeated actions by the police to target and try to intimidate the plaintiffs and other local civil rights activists solely for their political speech. After the 2021 march, police arrested several protesters, including Ms. Armendariz. Police alleged Ms. Armendariz “threw” her bike at an officer as he was running; although the bike never touched the officer, police charged her with attempted simple assault. Police then used that charge to support warrants to seize and search six of her electronic devices—including several phones and laptops. The search warrant authorized police to comb through these devices for all photos, videos, messages, emails, and location data sent or received over a two-month period and to conduct a time-unlimited search of 26 keywords—including terms as broad and sweeping as “officer,” “housing,” “human,” “right,” “celebration,” “protest,” and several common names. Separately, police obtained a warrant to search all of the Chinook Center’s Facebook information and private messages sent and received by the organization for a week, even though the Center was not accused of any crime.
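To see why keyword lists like this function as a dragnet, consider a minimal sketch (the messages are invented and the terms are a subset of those reported in the warrant; this illustrates overbroad keyword matching, not the forensic tooling police actually used):

```python
# Illustration of why broad warrant keywords sweep in ordinary speech.
# Messages are invented; keywords are a subset of those reported in the warrant.
KEYWORDS = {"officer", "housing", "human", "right", "celebration", "protest"}

messages = [
    "Happy birthday! Hope the celebration goes well.",
    "The housing inspection got moved to Tuesday.",
    "Turn right at the next light, then park.",
    "HR said the human resources office is closed Friday.",
]

for msg in messages:
    hits = sorted(k for k in KEYWORDS if k in msg.lower())
    if hits:
        print(f"FLAGGED {hits}: {msg}")

# Every one of these mundane messages is flagged: an indiscriminate sweep,
# not a particularized search.
```

Every invented message above matches at least one warrant term, which is precisely the hallmark of a general warrant rather than a search for evidence of a specific crime.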

After Ms. Armendariz and the Chinook Center filed their civil rights suit, represented by the ACLU of Colorado, the defendants filed a motion to dismiss the case, arguing the searches were justified and, in any case, officers were entitled to qualified immunity. The district court agreed and dismissed the case. Ms. Armendariz and the Center appealed to the Tenth Circuit.

As explained in our amicus brief—which was joined by the Center for Democracy & Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—the devices searched contain a wealth of personal information. For that reason, and especially where, as here, political speech is implicated, it is imperative that warrants comply with the Fourth Amendment.

The U.S. Supreme Court recognized in Riley v. California that electronic devices such as smartphones “differ in both a quantitative and a qualitative sense” from other objects. Our electronic devices’ immense storage capacities mean that just one type of data can reveal more than previously possible because it can span years’ worth of information. For example, location data can reveal a person’s “familial, political, professional, religious, and sexual associations.” And combined with all of the other available data—including photos, video, and communications—a device such as a smartphone or laptop can store a “digital record of nearly every aspect” of a person’s life, “from the mundane to the intimate.” Social media data can also reveal sensitive, private information, especially with respect to users’ private messages.

It’s because our devices and the data they contain can be so revealing that warrants for this information must rigorously adhere to the Fourth Amendment’s requirements of probable cause and particularity.

Those requirements weren’t met here. The police’s warrants failed to establish probable cause that any evidence of the crime they charged Ms. Armendariz with—throwing her bike at an officer—would be found on her devices. And the search warrant, which allowed officers to rifle through months of her private records, was so overbroad and lacking in particularity as to constitute an unconstitutional “general warrant.” Similarly, the warrant for the Chinook Center’s Facebook messages lacked probable cause and was especially invasive given that access to these messages may well have allowed police to map activists who communicated with the Center and about social and political advocacy.

The warrants in this case were especially egregious because they appear designed to uncover First Amendment-protected activity. Where speech is targeted, the Supreme Court has recognized that it’s all the more crucial that warrants apply the Fourth Amendment’s requirements with “scrupulous exactitude” to limit an officer’s discretion in conducting a search. That did not happen here, and the searches thus implicated several of Ms. Armendariz’s and the Chinook Center’s First Amendment rights—including the right to free speech, the right to free association, and the right to receive information.

Warrants that fail to meet the Fourth Amendment’s requirements disproportionately burden disfavored groups. In fact, the Framers adopted the Fourth Amendment to prevent the “use of general warrants as instruments of oppression”—but as legal scholars have noted, law enforcement routinely uses low-level, highly discretionary criminal offenses to impose order on protests. Once arrests are made, they are often later dropped or dismissed—but the damage is done, because protesters are off the streets, and many may be chilled from returning. Protesters undoubtedly will be further chilled if an arrest for a low-level offense then allows police to rifle through their devices and digital data, as happened in this case.

The Tenth Circuit should let this case proceed. Allowing police to conduct a virtual fishing expedition through a protester’s devices, especially when the justification for the search is an arrest for a crime with no digital nexus, contravenes the Fourth Amendment’s purposes and chills speech. It is unconstitutional and should not be tolerated.

Brendan Gilligan

Americans Are Uncomfortable with Automated Decision-Making

3 months ago

Imagine a company you recently applied to work at used an artificial intelligence program to analyze your application to help expedite the review process. Does that creep you out? Well, you’re not alone.

Consumer Reports recently released a national survey finding that Americans are uncomfortable with the use of artificial intelligence (AI) and algorithmic decision-making in their day-to-day lives. The survey of 2,022 U.S. adults was administered by NORC at the University of Chicago and examined public attitudes on a variety of issues. Consumer Reports found:

  • Nearly three-quarters of respondents (72%) said they would be “uncomfortable”— including nearly half (45%) who said they would be “very uncomfortable”—with a job interview process that allowed AI to screen their interview by grading their responses and in some cases facial movements.
  • About two-thirds said they would be “uncomfortable”— including about four in ten (39%) who said they would be “very uncomfortable”— allowing banks to use such programs to determine if they were qualified for a loan or allowing landlords to use such programs to screen them as a potential tenant.
  • More than half said they would be “uncomfortable”— including about a third who said they would be “very uncomfortable”— with video surveillance systems using facial recognition to identify them, and with hospital systems using AI or algorithms to help with diagnosis and treatment planning.

The survey findings indicate that people feel disempowered by losing control over their digital footprints, and by corporations and government agencies adopting AI technology to make life-altering decisions about them. Yet states are moving at breakneck speed to implement AI “solutions” without first creating meaningful guidelines to address these reasonable concerns. In California, Governor Newsom issued an executive order to address government use of AI, and recently granted five vendors approval to test AI tools for a myriad of state agencies. The administration hopes to apply AI to such tasks as health-care facility inspections, assisting residents who are not fluent in English, and customer service.

The vast majority of Consumer Reports’ respondents (83%) said they would want to know what information was used to instruct an AI or computer algorithm making a decision about them. Another super-majority (91%) said they would want a way to correct the data used by such an algorithm.

As states explore how to best protect consumers as corporations and government agencies deploy algorithmic decision-making, EFF urges strict standards of transparency and accountability. Laws should have a “privacy first” approach that ensures people have a say in how their private data is used. At a minimum, people should have a right to access what data is being used to make decisions about them and have the opportunity to correct it. Likewise, agencies and businesses using automated decision-making should offer an appeal process. Governments should ensure that consumers have protections from discrimination in algorithmic decision-making by both corporations and the public sector. Another priority should be a complete ban on many government uses of automated decision-making, including predictive policing.

Whether it’s deciding who gets housing or the best mortgages, who gets an interview or a job, or whom law enforcement or ICE investigates, people are uncomfortable with algorithmic decision-making that affects their freedoms. Now is the time for strong legal protections.

Catalina Sanchez

The French Detention: Why We're Watching the Telegram Situation Closely

3 months 1 week ago

EFF is closely monitoring the situation in France in which Telegram’s CEO Pavel Durov was charged with having committed criminal offenses, most of them seemingly related to the operation of Telegram. This situation has the potential to pose a serious danger to security, privacy, and freedom of expression for Telegram’s 950 million users.  

On August 24th, French authorities detained Durov when his private plane landed in France. Since then, the French prosecutor has revealed that Durov’s detention was related to an ongoing investigation, begun in July, of an “unnamed person.” The investigation involves complicity in crimes presumably taking place on the Telegram platform, failure to cooperate with law enforcement requests for the interception of communications on the platform, and a variety of charges having to do with failure to comply with  French cryptography import regulations. On August 28, Durov was charged with each of those offenses, among others not related to Telegram, and then released on the condition that he check in regularly with French authorities and not leave France.  

We know very little about the Telegram-related charges, making it difficult to draw conclusions about how serious a threat this investigation poses to privacy, security, or freedom of expression on Telegram, or on online services more broadly. But it has the potential to be quite serious. EFF is monitoring the situation closely.  

There appear to be three categories of Telegram-related charges:  

  • First is the charge based on “the refusal to communicate upon request from authorized authorities, the information or documents necessary for the implementation and operation of legally authorized interceptions.” This seems to indicate that the French authorities sought Telegram’s assistance to intercept communications on Telegram.  
  • The second set of charges relates to “complicité” with crimes that were committed in some respect on or through Telegram. These charges specify “organized distribution of images of minors with a pedopornographic nature, drug trafficking, organized fraud, and conspiracy to commit crimes or offenses,” and “money laundering of crimes or offenses in an organized group.”
  • The third set of charges relates to Telegram’s failure to file a declaration required of those who import a cryptographic system into France.

Now we are left to speculate. 

It is possible that all of the charges derive from “the failure to communicate.” French authorities may be claiming that Durov is complicit with criminals because Telegram refused to facilitate the “legally authorized interceptions.” Similarly, the charges connected to the failure to file the encryption declaration likely also derive from the “legally authorized interceptions” being encrypted. France very likely knew for many years that Telegram had not filed the required declarations regarding its encryption, yet the company was not previously charged for that omission.

Refusal to cooperate with a valid legal order for assistance with an interception could be similarly prosecuted in most international legal systems, including in the United States. EFF has frequently contested the validity of such orders and the gag orders associated with them, and has urged services to contest them in court and pursue all appeals. But once such orders have been finally validated by courts, they must be complied with. The situation is more difficult where a nation lacks a properly functioning judiciary or due process, as in China or Saudi Arabia.

In addition to the refusal to cooperate with the interception, it seems likely that the complicité charges also, or instead, relate to Telegram’s failure to remove posts advancing crimes upon request or knowledge. Specifically, the charges of complicity in “the administration of an online platform to facilitate an illegal transaction” and “organized distribution of images of minors with a pedopornographic nature, drug trafficking, [and] organized fraud” could well be based on a failure to take down posts. An initial statement by Ofmin, the French agency established to investigate threats to child safety online, referred to a “lack of moderation” as being at the heart of its investigation. Under French law, Article 323-3-2, it is a crime to knowingly allow the distribution of illegal content or the provision of illegal services, or to facilitate payments for either.


In particular, this potential “lack of moderation” liability bears watching. If Durov is prosecuted simply because Telegram inadequately removed offending content that it was generally aware of, that could expose almost every other online platform to similar liability. It would also be concerning, though more in line with existing law, if the charges relate to an affirmative refusal to address specific posts or accounts, rather than to a generalized awareness. And both of these situations are very different from one in which France has evidence that Durov was more directly involved with those using Telegram for criminal purposes. Moreover, France will likely have to prove that Durov himself committed each of these offenses, not Telegram itself or others at the company.

EFF has raised serious concerns about Telegram’s behavior both as a social media platform and as a messaging app. In spite of its reputation as a “secure messenger,” only a very small subset of messages on Telegram are encrypted in a way that prevents the company from reading the contents of communications—that is, with end-to-end encryption. (Only one-to-one chats with the “secret chats” option enabled are end-to-end encrypted.) And even so, cryptographers have questioned the effectiveness of Telegram’s homebrewed cryptography. If the French government’s charges have to do with Telegram’s refusal to moderate or intercept these messages, EFF will oppose this case in the strongest terms possible, just as we have opposed all government threats to end-to-end encryption all over the world.
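Who holds the decryption keys is what separates these two models. Here is a minimal conceptual sketch of the difference, using the Python cryptography package’s Fernet symmetric cipher as a stand-in (this illustrates the key-custody distinction only; it is not Telegram’s actual MTProto protocol):

```python
# Conceptual sketch: client-server vs. end-to-end encryption.
# Fernet is a stand-in cipher; this is NOT Telegram's MTProto protocol.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Client-server model (Telegram's default "cloud chats"): the server holds
# the key, so the operator can decrypt messages and comply with interception orders.
server_key = Fernet.generate_key()
server = Fernet(server_key)
stored = server.encrypt(b"meet at noon")
print("operator reads:", server.decrypt(stored))

# End-to-end model (Telegram's opt-in "secret chats"): only the endpoints
# share the key; the server relays ciphertext it cannot decrypt.
endpoint_key = Fernet.generate_key()  # known only to sender and recipient
alice, bob = Fernet(endpoint_key), Fernet(endpoint_key)
relayed = alice.encrypt(b"meet at noon")
print("recipient reads:", bob.decrypt(relayed))
# The server never sees endpoint_key, so it cannot produce plaintext even
# under a lawful interception order.
```

In the second model, an interception order cannot simply be complied with, which is why demands to break end-to-end encryption put every user’s security at risk, not just the targets’.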


It is not yet clear whether Telegram users themselves, or those offering similar services to Telegram, should be concerned. French authorities may ask for technical measures that endanger the security and privacy of those users. Durov and Telegram may or may not comply. Those running similar services may not have anything to fear, or these charges may be the canary in the coalmine warning us all that French authorities intend to expand their inspection of messaging and social media platforms. It is simply too soon, and there is too little information for us to know for sure.  

It is not the first time Telegram’s laissez-faire attitude towards content moderation has led to government reprisals. In 2022, the company was forced to pay a fine in Germany for failing to establish a lawful way to report illegal content or to name an entity in Germany to receive official communications. Brazil fined the company in 2023 for failing to suspend accounts of supporters of former President Jair Bolsonaro. Nevertheless, this arrest marks an alarming escalation by a state’s authorities. We are monitoring the situation closely and will continue to do so.

David Greene

The California Supreme Court Should Help Protect Your Stored Communications

3 months 1 week ago

When you talk to your friends and family on Snapchat or Facebook, you should be assured that those services will not freely disclose your communications to the government or other private parties.

That is why the California Supreme Court must take up and reverse the appellate opinion in Snap v. The Superior Court of San Diego County. This opinion dangerously weakens the Stored Communications Act (SCA), one of the few federal privacy laws on the books. The SCA prevents certain communications providers from disclosing the content of your communications to private parties or the government without a warrant (subject to narrow exceptions).

EFF submitted an amicus letter to the court, along with the Center for Democracy & Technology.

The lower court incorrectly ruled that modern services like Snapchat and Facebook largely do not have to comply with the 1986 law. Since those companies already access the content of your communications for their own business purposes—including to target their behavioral advertising—the lower court held that they can also freely disclose the content of your communications to anyone.

The ruling came in the context of a criminal defendant who sought access to the communications of a deceased victim with a subpoena. In compliance with the law, both Meta and Snap resisted disclosing the information.

The lower court’s opinion conflicts with nearly 40 years of interpretation by Congress and other courts. It ignores the SCA’s primary purpose of protecting your communications from disclosure. And the opinion gives too much weight to companies’ terms of service. Those terms, which almost no one reads, are where most companies bury their own right to access your communications.

There is no doubt that companies should also be restricted in how they access and use your data, and we need stronger laws to make that happen. For years, EFF has advocated for comprehensive data privacy legislation, including data minimization and a ban on online behavioral advertising. But that does not affect the current analysis of the SCA, which protects against disclosure now.

If the California Supreme Court does not take this up, Meta, Snap, and other providers would be allowed to voluntarily disclose the content of their users’ communications to any other corporations for any reason, to parties in civil litigation, and to the government without a warrant. Private parties could also compel disclosure with a mere subpoena.

Mario Trujillo

Copyright Is Not a Tool to Silence Critics of Religious Education

3 months 1 week ago

Copyright law is not a tool to punish or silence critics. This is a principle so fundamental that it is the ur-example of fair use, which typically allows copying another’s creative work when necessary for criticism. But sometimes, unscrupulous rightsholders misuse copyright law to bully critics into silence by filing meritless lawsuits, threatening potentially enormous personal liability unless they cease speaking out. That’s why EFF is defending Zachary Parrish, a parent in Indiana, against a copyright infringement suit by LifeWise, Inc.

LifeWise produces controversial “released time” religious education programs for public elementary school students during school hours. After encountering the program at his daughter’s public school, Mr. Parrish co-founded “Parents Against LifeWise,” a group that strives to educate and warn others about the harms they believe LifeWise’s programs cause. To help other parents make fully informed decisions about signing their children up for a LifeWise program, Mr. Parrish obtained a copy of LifeWise’s elementary school curriculum—which the organization kept secret from everyone except instructors and enrolled students—and posted it to the Parents Against LifeWise website. LifeWise sent a copyright takedown to the website’s hosting provider to get the curriculum taken down, and followed up with an infringement lawsuit against Mr. Parrish.

EFF filed a motion to dismiss LifeWise’s baseless attempt to silence Mr. Parrish. As we explained to the court, Mr. Parrish’s posting of the curriculum was a paradigmatic example of fair use, an important doctrine that allows critics like Mr. Parrish to comment on, criticize, and educate others on the contents of a copyrighted work. LifeWise’s own legal complaint shows why Mr. Parrish’s use was fair: “his goal was to gather information and internal documents with the hope of publishing information online which might harm LifeWise’s reputation and galvanize parents to oppose local LifeWise Academy chapters in their communities.” This is a mission of public advocacy and education that copyright law protects. In addition, Mr. Parrish’s purpose was noncommercial: far from seeking to replace or compete with LifeWise, he posted the curriculum to encourage others to think carefully before signing their children up for the program. And posting the curriculum doesn’t harm LifeWise—at least not in any way that copyright law was meant to address. Just like copyright doesn’t stop a film critic from using scenes from a movie as part of a devastating review, it doesn’t stop a concerned parent from educating other parents about a controversial religious school program by showing them the actual content of that program.

Early dismissals in copyright cases against fair users are crucial because, although fair use protects lots of important free expression like the commentary and advocacy of Mr. Parrish, it can be ruinously expensive and chilling to fight for those protections. The high cost of civil discovery and the risk of astronomical statutory damages—which reach as high as $150,000 per work in certain cases—can lead would-be fair users to self-censor for fear of invasive legal process and financial ruin.

Early dismissal helps prevent copyright holders from using the threat of expensive, risky lawsuits to silence critics and control public conversations about their works. It also sends a message to others that their right to free expression doesn’t depend on having enough money to defend it in court or having access to help from organizations like EFF. While we are happy to help, we would be even happier if no one needed our help for a problem like this ever again.

When society loses access to critical commentary and the public dialogue it enables, we all suffer. That’s why it is so important that courts prevent copyright law from being used to silence criticism and commentary. We hope the court will do so here, and dismiss LifeWise’s baseless complaint against Mr. Parrish.

Mitch Stoltz

Backyard Privacy in the Age of Drones

3 months 1 week ago

This article was originally published by The Legal Aid Society's Decrypting a Defense Newsletter on August 5, 2024 and is reprinted here with permission.

Police departments and law enforcement agencies are increasingly collecting personal information using drones, also known as unmanned aerial vehicles. In addition to high-resolution photographic and video cameras, police drones may be equipped with myriad spying payloads, such as live-video transmitters, thermal imaging, heat sensors, mapping technology, automated license plate readers, cell site simulators, cell phone signal interceptors and other technologies. Captured data can later be scrutinized with backend software tools like license plate readers and face recognition technology. There have even been proposals for law enforcement to attach lethal and less-lethal weapons to drones and robots. 

Over the past decade or so, police drone use has dramatically expanded. The Electronic Frontier Foundation’s Atlas of Surveillance lists more than 1,500 law enforcement agencies across the US that have been reported to employ drones. The result is that backyards, which are part of the constitutionally protected curtilage of a home, are frequently being captured, either intentionally or incidentally. In grappling with the legal implications of this phenomenon, we are confronted by a pair of U.S. Supreme Court cases from the 1980s: California v. Ciraolo and Florida v. Riley. There, the Supreme Court ruled that warrantless aerial surveillance conducted by law enforcement in low-flying manned aircraft did not violate the Fourth Amendment because there was no reasonable expectation of privacy from what was visible from the sky. Although there are fundamental differences between surveillance by manned aircraft and by drones, some courts have extended the analysis to situations involving drones, shutting the door to federal constitutional challenges.

Yet Americans, legislators, and even judges have long voiced serious worries about the threat of rampant and unchecked aerial surveillance. A couple of years ago, the Fourth Circuit found in Leaders of a Beautiful Struggle v. Baltimore Police Department that a mass aerial surveillance program (using manned aircraft) covering most of the city violated the Fourth Amendment. The exponential surge in police drone use has only heightened the privacy concerns underpinning that and similar decisions. Unlike the manned aircraft in Ciraolo and Riley, drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. Additionally, drones are smaller and easier to operate and can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters effectively functions as notice to those who are being watched, whereas drones can easily record information surreptitiously.

In response to the concerns regarding drone surveillance voiced by civil liberties groups and others, some law enforcement agencies, like the NYPD, have pledged to abide by internal policies to refrain from warrantless use over private property. But without enforcement mechanisms, those empty promises are easily discarded by officials when they consider them inconvenient, as NYC Mayor Eric Adams did in announcing that drones would, in fact, be deployed to indiscriminately spy on backyard parties over Labor Day.

Barring a seismic shift away from Ciraolo and Riley by the U.S. Supreme Court (which seems nigh impossible given the current bench’s approach to the Fourth Amendment), protection from warrantless aerial surveillance—and successful legal challenges—will have to come from the states. Indeed, six months after Ciraolo was decided, the California Supreme Court held in People v. Cook that under the state’s constitution, an individual had a reasonable expectation that cops will not conduct warrantless surveillance of their backyard from the air. More recently, other states, such as Hawai’i, Vermont, and Alaska, have similarly relied on their state constitutions’ Fourth Amendment corollaries to find warrantless aerial surveillance improper. Some states have also passed new laws regulating governmental drone use. And at least half a dozen states, including Florida, Maine, Minnesota, Nevada, North Dakota, and Virginia, have statutes requiring warrants (with exceptions) for police drone use.

Law enforcement’s use of drones will only proliferate in the coming years, and drone capabilities continue to evolve rapidly. Courts and legislatures must keep pace to ensure that privacy rights do not fall victim to the advancement of technology.

For more information on drones and other surveillance technologies, please visit EFF’s Street Level Surveillance guide at https://sls.eff.org/.

Hannah Zhao

Geofence Warrants Are 'Categorically' Unconstitutional | EFFector 36.11

3 months 2 weeks ago

School is back in session, so prepare for your first lesson from EFF! Today you'll learn about the latest court ruling on the dangers of geofence warrants, our letter urging Bumble to require opt-in consent to sell user data, and the continued fight against the UN Cybercrime Treaty.

If you'd like future lessons about the fight for digital freedoms, you're in luck! We've got you covered with our EFFector newsletter. You can read the full issue here, or subscribe to get the next one in your inbox automatically. You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube:


EFFECTOR 36.11 - Geofence Warrants Are 'Categorically' Unconstitutional

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

3 months 2 weeks ago

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  


Under NO FAKES, any human has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections—a crucial liability shield for platforms and anyone else that hosts or shares user-generated content—will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn’t create the replica and have no way to verify whether it was authorized or whether the claim is valid. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

The bill also includes a safe harbor scheme modeled on the DMCA notice and takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content—a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content ID. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty—which will add up fast. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, they only have to cough up $1 million if they guess wrong.
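To make “add up fast” concrete, here is a back-of-the-envelope sketch (the copy counts are hypothetical; the $5,000-per-violation and $1 million figures are the ones described above):

```python
# Hypothetical back-of-the-envelope penalty math under NO FAKES as described
# above: $5,000 per copy made, transmitted, or displayed. Copy counts invented.
PENALTY_PER_VIOLATION = 5_000  # dollars

for copies in (100, 10_000, 1_000_000):
    print(f"{copies:>9,} copies -> ${copies * PENALTY_PER_VIOLATION:,}")

# Prints:
#       100 copies -> $500,000
#    10,000 copies -> $50,000,000
# 1,000,000 copies -> $5,000,000,000
# Even the "objectively reasonable belief" fallback still costs $1,000,000.
```

At viral-content scale, liability reaches into the billions of dollars, which is why platforms facing such exposure would rather over-remove than guess wrong.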

All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.  


What is worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.  

NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable. But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusion about the limits of IP protection that we’ve already seen), that’s pretty cold comfort.

These significant flaws should doom the bill, and that’s a shame. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this, but Congress should be considering narrowly-targeted and proportionate proposals to fill in the gaps.  

The NO FAKES Act is neither targeted nor proportionate. It’s also a significant Congressional overreach—the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.  

The best we can say about NO FAKES is that it has provisions protecting individuals with unequal bargaining power in negotiations around use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors have some additional protections, such as a limit on how long their rights can be licensed before they are adults.   

TAKE ACTION

Throw Out the NO FAKES Act and Start Over

But the costs of the bill far outweigh the benefits. NO FAKES creates an expansive and confusing new intellectual property right that lasts far longer than is reasonable or prudent, and has far too few safeguards for lawful speech. The Senate should throw it out and start over. 

Corynne McSherry

Court to California: Try a Privacy Law, Not Online Censorship

3 months 2 weeks ago

In a victory for free speech and privacy, a federal appellate court confirmed last week that parts of the California Age-Appropriate Design Code Act likely violate the First Amendment, and that other parts require further review by the lower court.

The U.S. Court of Appeals for the Ninth Circuit correctly rejected rules requiring online businesses to opine on whether the content they host is “harmful” to children, and then to mitigate such harms. EFF and CDT filed a friend-of-the-court brief in the case earlier this year arguing for this point.

The court also provided a helpful roadmap for legislatures on how to write privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the Act’s age-verification provision. We will continue to argue, in this case and others, that this provision violates the First Amendment rights of children and adults.

The Act, The Rulings, and Our Amicus Brief

In 2022, California enacted its Age-Appropriate Design Code Act (AADC). Three of the law’s provisions are crucial for understanding the court’s ruling.

  1. The Act requires an online business to write a “Data Protection Impact Assessment” for each of its features that children are likely to access. It must also address whether the feature’s design could, among other things, “expos[e] children to harmful, or potentially harmful, content.” Then the business must create a “plan to mitigate” that risk.
  2. The Act requires online businesses to follow enumerated data privacy rules specific to children. These include data minimization and limits on processing precise geolocation data.
  3. The Act requires online businesses to “estimate the age of child users,” to an extent proportionate to the risks arising from the business’s data practices, or to apply child data privacy rules to all consumers.

In 2023, a federal district court blocked the law, ruling that it likely violates the First Amendment. The state appealed.

EFF’s brief in support of the district court’s ruling argued that the Act’s age-verification provision and vague “harmful” standard are unconstitutional; that these provisions cannot be severed from the rest of the Act; and thus that the entire Act should be struck down. We conditionally argued that if the court rejected our initial severability argument, the Act’s privacy principles could survive the reduced judicial scrutiny applied to such laws and still safeguard people’s personal information. This is especially true given the government’s many substantial interests in protecting data privacy.

The Ninth Circuit affirmed the preliminary injunction as to the Act’s Impact Assessment provisions, explaining that they likely violate the First Amendment on their face. The appeals court vacated the preliminary injunction as to the Act’s other provisions, reasoning that the lower court had not applied the correct legal tests. The appeals court sent the case back to the lower court to do so.

Good News: No Online Censorship

The Ninth Circuit’s decision to prevent enforcement of the AADC’s impact assessments on First Amendment grounds is a victory for internet users of all ages because it ensures everyone can continue to access and disseminate lawful speech online.

The AADC’s central provisions would have required a diverse array of online services—from social media to news sites—to review the content on their sites and consider whether children might view or receive harmful information. EFF argued that this provision imposed content-based restrictions on what speech services could host online and was so vague that it could reach lawful speech that is upsetting, including news about current events.

The Ninth Circuit agreed with EFF that the AADC’s “harmful to minors” standard was vague and likely violated the First Amendment for several reasons, including because it “deputizes covered businesses into serving as censors for the State.”

The court ruled that these AADC censorship provisions were subject to the highest form of First Amendment scrutiny because they restricted content online, a point EFF argued. The court rejected California’s argument that the provisions should be subjected to reduced scrutiny under the First Amendment because they sought to regulate commercial transactions.

“There should be no doubt that the speech children might encounter online while using covered businesses’ services is not mere commercial speech,” the court wrote.

Finally, the court ruled that the AADC’s censorship provisions likely failed under the First Amendment because they are not narrowly tailored and California has less speech-restrictive ways to protect children online.

EFF is pleased that the court saw AADC’s impact assessment requirements for the speech restrictions that they are. With those provisions preliminarily enjoined, everyone can continue to access important, lawful speech online.

More Good News: A Roadmap for Privacy-First Laws

The appeals court did not rule on whether the Act’s data privacy provisions could survive First Amendment review. Instead, it directed the lower court in the first instance to apply the correct tests.

In doing so, the appeals court provided guideposts for how legislatures can write data privacy laws that survive First Amendment review. Spoiler alert: enact a “privacy first” law, without unlawful censorship provisions.

Dark patterns. Some privacy laws prohibit user interfaces that have the intent or substantial effect of impairing autonomy and choice. The appeals court reversed the preliminary injunction against the Act’s dark patterns provision, because it is unclear whether dark patterns are even protected speech, and if so, what level of scrutiny they would face.

Clarity. Some privacy laws require businesses to use clear language in their published privacy policies. The appeals court reversed the preliminary injunction against the Act’s clarity provision, because there wasn’t enough evidence to say whether the provision would run afoul of the First Amendment. Indeed, “many” applications will involve “purely factual and non-controversial” speech that could survive review.

Transparency. Some privacy laws require businesses to disclose information about their data processing practices. In rejecting the Act’s Impact Assessments, the appeals court rejected an analogy to the California Consumer Privacy Act’s unproblematic requirement that large data processors annually report metrics about consumer requests to access, correct, and delete their data. Likewise, the court reserved judgment on the constitutionality of two of the Act’s own “more limited” reporting requirements, which did not require businesses to opine on whether third-party content is “harmful” to children.

Social media. Many privacy laws apply to social media companies. While EFF is second to none in defending the First Amendment right to moderate content, we nonetheless welcome the appeals court’s rejection of the lower court’s “speculat[ion]” that the Act’s privacy provisions “would ultimately curtail the editorial decisions of social media companies.” Some right-to-curate allegations against privacy laws might best be resolved with “as-applied claims” in specific contexts, instead of on their face.

Ninth Circuit Punts on the AADC’s Age-Verification Provision

The appellate court left open an important issue for the trial court to take up: whether the AADC’s age-verification provision violates the First Amendment rights of adults and children by blocking them from lawful speech, frustrating their ability to remain anonymous online, and chilling their speech to avoid danger of losing their online privacy.

EFF also argued in our Ninth Circuit brief that the AADC’s age-verification provision was similar to many other laws that courts have repeatedly found to violate the First Amendment.

The Ninth Circuit missed a great opportunity to confirm that the AADC’s age-verification provision violated the First Amendment. The court didn’t pass judgment on the provision, but rather ruled that the district court had failed to adequately assess the provision to determine whether it violated the First Amendment on its face.

As EFF’s brief argued, the AADC’s age-estimation provision is pernicious because it restricts everyone’s access to lawful speech online, by requiring adults to show proof that they are old enough to access lawful content the AADC deems harmful.

We look forward to the district court recognizing the constitutional flaws of the AADC’s age-verification provision once the issue is back before it.

Adam Schwartz