Surveillance and the U.S.-Mexico Border: 2023 Year in Review

2 months 1 week ago

The U.S.-Mexico border continues to be one of the most politicized spaces in the country, with leaders in both political parties supporting massive spending on border security, including technological solutions such as the so-called "virtual wall." We spent the year documenting surveillance technologies at the border and their impacts on the civil liberties and human rights of those who live in the borderlands.

In early 2023, EFF staff completed the last of three trips to the U.S.-Mexico border, where we met with the residents, activists, humanitarian organizations, law enforcement officials, and journalists whose work is directly impacted by the expansion of surveillance technology in their communities.

Using information from those trips, as well as from public records, satellite imagery, and exploration in virtual reality, we released a map and dataset of more than 390 surveillance towers installed by Customs and Border Protection (CBP) along the U.S.-Mexico border. Our data serves as a living snapshot of the so-called "virtual wall," from the California coast to the lower tip of Texas. The data also lays the foundation for many types of research ranging from border policy to environmental impacts.

We also published an in-depth report on Plataforma Centinela (Sentinel Platform), an aggressive new surveillance system developed by Chihuahua state officials in collaboration with a notorious Mexican security contractor. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project is unlike anything seen before along the U.S.-Mexico border. The strategy adopts nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more. It also involves a 20-story high-rise in downtown Ciudad Juarez, known as the Torre Centinela (Sentinel Tower), that will serve as the central node of the surveillance operation. We’ll continue to keep a close eye on the development of this surveillance panopticon.

Finally, we weighed in on the dangers border surveillance poses to civil liberties by filing an amicus brief in the U.S. Court of Appeals for the Ninth Circuit. The case, Phillips v. U.S. Customs and Border Protection, was filed after a 2019 news report revealed the federal government was conducting surveillance of journalists, lawyers, and activists thought to be associated with the so-called “migrant caravan” coming through Central America and Mexico. The lawsuit argues, among other things, that the agencies collected information on the plaintiffs in violation of their First Amendment rights to free speech and free association, and that the illegally obtained information should be “expunged” or deleted from the agencies’ databases. Unfortunately, both the district court and a three-judge panel of the Ninth Circuit ruled against the plaintiffs. The plaintiffs asked for the panel to reconsider its decision, or for the full Ninth Circuit to rehear the case. In our amicus brief, we argued that the plaintiffs have privacy interests in personal information compiled by the government, even when the individual bits of data are available from public sources, and especially when the data collection is facilitated by technology. We also argued that, because the government stored plaintiffs’ personal information in various databases, there is a sufficient risk of future harm due to lax policies on data sharing, abuse, or data breach.

Undoubtedly, the 2024 election will only heighten the focus on border surveillance technologies. As we’ve seen time and again, increasing surveillance at the border is a bipartisan strategy, and we don’t expect that to change in the new year.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Saira Hussain

2023 Year in Review

2 months 1 week ago

At the end of every year, we look back at the last 12 months and evaluate what has changed for the better (and worse) for digital rights. While we can be frustrated—hello, ongoing attacks on encryption—overall it's always an exhilarating reminder of just how far we've come since EFF was founded over 33 years ago. The scale alone is breathtaking. Digital rights started as a niche, future-focused issue that we would struggle to explain to nontechnical people; now it's deeply embedded in all of our lives.

The legislative, court, and agency fights around the world this year also helped us see and articulate a common thread: the need for a "privacy first" approach to laws and technology innovation. As we wrote in a new white paper aptly entitled "Privacy First: A Better Way to Address Online Harms," many of the ills of today’s internet have a single thing in common: they are built on a business model of corporate surveillance and behavioral advertising. Addressing that problem could help us make great strides on a range of issues, and avoid many of the likely terrible impacts of today's proposed "solutions."

Instead of considering proposals that would censor speech and put children's access to internet resources at the whims of state attorneys general, we could be targeting the root cause of the concern: internet companies' collection, storage, sale, and use of our personal information and activities to feed their algorithms and ad services. Police go straight to tech companies for your data or the data on everyone who was near a certain location. And that's when they even bother with a court-overseen process, rather than simply issuing a subpoena, showing up and demanding it, or buying data from data brokers. If we restricted what data tech companies could keep and for how long, we could also tackle this problem at the source. Instead of unconstitutional link taxes to save local journalism, laws that attack behavioral advertising, which is built on data collection, would break the ad and data monopoly that put journalists at the mercy of Big Tech in the first place.

Concerns about what is feeding AI, social media algorithms, government spying (whether by your own government or another country's), online harassment, access to healthcare: so much can be better protected if we address privacy first. EFF knows this, and it's why, in 2023, we did things like launch the Tor University Challenge, urge the Supreme Court to recognize that the Fifth Amendment protects you from being forced to give your phone's passcode to police, and work to fix the dangerously flawed UN Cybercrime Treaty. Most recently, we celebrated Google's decision to limit the data collected and kept in its "Location History" as a potentially huge step to prevent geofence warrants that use Google's storehouse of location data to conduct massive, unconstitutional searches sweeping in many innocent bystanders.

Of course, as much as individuals need more privacy, we also need more transparency, especially from our governments and the big corporations that rule so much of our digital lives. That's why EFF urged the Supreme Court to overturn an order preventing Twitter—now X—from publishing a transparency report with data about what, exactly, government agents have asked the company for. It's why we won an important victory in keeping laws and regulations online and accessible. And it's why we defended the Internet Archive from an attack by major publishers seeking to cripple libraries' ability to give the rest of us access to knowledge into the digital age.

All of that barely scratches the surface of what we've been doing this year. But none of it would be possible without the strong partnership of our members, supporters, and all of you who stood up and took action to build a better future. 

EFF has an annual tradition of writing several blog posts on what we’ve accomplished this year, what we’ve learned, and where we have more to do. We will update this page with new stories about digital rights in 2023 every day between now and the new year.

Cindy Cohn

FTC’s Rite Aid Ruling Rightly Renews Scrutiny of Face Recognition

2 months 1 week ago

The Federal Trade Commission on Tuesday announced action against the pharmacy chain Rite Aid for its use of face recognition technology in hundreds of stores. The regulator found that Rite Aid deployed a massive, error-riddled surveillance program, chose vendors that could not properly safeguard the personal data the chain hoarded, and attempted to keep it all under wraps. Under a proposed settlement, Rite Aid can't operate a face recognition system in any of its stores for five years.

EFF advocates for laws that require companies to get clear, opt-in consent from any person before scanning their faces. Rite Aid's program, as described in the complaint, would violate such laws. The FTC’s action against Rite Aid illustrates many of the problems we have raised about face recognition—including how data collected for face recognition systems is often insufficiently protected, and how systems are often deployed in ways that disproportionately hurt BIPOC communities.

The FTC’s complaint describes a face recognition system that often relied on "low-quality" images to identify so-called “persons of interest,” whom the chain then instructed staff to ask to leave its stores.

From the FTC's press release on the ruling:

According to the complaint, Rite Aid contracted with two companies to help create a database of images of individuals—considered to be “persons of interest” because Rite Aid believed they engaged in or attempted to engage in criminal activity at one of its retail locations—along with their names and other information such as any criminal background data. The company collected tens of thousands of images of individuals, many of which were low-quality and came from Rite Aid’s security cameras, employee phone cameras and even news stories, according to the complaint.

Rite Aid's system falsely flagged numerous customers, according to the complaint, including an 11-year-old girl whom employees searched based on a false-positive result. Another unnamed customer quoted in the complaint told Rite Aid, "Before any of your associates approach someone in this manner they should be absolutely sure because the effect that it can [have] on a person could be emotionally damaging.... [E]very black man is not [a] thief nor should they be made to feel like one.”

Even if Rite Aid's face recognition technology had been completely accurate (and it clearly was not), the way the company deployed it was wrong. Rite Aid scanned everyone who came into certain stores and matched them against an internal list. Any company that does this assumes the guilt of everyone who walks in the door. And, as we have pointed out time and again, that assumption of guilt doesn't fall on all customers equally: People of color, who are already historically over-surveilled, are the ones who most often find themselves under new surveillance.

As the FTC explains in its complaint (emphasis added):

"[A]lthough approximately 80 percent of Rite Aid stores are located in plurality-White (i.e., where White people are the single largest group by race or ethnicity) areas, about 60 percent of Rite Aid stores that used facial recognition technology were located in plurality non-White areas. As a result, store patrons in plurality-Black, plurality-Asian, and plurality-Latino areas were more likely to be subjected to and surveilled by Rite Aid’s facial recognition technology."

The FTC's ruling rightly pulls the many problems with facial recognition into the spotlight. It also proposes remedies for the many ways Rite Aid failed to ensure its system was safe and functional, failed to train employees on how to interpret results, and failed to evaluate whether its technology was harming its customers.

We encourage lawmakers to go further. They must enact laws that require businesses to get opt-in consent before collecting or disclosing a person’s biometrics. This will ensure that people can make their own decisions about whether to participate in face recognition systems and know in advance which companies are using them. 

Hayley Tsukayama

Victory: Utah Supreme Court Upholds Right to Refuse to Tell Cops Your Passcode

2 months 1 week ago

Last week, the Utah Supreme Court ruled that prosecutors violated a defendant’s Fifth Amendment privilege against self-incrimination when they presented testimony about his refusal to give police the passcode to his cell phone. In State v. Valdez, the court found that verbally telling police a passcode is “testimonial” under the Fifth Amendment, and that the so-called foregone conclusion exception does not apply to “ordinary testimony” like this. This closely tracks arguments in the amicus brief EFF and the ACLU filed in the case.

The Utah court’s opinion is the latest in a thicket of state supreme court opinions dealing with whether law enforcement agents can compel suspects to disclose or enter their passwords. Last month, EFF supported a petition asking the U.S. Supreme Court to review People v. Sneed, an Illinois Supreme Court opinion that reached a contrary conclusion. As we explained in that brief, courts around the country are struggling to apply Fifth Amendment case law to the context of compelled disclosure and entry of passcodes.

The Fifth Amendment privilege protects suspects from being forced to provide “testimonial” answers to incriminating lines of questioning. So it would seem straightforward that asking “what is your passcode?” should be off limits. Indeed, the Utah Supreme Court had no trouble finding that verbally disclosing a passcode was protected as a “traditionally testimonial communication.” Notably, even this straightforward rule has drawn disagreement from the New Jersey Supreme Court. However, many cases—like the Sneed case from Illinois—involve a less clear demand by law enforcement: “tell us your passcode or just enter it.”

Unfortunately, many courts, including Utah, have applied a different standard to entering rather than disclosing a passcode. Under this reasoning, verbally telling police a passcode is explicitly testimonial, whereas entering a passcode is only implicitly testimonial as an “act of production,” comparable to turning over incriminating documents in response to a subpoena. But as we’ve argued, entering a passcode should be treated as purely testimonial in the same way that nodding or shaking your head in response to a question is. More fundamentally, the U.S. Supreme Court has held that even testimonial “acts of production,” like assembling documents in response to a subpoena, are privileged and cannot be compelled without expansive grants of immunity.

A related issue has generated even more confusion: whether police can compel a suspect to enter a passcode because they claim that the testimony it implies is a “foregone conclusion.” The foregone conclusion “exception” stems from a single U.S. Supreme Court case, Fisher v. United States, involving specific tax records—a far cry from a world where we carry our entire life history around on a phone. Nevertheless, prosecutors routinely argue it applies any time the government can show suspects know the passcode to their phones. Even Supreme Court justices like Antonin Scalia and Clarence Thomas have viewed Fisher as a historical outlier, and it should not be the basis of such a dramatic erosion of Fifth Amendment rights.

Thankfully, the Utah Supreme Court held that the foregone conclusion doctrine had no application in a case involving verbal testimony, but it left open the possibility of a different rule in cases involving compelled entry of a passcode. Make no mistake, Valdez is a victory for Utahns’ right to refuse to participate in their own investigation and prosecution. But we will continue to fight to ensure this right is given its full measure across the country.

Related Cases: Andrews v. New Jersey
Andrew Crocker

Does Less Consumer Tracking Lead to Less Fraud?

2 months 1 week ago

Here’s another reason to block digital surveillance: it might reduce financial fraud. That’s the upshot of a small but promising study published as a National Bureau of Economic Research (NBER) working paper, “Consumer Surveillance and Financial Fraud.”

Authors Bo Bian, Michaela Pagel and Huan Tang investigated the relationship between the rollout of Apple’s App Tracking Transparency (ATT) and reports of consumer financial fraud. Many apps can track users across apps or websites owned by other companies. By default, Apple's ATT opted all iPhone users out of tracking, which meant that apps and websites no longer received user identifiers unless they obtained user permission. 

The highlight of the research is that Apple users were less likely to be victims of financial fraud after Apple implemented the App Tracking Transparency policy. The results showed that a 10% increase in the share of Apple users in a particular ZIP code led to a roughly 3% reduction in financial fraud complaints.

The Methodology 

The authors designed a complicated methodology for this study, but here are the basics for those who don’t have time to tackle the actual paper. 

The authors primarily use the number of financial fraud complaints and the amount of money lost due to fraud to track how much fraud is happening. These figures are obtained from the Consumer Financial Protection Bureau (CFPB) and Federal Trade Commission (FTC). The researchers used machine learning and keyword searches to narrow the complaints down to those related to financial fraud that was caused by lax data privacy as opposed to other types of financial fraud. They concluded that complaints in certain product categories—like credit reporting and debt collection—are most likely to implicate the lack of data privacy. 

The study used data acquired from a company called Safegraph to determine the share of iPhone users at the ZIP code level. It then estimated the effect of Apple’s ATT on the number of financial fraud complaints in each ZIP code. The researchers found a noticeable, measurable reduction in complaints for iPhone users after ATT was implemented. They also investigated variation in this reduction across different demographic groups and found that the effect is stronger for minorities, women, and younger people—suggesting that these groups, which may have been more vulnerable to fraud before, saw a greater increase in protection when Apple turned on ATT.

To test the accuracy and reliability of their results, the researchers employed many different methods typically used in statistical analysis, including placebo tests, robustness checks, and Poisson regression. In lay terms, these methods test the results against assumptions, the potential effects of other factors and alternative specifications, and variable conditions.

These methods help establish causation (as opposed to mere correlation), in part by ruling out other possible causes. Although one can never be 100% sure that a result was caused by something in a regression analysis, these methods are popularly used to reasonably infer causation and the report meticulously applies them. 
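To make the methodology concrete, here is a minimal sketch of the kind of ZIP-code-level Poisson regression described above, written in Python with the statsmodels library. This is not the authors' code; the input file, column names, and exact model specification are assumptions for illustration only.

    # Minimal sketch of a ZIP-code-level Poisson regression in the spirit of
    # the study described above; not the authors' actual code. The input file
    # and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per ZIP code and period, with a count of
    # privacy-related fraud complaints (from CFPB/FTC data) and the estimated
    # share of iPhone users in that ZIP code.
    df = pd.read_csv("zip_fraud_panel.csv")
    df["post_att"] = (df["period"] == "post").astype(int)  # 1 after ATT rollout

    # Poisson regression on complaint counts. The interaction coefficient asks:
    # did ZIP codes with more iPhone users see a larger drop in complaints
    # after ATT? Placebo tests and robustness checks would rerun this with
    # shifted rollout dates and alternative specifications.
    model = smf.glm(
        "fraud_complaints ~ post_att * iphone_share",
        data=df,
        family=sm.families.Poisson(),
    ).fit()
    print(model.summary())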

What This Means 

While the scope of the data is small, this is the first significant research we’ve seen that connects increased privacy with decreased fraud. This should matter to all of us. It reinforces that when companies take steps to protect our privacy, they also help protect us from financial fraud. This is a point we made in our Privacy First whitepaper, which discusses the many harms that a robust privacy system can protect us from.  Lawmakers and regulators should take note.   

In implementing ATT, Apple has proven something EFF has long said: with over 75% of consumers keeping all tracking off rather than opting in as of May 2022, it’s clear that most consumers want more privacy than they are currently getting under the surveillance business model. Now, with this research, it seems that when they get more privacy, they also get some protection against fraud.

Of course, we are not done pushing Apple or anyone else on stepping up for our privacy. As Professor Zeynep Tufekci noted in a recent NY Times column, “I was happy to see Apple switch the defaults for tracking in 2021, but I’m not happy that it was because of a decision by one powerful company—what oligopoly giveth, oligopoly can taketh away. We didn’t elect Apple’s chief executive, Tim Cook, to be the sovereign of our digital world. He could change his mind.”  

We appreciate Apple for implementing ATT. The initial research indicates that it may have a welcome additional  effect for all of us who need both privacy and security against fraud.  We’d like to see more research about this connection and, of course, more companies following Apple’s lead.  

As a side note, it is important to mention that we are concerned about researchers using data from Safegraph, a company that EFF has criticized for unethical personal data collection and its PR efforts to "research wash" its practices by making that data available for free to academics. The use of this data in several academic research projects speaks to the reach of unethical data brokers as well as to the need to rein them in, both with technical measures like ATT and with robust consumer data privacy legislation.  

However, the use of this data does not take away from the credibility of the research and its conclusions. The iOS share per ZIP code could have been determined by other legitimate sources, but that would have had no effect on the results determining the impact of ATT.  

Thanks to EFF Intern Muhammad Essa for research and key drafting help with this blog post.

Cindy Cohn

Digital Rights Updates with EFFector 35.16

2 months 1 week ago

Have no fear, it's the final EFFector of the year! Be the digital freedom expert for your family and friends during the holidays by catching up on the latest online rights issues with EFFector 35.16. This issue of our newsletter covers topics including: the surveillance you could be gifting along with smart speakers and other connected gadgets, how to use various Android safety tools to secure your kid's Android device, and a victory announcement—Montana's TikTok ban was ruled unconstitutional by a federal court.

EFFector 35.16 is out now—you can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter below:

LISTEN ON YouTube

Safe and Private for the Holidays

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

EFF Joins Forces with 20+ Organizations in the Coalition #MigrarSinVigilancia

2 months 1 week ago

Today, EFF joins more than 25 civil society organizations to launch the Coalition #MigrarSinVigilancia ("To Migrate Without Surveillance"). The Latin American coalition’s aim is to oppose arbitrary and indiscriminate surveillance affecting migrants across the region, and to push for the protection of human rights by safeguarding migrants' privacy and personal data.

On this International Migrants Day (December 18), we join forces with a key group of digital rights and frontline humanitarian organizations to coordinate actions and share resources in pursuit of this significant goal.

Governments are using technologies to monitor migrants, asylum seekers, and others moving across borders with growing frequency and intensity. This intensive surveillance is often framed within the concept of "smart borders" as a more humanitarian way to address and streamline border management, even though its implementation often negatively impacts the migrant population.

EFF has been documenting the magnitude and breadth of such surveillance apparatus, as well as how it grows and impacts communities at the border. We have fought in courts against the arbitrariness of border searches in the U.S. and called out the inherent dangers of amassing migrants' genetic data in law enforcement databases.  

The coalition we launch today stresses that the lack of transparency in surveillance practices and regional government collaboration violates human rights. This opacity is intertwined with the absence of effective safeguards for migrants to know and decide crucial aspects of how authorities collect and process their data.

The Coalition calls on all states in the Americas, as well as companies and organizations providing them with technologies and services for cross-border monitoring, to take several actions:

  1. Safeguard the human rights of migrants, including but not limited to the rights to migrate and seek asylum, the right to not be separated from their families, due process of law, and consent, by protecting their personal data.
  2. Recognize the mental, emotional, and legal impact that surveillance has on migrants and other people on the move.
  3. Ensure human rights safeguards for monitoring and supervising technologies for migration control.
  4. Conduct a human rights impact assessment of already implemented technologies for migration control.
  5. Refrain from using or prohibit technologies for migration control that present inherent or serious human rights harms.
  6. Strengthen efforts to achieve effective remedies for abuses, accountability, and transparency by authorities and the private sector.

We invite you to learn more about the Coalition #MigrarSinVigilancia and the work of the organizations involved, and to stand with us to safeguard data privacy rights of migrants and asylum seekers—rights that are crucial for their ability to safely build new futures.

Veridiana Alimonti

The Surveillance Showdown That Fizzled

2 months 1 week ago

Like the weather rapidly getting colder outside, the fight over renewing, reforming, or sunsetting the mass surveillance power of Section 702 has been put on ice until spring.

In the last week of legislative business before the winter break, Congress was scheduled to consider two very different proposals to reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA): H.R. 6570, the Protect Liberty and End Warrantless Surveillance Act, from the House Judiciary Committee (HJC); and H.R. 6611, the FISA Reform and Reauthorization Act of 2023, from the House Permanent Select Committee on Intelligence (HPSCI). However, as the conversation about how to consider these proposals grew heated, both bills were pulled from the legislative calendar without being rescheduled.

TAKE ACTION

Tell Congress: Defeat HPSCI’s Horrific Surveillance Bill

The legislative authority for Section 702 was set to expire on December 31, 2023, though language was added to the National Defense Authorization Act (NDAA) to extend it through April 2024. It is disappointing that, despite all of the reported abuses of the Section 702 program, Congress chose to pass a reauthorization instead of making the necessary effort to include critical reforms. As advocates for reform, including EFF, said in a letter to Congress in late November, bypassing the discussion around reform by slipping an extension of the law into the defense authorization bill during conference demonstrates a blatant disregard for the civil liberties and civil rights of the American people.

While it is frustrating that Congress ignored the urgent need for significant Section 702 reform before the December 31 deadline, reform advocates should not lose hope. The current stalemate also means that the pro-surveillance hardliners of the intelligence community were not able to jam through their expansion of the program based on the same old scare tactics they’ve used for years. Fortunately, it seems that many members of the House and Senate have heard our message. While renewing any surveillance authority remains a complicated issue, this choice is clear: we continue to urge all Members to oppose the Intelligence Committee’s bill, H.R. 6611, the FISA Reform and Reauthorization Act of 2023.

Additionally, in the moments leading up to a possible floor vote, many House members (and some Senators) have made public statements calling for reform. Notably, that list includes the current House Speaker, Mike Johnson, who told Fox News that Section 702 “... was also abused by the FBI, by our own government, over almost 300,000 times between 2020 and 2021, and so the civil liberties of Americans have been jeopardized by that. It must be reformed."

So, while we are disappointed that Congress chose to leave for the holidays without enacting any of these absolutely necessary reforms, we are already making plans to continue this fight in the New Year. We are also grateful for the calls and emails from our members and supporters; these have absolutely made an impact and will be more important than ever in the fight to come. 

TAKE ACTION

Tell Congress: Defeat HPSCI’s Horrific Surveillance Bill

India McKinney

Internet Archive Files Appeal Brief Defending Libraries and Digital Lending From Big Publishers’ Legal Attack

2 months 1 week ago
The Archive’s Controlled Digital Lending Program is a Lawful Fair Use that Preserves Traditional Library Lending in the Digital World

SAN FRANCISCO—A cartel of major publishing companies must not be allowed to criminalize fair-use library lending, the Internet Archive argued in an appellate brief filed today. 

The Internet Archive is a San Francisco-based 501(c)(3) non-profit library that preserves and provides access to cultural artifacts of all kinds in electronic form. The brief filed in the U.S. Court of Appeals for the Second Circuit by the Electronic Frontier Foundation (EFF) and Morrison Foerster on the Archive’s behalf explains that the Archive’s Controlled Digital Lending (CDL) program is a lawful fair use that preserves traditional library lending in the digital world.

"Why should everyone care about this lawsuit? Because it is about preserving the integrity of our published record, where the great books of our past meet the demands of our digital future,” said Brewster Kahle, founder and digital librarian of the Internet Archive. “This is not merely an individual struggle; it is a collective endeavor for society and democracy struggling with our digital transition. We need secure access to the historical record. We need every tool that libraries have given us over the centuries to combat the manipulation and misinformation that has now become even easier.”

“This appeal underscores the role of libraries in supporting universal access to information—a right that transcends geographic location, socioeconomic status, disability, or any other barriers,” Kahle added. “Our digital lending program is not just about lending responsibly; it’s about strengthening democracy by creating informed global citizens."

Through CDL, the Internet Archive and other libraries make and lend out digital scans of print books in their collections, subject to strict technical controls. Each book loaned via CDL has already been bought and paid for, so authors and publishers have already been fully compensated for those books; in fact, concrete evidence shows that the Archive’s digital lending—which is limited to the Archive’s members—does not and will not harm the market for books. 

Nonetheless, publishers Hachette, HarperCollins, Wiley, and Penguin Random House sued the Archive in 2020, claiming incorrectly that CDL violates their copyrights. A judge of the U.S. District Court for the Southern District of New York in March granted the plaintiffs’ motion for summary judgment, leading to this appeal. 

The district court’s “rejection of IA’s fair use defense was wrongly premised on the supposition that controlled digital lending is equivalent to indiscriminately posting scanned books online,” the brief argues. “That error caused it to misapply each of the fair use factors, give improper weight to speculative claims of harm, and discount the tremendous public benefits controlled digital lending offers. Given those benefits and the lack of harm to rightsholders, allowing IA’s use would promote the creation and sharing of knowledge—core copyright purposes—far better than forbidding it.”

The brief explains how the Archive’s digital library has facilitated education, research, and scholarship in numerous ways. In 2019, for example, the Archive received federal funding to digitize and lend books about internment of Japanese Americans during World War II. In 2022, volunteer librarians curated a collection of books that have been banned by many school districts but are available through the Archive’s library. Teachers have used the Archive to provide students access to books for research that were not available locally. And the Archive’s digital library has made online resources like Wikipedia more reliable by allowing articles to link directly to the particular page in a book that supports an asserted fact and by allowing readers to immediately borrow the book to verify it. 

For the brief: https://www.eff.org/document/internet-archive-opening-brief-us-court-appeals-second-circuit

For more on the case: https://www.eff.org/cases/hachette-v-internet-archive 

For the Internet Archive's blog post: https://blog.archive.org/2023/12/15/internet-archive-defends-digital-rights-for-libraries/

Contact: Corynne McSherry, Legal Director, corynne@eff.org
Josh Richman

Is This the End of Geofence Warrants?

2 months 2 weeks ago

Google announced this week that it will be making several important changes to the way it handles users’ “Location History” data. These changes would appear to make it much more difficult—if not impossible—for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital communications. The only “evidence” supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation—and could turn each of those people into a suspect.

Geofence warrants have been possible because Google collects and stores specific user location data (which Google calls “Location History” data) together in a massive database called “Sensorvault.” Google reported several years ago that geofence warrants make up 25% of all warrants it receives each year.

Google’s announcement outlined three changes to how it will treat Location History data. First, going forward, this data will be stored, by default, on a user’s device, instead of with Google in the cloud. Second, it will be set by default to delete after three months; currently Google stores the data for at least 18 months. Finally, if users choose to back up their data to the cloud, Google will “automatically encrypt your backed-up data so no one can read it, including Google.”

All of this is fantastic news for users, and we are cautiously optimistic that this will effectively mean the end of geofence warrants. These warrants are dangerous. They threaten privacy and liberty because they not only provide police with sensitive data on individuals, they could turn innocent people into suspects. Further, they have been used during political protests and threaten free speech and our ability to speak anonymously, without fear of government repercussions. For these reasons, EFF has repeatedly challenged geofence warrants in criminal cases and worked with other groups (including tech companies) to push for legislative bans on their use.

However, we are not yet prepared to declare total victory. Google’s collection of users’ location data isn’t limited to just the “Location History” data searched in response to geofence warrants; Google collects additional location information as well. It remains to be seen whether law enforcement will find a way to access these other stores of location data on a mass basis in the future. Also, none of Google’s changes will prevent law enforcement from issuing targeted warrants for individual users’ location data—outside of Location History—if police have probable cause to support such a search.

But for now, at least, we’ll take this as a win. It’s very welcome news for technology users as we usher in the end of 2023.

Jennifer Lynch

Speaking Freely: Dr. Carolina Are

2 months 2 weeks ago

Dr. Carolina Are is an Innovation Fellow at Northumbria University Centre for Digital Citizens. Her research primarily focuses on the intersection between online abuse and censorship. Her current research project investigates Instagram and TikTok’s approach to malicious flagging against ‘grey area’ content, or content that toes the line of compliance with social media’s community guidelines.

She is also a blogger and creator herself, as well as a writer, pole dance instructor and award-winning activist. Dr. Are sat down for an interview with EFF’s Jillian York to discuss the impact of platform censorship on sex workers and activist communities, the need for systemic change around content moderation, and how there’s hope to be found in the younger generations. 

Jillian York: Can you introduce yourself and tell us a bit about your work? Specifically, can you give us an idea of how you became a free speech advocate?

Dr. Carolina Are: Sure, I’m Carolina Are, I’m an Innovation Fellow at Northumbria University Centre for Digital Citizens and I mainly focus on deplatforming, online censorship, and platform governance of speech but also bodies, nudity, and sex work.

I came to it from a pretty personal and selfish perspective, in the sense that I was doing my PhD on the moderation of online abuse and conspiracy theories while also doing pole dance as a hobby. At the time my social media accounts were separate because I still didn’t know how I wanted to present to academia. So I had a pole dance account on Instagram and an academic account on Twitter. This was around the time when FOSTA/SESTA was approved in the US. In 2019, Instagram started heavily shadow banning – algorithmically demoting – pole dancers’ content. And I was in a really unique position to be observing the moderation of stuff that wasn’t actually getting moderated and should have been getting moderated – it was horrible, it was abusive content – while my videos were getting heavily censored and were not reaching viewers anymore. So I started getting more interested in the moderation of nudity, the political circumstances that surrounded the step of censorship. And I started creating a lot of activism campaigns about it, including one that resulted in Instagram directly apologizing to me and to pole dancers about the shadow banning of pole dance.

So, from there, I kind of shifted my public-facing research to the moderation of nudity and sexual activity and sexuality and just sexual solicitation in general. And I then unified my online persona to reflect both my experiences and my expertise. I guess that’s how I came to it. It started with me, and with what happened to me and the censorship my accounts faced. And because of that, I became a lot more aware of censorship of sex work, of people that have it a lot worse than me, that introduced me to a lot of fantastic activist networks that were protesting that and massively changed the direction of my research.

York: How do you personally define deplatforming and what sort of impact does it have on pole dancers, on sex workers, on all of the different communities that you work with? 

What I would define as deplatforming is the removal of content or a full account from a social media platform or an internet platform. This means that you lose access to the account, but you also lose access to any communications that you may have had through that account – if it’s an app, for instance. And you also lose access to your content on that account. So, all of that has massive impacts on people that work and communicate and organize through social media or through their platforms.

Let’s say, if you’re an activist and your main activist network is through platforms –maybe because people have a public-facing persona that is anonymous and they don’t want to give you their data, their email, their phone number– you lose access to them if you are deplatformed. Similarly, if you are a small business or a content creator, and you promote yourself largely through your social media accounts, then you lose your outlet of promotion. You lose your network of customers. You lose everything that helps you make money. And, on top of that, for a lot of people, as a few of the papers I’m currently working on are showing, of course platforms are an office – like a space where they do business – but at the same time they have this hybrid emotional/community role with the added business on top.

So that means that yes, you lose access to your business, you lose access to your activist network, to educational opportunities, to learning opportunities, to organizing opportunities – but you also lose access to your memories. You lose access to your friends. So I’m one of those people that become intermediaries between platforms like Meta and people that have been deleted because of my research. I sometimes put them in touch with the platform in order for them to restore mistakenly deleted accounts. And just recently I helped someone who – without my asking, because I do this for free – ended up PayPal-ing me a lot of money because I was the only person that helped while the platform’s infrastructure and appeals were ineffective. And what she said was, “Instagram was the only platform where I had pictures with my dead stepmother, and I didn’t have access to them anymore and I would have lost them if you hadn’t helped me.”

So there is a whole emotional and financial impact that this has on people. Because, obviously, you’re very stressed out and worried and terrified if you lose your main source of income or of organizing or of education and emotional support. But you also lose access to your memories and your loved ones. And I think this is a direct consequence of how platforms have marketed themselves to us. They’ve marketed themselves as the one stop shop for community or to become a solo entrepreneur. But then they’re like, oh only for those kinds of creators, not for the creators that we don’t care about or we don’t like. Not for the accounts we don’t want to promote.

York: You mentioned earlier that some of your earlier work looked at content that should be taken down. I don’t think either of us are free speech absolutists, but I do struggle with the question of authority and who gets to decide what should be removed, or deplatformed—especially in an environment where we’re seeing lots of censorial bills worldwide aimed at protecting children from some of the same content that we’re concerned about being censored.  How do you see that line, and who should decide?

So that is an excellent question, and it’s very difficult to find one straight answer because I think the line moves for everyone and for people’s specific experiences. I think what I’m referring to is something that is already covered by, for instance, discrimination law. So outright accusing people of a crime that it’s been proved offline that they haven’t committed. When that has been proven that that is not the case and someone goes and says that online to insult or harass or offend someone – and that becomes a sort of mob violence – then I think that’s when something should be taken down. Because there’s direct offline harm to specific people that are being targeted en masse. It’s difficult to find the line, though, because that could happen even like, let’s say for something like #MeToo, when things ended up being true about certain people. So it’s very difficult to find the line.

I think that platforms’ approach to algorithmic moderation – blanket deplatforming for things – isn’t really working when nuance is required. The case that I was observing was very specific because it started with a conspiracy theory about a criminal case, and then people that believed or didn’t believe in that conspiracy theory started insulting each other and everybody that’s involved with the case. So I think conspiracy theories are another interesting scenario because you’re not directly harassing anyone if you say, “It’s better to inject bleach into your veins instead of getting vaccinated.” But at the same time, sharing that information can be really harmful to public beliefs about stuff. If we’re thinking about what’s happening with measles, the fact that certain illnesses are coming back because people are so against vaccines from what they’ve read online. So I think there’s quite a system offline already for information that is untrue, for information that is directly targeting specific groups and specific people in a certain manner. So I think what’s happening a lot with what I’m seeing with online legislation is that it’s becoming very broad, and platforms apply it in a really broad way because they just want to cover their backs and don’t want to be seen to be promoting anything that might be remotely harmful. But I think what’s not happening is – or what’s happening in a less obvious fashion – is looking at what we already have and thinking how can we apply it online in a way that doesn’t wreck this infrastructure that we have. And I think that’s very apparent with the case of conspiracy theories and online abuse.

But if we move back to the people we were discussing– sex workers, people that post online nudity, and porn and stuff like that. Porn has already been viewed as free speech in trials from the 1950s, so why are we going back to that? Instead of investing in that and forcing platforms to over-comply, why don’t we invest in better sex education offline so that people who happen to access porn online don’t think that that is the way sex is? Or if there’s actual abuse being committed against people, why do we not regulate with laws that are about abuse and not about nudity and sexual activity? Because being naked is not the same as being trafficked. So, yeah, I think the debate really lacks nuance and lacks ad hoc application because platforms are more interested in blanket approaches because they’re easier for them to apply.

York: You work directly with companies, with platforms that individuals and communities rely on heavily. What strategies have you found to be effective in convincing platforms of the importance of preserving content or ensuring that people have the right to appeal, etc?

It’s an interesting one because I personally have found very few things to be effective. And even when they are apparently effective, there’s a downside. In my experience, for instance, because I have a past in social media marketing, public relations and communications, I always go the PR (public relations) route. Which is making platforms feel bad for something. Or, if they don’t feel bad personally, I try to make them look bad for what they’re doing, image-wise. Because at the moment their responses to everything haven’t been related to them wanting to do good, but they’ve been related to them feeling public and political pressure for things that they may have gotten wrong. So if you point out hypocrisies in their moderation, if you point out that they’ve… misbehaved, then they do tend to apologize.

The issue is that the apologies are quite empty– it’s PR spiel. I think sometimes they’ve been helpful in the sense that for quite a while platforms denied that shadow banning was ever a thing. And the fact that I was able to make them apologize for it by showing proof, even if it didn’t really change the outcome of shadow banning much – although now Meta does notify creators about shadowbanning, which was not something that was happening before– but it really showed people that they weren’t crazy. The gaslighting of users is quite an issue with platforms because they will deny that something is happening until it is too bad for them to deny it. And I think the PR route can be quite helpful to at least acknowledge that something is going on. Because if something is not even acknowledged by platforms, you’ve got very little to stand on when you question it.

The issue is that the fact that platforms respond in a PR fashion shows a lack of care on their part, and also sometimes leads to changes which sound good on paper or look good on paper, but when you actually look at their implication it becomes a bit ridiculous. For instance, Nyome Nicholas-Williams, who is an incredible activist and plus-size Black model – so someone who is terribly affected by censorship because she’s part of a series of demographics that platforms tend to pick up more when it comes to moderation. She fought platforms so hard for the censorship of her content that she got them to introduce this policy about breast-cupping versus breast-grabbing. The issue is that now there is a written policy where you are allowed to cup your breasts, but if you squeeze them too hard you get censored. So this leads to this really weird scenario where an Internet company is creating norms of how acceptable it is to grab your breasts, or which way you should be grabbing your breasts. Which becomes a bit ridiculous because they have no place in saying that, and they have no expertise in saying that.

So I think sometimes it’s good to just point out that hypocrisy over and over again, to at least acknowledge that something is going on. But I think that for real systemic change, governments need to step in to treat online freedom of speech as real freedom of speech and create checks and balances for platforms so that they can be essentially – if not fined – at least held accountable for stuff they censor in the same way that they are held accountable for things like promoting harmful things.

York: This is a moment in time where there’s a lot of really horrible things happening online. Is there anything that you’re particularly hopeful about right now? 

I think something that I’m very, very hopeful about is that the kids are alright. I think something that’s quite prominent in the moderation of nudity discourse is “won’t somebody think of the children? What happens if a teenager sees a… something absolutely ridiculous.” But every time that I speak with younger people, whether that’s through public engagement stuff that I do like a public lecture or sometimes I teach seminars or sometimes I communicate with them online – they seem incredibly proficient at finding out when an image is doctored, or when an image is fake, or even when a behavior by some online people is not okay. They’re incredibly clued up about consent, they know that porn is not real sex. So I think we’re not giving kids enough credit about what they already know. Of course, it’s bleak sometimes to think these kids are growing up with quantifiable notions of popularity and that they can see a lot of horrible stuff online. But they also seem very aware of consent, of bodily autonomy and of what freedoms people should have with their online content – every time I teach undergrads and younger kids, they seem to be very clued up on pleasure and sex ed. So that makes me really hopeful. Because while I think a lot of campaigners, definitely the American Evangelical far-right and also the far-right that we have in Europe, would see kids as these completely innocent, angelic people that have no say in what happens to them. I think actually quite a lot of them do know, and it’s really nice to see. It makes me really hopeful.

York: I love that. The kids are alright indeed. I’m also very hopeful in that sense. Last question– who is your free speech hero? 

There are so many it is really difficult to find just one. But I’d say, given the time that we’re in, I would say that anyone still doing journalism and education in Gaza… from me, from the outside world, just, hats off. I think they’re fighting for their lives while they’re also trying to educate us – from the extremely privileged position we’re in – about what’s going on. And I think that’s just incredible given what’s happening. So I think at the moment I would say them. 

Then in my area of research in general, there’s a lot of fantastic research collectives and sex work collectives that have definitely changed everything I know. So I’m talking about Hacking//Hustling, Dr. Zahra Stardust in Australia. But also in the UK we have some fantastic sex working unions, like the Sex Worker Union, and the Ethical Strippers who are doing incredible education through platforms despite being censored all the time. So, yeah, anybody that advocates for free speech from the position of not being heard by the mainstream I think does a great job. And I say that, of course, when it comes to marginalized communities, not white men claiming that they are being censored from the height of their newspaper columns.

Jillian C. York

Without Interoperability, Apple Customers Will Never Be Secure

2 months 2 weeks ago

Every internet user should have the ability to privately communicate with the people that matter to them, in a secure fashion, using the tools and protocols of their choosing.

Apple’s iMessage offers end-to-end encrypted messaging for its customers, but only if those customers want to talk to someone who also has an Apple product. When an Apple customer tries to message an Android user, the data is sent over SMS, a protocol that debuted while Wayne’s World was still in its first theatrical run. SMS is wildly insecure, but when Apple customers ask the company how to protect themselves while exchanging messages with Android users, Apple’s answer is “buy them iPhones.”

That’s an obviously false binary. Computers are all roughly equivalent, so there’s no reason that an Android device couldn’t run an app that could securely send and receive iMessage data. If Apple won’t make that app, then someone else could. 

That’s exactly what Apple did, back when Microsoft refused to make a high-quality MacOS version of Microsoft Office: Apple reverse-engineered Office and released iWork, whose Pages, Numbers and Keynote could perfectly read and write Microsoft’s Word, Excel and Powerpoint files.

Back in September, a 16-year-old high school student reverse-engineered iMessage and released Pypush, a free software library that reimplements iMessage so that anyone can send and receive secure iMessage data, maintaining end-to-end encryption, without the need for an Apple ID.

Last week, Beeper, a multiprotocol messaging company, released Beeper Mini, an alternative iMessage app reportedly based on the Pypush code that runs on Android, giving Android users the “blue bubble” that allows Apple customers to communicate securely with them. Beeper Mini stands out among earlier attempts at this by allowing users’ devices to directly communicate with Apple’s servers, rather than breaking end-to-end encryption by having messages decrypted and re-encrypted by servers in a data-center.

Beeper Mini is an example of “adversarial interoperability.” That’s when you make something new work with an existing product, without permission from the product’s creator.

(“Adversarial interoperability” is quite a mouthful, so we came up with “competitive compatibility” or “comcom” as an alternative term.)

Comcom is how we get third-party inkjet ink that undercuts HP’s $10,000/gallon cartridges, and it’s how we get independent repair from technicians who perform feats the manufacturer calls “impossible.” Comcom is where iMessage itself comes from: it started life as iChat, with support for existing protocols like XMPP.

Beeper Mini makes life more secure for Apple users in two ways: first, it protects the security of the messages they send to people who don’t use Apple devices; and second, it makes it easier for Apple users to switch to a rival platform if Apple has a change of management direction that deprioritizes their privacy.

Apple doesn’t agree. It blocked Beeper Mini users just days after the app’s release. Apple told The Verge’s David Pierce that it had blocked Beeper Mini because the app “posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks.”

If Beeper Mini indeed posed those risks, then Apple has a right to take action on behalf of its users. The only reason to care about any of this is if it makes users more secure, not because it serves the commercial interests of either Apple or Beeper. 

But Apple’s account of Beeper Mini’s threats does not square with the technical information Beeper has made available, and Apple didn’t provide any specifics to bolster its claims. Large tech firms that are challenged by interoperators often smear those products as privacy or security risks, even when the claims are utterly baseless.

The gold standard for security claims is technical proof, not vague accusations. EFF hasn't audited Beeper Mini and we’d welcome technical details from Apple about these claimed security issues. While Beeper hasn’t published the source code for Beeper Mini, they have offered to submit it for auditing by a third party.

Beeper Mini is back. The company released an update on Monday that restored its functionality. If Beeper Mini does turn out to have security defects, Apple should protect its customers by making it easier for them to connect securely with Android users.

One thing that won’t improve the security of Apple users is for Apple to devote its engineering resources to an arms race with Beeper and other interoperators. In a climate of stepped-up antitrust enforcement, and as regulators around the world are starting to force interoperability on tech giants, pointing at interoperable products and shouting “Insecure! Insecure!” no longer cuts it.

Apple needs to acknowledge that it isn’t the only entity that can protect Apple customers.

Cory Doctorow

Spritely and Veilid: Exciting Projects Building the Peer-to-Peer Web

2 months 2 weeks ago

While there is a surge in federated social media sites, like Bluesky and Mastodon, some technologists are hoping to take things further than this model of decentralization with fully peer-to-peer applications. Two leading projects, Spritely and Veilid, hint at what this could look like.

There are many technologies used behind the scenes to create decentralized tools and platforms. There has been a lot of attention lately, for example, around interoperable and federated social media sites using ActivityPub, such as Mastodon, as well as platforms like Bluesky using a similar protocol. These types of services require most individuals to sign up with an intermediary service host in order to participate, but they are decentralized insofar as any user has a choice of intermediary, and can run one of those services themselves while participating in the larger network.

Another model for decentralized communications does away with the intermediary services altogether in favor of a directly peer-to-peer model. This model is technically much more challenging to implement, particularly in cases where privacy and security are crucial, but it does result in a system that gives individuals even more control over their data and their online experience. Fortunately, there are a few projects being developed that are aiming to make purely peer-to-peer applications achievable and easy for developers to create. Two leading projects in this effort are Spritely and Veilid.

Spritely

Spritely is worth keeping an eye on. Developed by the Institute of the same name, Spritely is a framework for building distributed apps that don’t even have to know that they’re distributed. The project is spearheaded by Christine Lemmer-Webber, who was one of the co-authors of the ActivityPub spec that drives the fediverse. She is taking the lessons learned from that work, combining them with security- and privacy-minded object capability models, and mixing it all up into a model for peer-to-peer computation that could pave the way for a generation of new decentralized tools.

Spritely is so promising because it is tackling one of the hard questions of decentralized technology: how do we protect privacy and ensure security in a system where data is passing directly between people on the network? Our best practices in this area have been shaped by many years of centralized services, and tackling the challenges of a new paradigm will be important.

One of the interesting techniques that Spritely is bringing to bear on the problem is the concept of object capabilities. OCap is a framework for software design that only gives processes the ability to view and manipulate data that they’ve been given access to. That sounds like common sense, but it is in contrast to the way that most of our computers work, in which the game Minesweeper (just to pick one example) has full access to your entire home directory once you start it up. That isn’t to say that it or any other program is actually reading all your documents, but it has the ability to, which means that a security flaw in that program could exploit that ability.
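To make the contrast concrete, here is a minimal, hypothetical Python sketch of the difference between ambient authority and an object-capability style. It is not Spritely's actual API, just an illustration of the idea that a function should only be able to touch the specific handle it has been handed:

# Ambient authority: this function can open anything the process can reach,
# so a bug here could just as easily read or overwrite unrelated files.
def save_score_ambient(score):
    with open("scores.txt", "a") as f:
        f.write(f"{score}\n")

# Object-capability style: the caller hands over a writable file object.
# The function holds no other authority and cannot reach the rest of the filesystem.
def save_score_ocap(score, score_log):
    score_log.write(f"{score}\n")

if __name__ == "__main__":
    with open("scores.txt", "a") as score_log:
        save_score_ocap(42, score_log)  # the capability is the open file handle itself

In a capability system, that discipline is enforced by the platform rather than by programmer convention, which is what makes it so useful when untrusted peers are exchanging code and data.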

The Spritely Institute is combining OCap with a message passing protocol that doesn’t care if the other party it's communicating with is on the same device, on another device in the same room, or on the other side of the world. And to top things off they’re working on the protocol in the open, with a handful of other dedicated organizations. We’re looking forward to seeing what the Spritely team creates and what their work enables in the future.

Veilid

Another leading project in the push for full p2p apps was just announced a few months ago. The Veilid project was released at DEF CON 31 in August and has a number of promising features that could lead to it being a fundamental tool in future decentralized systems. Described as a cross between Tor and the InterPlanetary File System (IPFS), Veilid is a framework and protocol that offers two complementary tools. The first is private routing, which, much like Tor, can construct an encrypted private tunnel over the public internet, allowing two devices to communicate with each other without anyone else on the network knowing who is talking to whom.

The second tool that Veilid offers is a Distributed Hash Table (DHT), which lets anyone look up a bit of data associated with a specific key, wherever that data lives on the network. DHTs go back to BitTorrent, where they act as a distributed tracker directing users to other nodes in the network that have the chunk of a file they need, and they form the backbone of IPFS’s system. Veilid’s DHT is particularly intriguing because it is “multi-writer.” In most DHTs, only one party can set the value stored at a particular key, but in Veilid the creator of a DHT key can choose to share the writing capability with others, creating a system where nodes can communicate by leaving notes for each other in the DHT. Veilid has created an early alpha of a chat program, VeilidChat, based on exactly this feature.
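To illustrate the multi-writer idea, here is a toy, in-memory Python sketch. It is not Veilid's actual API or wire protocol, and it skips networking and cryptography entirely; it only shows how a record's creator might grant other parties the ability to write to the same key:

# Toy multi-writer DHT record: purely illustrative, no networking or signatures.
class ToyDHT:
    def __init__(self):
        self.records = {}  # key -> {"writers": set of writer ids, "values": per-writer values}

    def create(self, key, owner):
        self.records[key] = {"writers": {owner}, "values": {}}

    def share_write_access(self, key, owner, new_writer):
        record = self.records[key]
        if owner in record["writers"]:
            record["writers"].add(new_writer)

    def put(self, key, writer, value):
        record = self.records[key]
        if writer not in record["writers"]:
            raise PermissionError("writer not authorized for this key")
        record["values"][writer] = value

    def get(self, key):
        return self.records[key]["values"]

dht = ToyDHT()
dht.create("chat:alice-and-bob", owner="alice")
dht.share_write_access("chat:alice-and-bob", owner="alice", new_writer="bob")
dht.put("chat:alice-and-bob", "alice", "hi bob")
dht.put("chat:alice-and-bob", "bob", "hi alice")
print(dht.get("chat:alice-and-bob"))

A real network like Veilid enforces write access with cryptographic keys and replicates records across many nodes rather than trusting caller-supplied names, but the communication pattern VeilidChat builds on is essentially this one.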

Both of these features are even more valuable because Veilid is a very mobile-friendly framework. The library is available for a number of platforms and programming languages, including the cross-platform Flutter framework, which means it is easy to build iOS and Android apps that use it. Mobile has been a difficult platform to build peer-to-peer apps on for a variety of reasons, so having a turn-key solution in the form of Veilid could be a game changer for decentralization in the next couple of years. We’re excited to see what gets built on top of it.

Public interest in decentralized tools and services is growing, as people realize that there are downsides to centralized control over the platforms that connect us all. The past year has seen interest in networks like the fediverse and Bluesky explode and there’s no reason to expect that to change. Projects like Spritely and Veilid are pushing the boundaries of how we might build apps and services in the future. The things that they are making possible may well form the foundation of social communication on the internet in the next decade, making our lives online more free, secure, and resilient.

Ross Schulman

No Robots(.txt): How to Ask ChatGPT and Google Bard to Not Use Your Website for Training

2 months 2 weeks ago

Both OpenAI and Google have released guidance for website owners who do not want the two companies using the content of their sites to train the companies' large language models (LLMs). We've long been supporters of the right to scrape websites—the process of using a computer to load and read pages of a website for later analysis—as a tool for research, journalism, and archiving. We believe this practice is still lawful when collecting training data for generative AI, but the question of whether something should be illegal is different from whether it may be considered rude, gauche, or unpleasant. As norms continue to develop around what kinds of scraping and what uses of scraped data are considered acceptable, it is useful to have a tool for website operators to automatically signal their preference to crawlers. Asking OpenAI and Google (and anyone else who chooses to honor the preference) to not include scrapes of your site in their models is an easy process as long as you can access your site's file structure.

We've talked before about how these models use art for training, and the general idea and process is the same for text. Researchers have long used collections of data scraped from the internet for studies of censorship, malware, sociology, language, and other applications, including generative AI. Today, both academic and for-profit researchers collect training data for AI using bots that go out searching all over the web and “scrape up” or store the content of each site they come across. This might be used to create purely text-based tools, or a system might collect images that may be associated with certain text and try to glean connections between the words and the images during training. The end result, at least currently, is the chatbots we've seen in the form of Google Bard and ChatGPT.


If you do not want your website's content used for this training, you can ask the bots deployed by Google and OpenAI to skip over your site. Keep in mind that this only applies to future scraping. If Google or OpenAI already have data from your site, they will not remove it. It also doesn't stop the countless other companies out there training their own LLMs, and doesn't affect anything you've posted elsewhere, like on social networks or forums. It also wouldn't stop models that are trained on large data sets of scraped websites that aren't affiliated with a specific company. For example, OpenAI's GPT-3 and Meta's LLaMA were both trained using data mostly collected from Common Crawl, an open source archive of large portions of the internet that is routinely used for important research. You can block Common Crawl, but doing so blocks the web crawler from using your data in all its data sets, many of which have nothing to do with AI.
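For reference, Common Crawl's crawler identifies itself with the user agent CCBot, so if you do decide to block it entirely (with the caveat above that this removes your site from all of Common Crawl's data sets, not just the AI-related ones), the robots.txt entry looks like this:

User-agent: CCBot
Disallow: /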

There's no technical requirement that a bot obey your requests. Currently only Google and OpenAI have announced that this is the way to opt out, so other AI companies may not care about it at all, or may add their own directions for opting out. These flags also don't block other types of scraping used for research or other purposes, so if you're generally in favor of scraping but uneasy with the use of your website content in a corporation's AI training set, this is one step you can take.

Before we get to the how, we need to explain what exactly you'll be editing to do this.

What's a Robots.txt?

In order to ask these companies not to scrape your site, you need to edit (or create) a file located on your website called "robots.txt." A robots.txt file is a set of instructions for bots and web crawlers. Until now, it has mostly been used to provide useful information to search engines as their bots crawl the web. If website owners want to ask a specific search engine or other bot to not scan their site, they can enter that in their robots.txt file. Bots can always choose to ignore this, but many crawling services respect the request.

This might all sound rather technical, but it's really nothing more than a small text file located in the root folder of your site, like "https://www.example.com/robots.txt." Anyone can see this file on any website. For example, here's The New York Times' robots.txt, which currently blocks both ChatGPT and Bard. 

If you run your own website, you should have some way to access the file structure of that site, either through your hosting provider's web portal or FTP. You may need to comb through your provider's documentation for help figuring out how to access this folder. In most cases, your site will already have a robots.txt created, even if it's blank, but if you do need to create a file, you can do so with any plain text editor. Google has guidance for doing so here.


What to Include In Your Robots.txt to Block ChatGPT and Google Bard

With all that out of the way, here's what to include in your site's robots.txt file if you do not want ChatGPT and Google to use the contents of your site to train their generative AI models. If you want to cover the entirety of your site, add these lines to your robots.txt file:

ChatGPT

User-agent: GPTBot
Disallow: /

Google Bard

User-agent: Google-Extended
Disallow: /

You can also narrow this down to block access to only certain folders on your site. For example, maybe you don't mind if most of the data on your site is used for training, but you have a blog that you use as a journal. You can opt out specific folders. For example, if the blog is located at yoursite.com/blog, you'd use this:

ChatGPT

User-agent: GPTBot
Disallow: /blog

Google Bard

User-agent: Google-Extended
Disallow: /blog
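Once you've saved the file, one way to sanity-check it is with Python's built-in robots.txt parser. The snippet below is just a hypothetical example (swap example.com for your own domain and /blog/ for whichever path you blocked); it fetches your live robots.txt and reports whether the two crawlers are still permitted:

import urllib.robotparser

# Point this at your own site's robots.txt; example.com is a placeholder.
parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

for agent in ("GPTBot", "Google-Extended"):
    allowed = parser.can_fetch(agent, "https://www.example.com/blog/")
    print(f"{agent} allowed on /blog/: {allowed}")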

As mentioned above, we at EFF will not be using these flags because we believe scraping is a powerful tool for research and access to information; we want the information we’re providing to spread far and wide and to be represented in the outputs and answers provided by LLMs. Of course, individual website owners have different views about their blogs, portfolios, or whatever else they use their sites for. We're in favor of means for people to express their preferences, and it would ease many minds if other companies with similar AI products, like Anthropic, Amazon, and countless others, announced that they'd respect similar requests.

Thorin Klosowski

The House Intelligence Committee's Surveillance 'Reform' Bill is a Farce

2 months 2 weeks ago

Earlier this week, the House Committee on the Judiciary (HJC) and the House Permanent Select Committee on Intelligence (HPSCI) each marked up a bill (H.R. 6570, the Protect Liberty and End Warrantless Surveillance Act, in HJC, and H.R. 6611, the FISA Reform and Reauthorization Act of 2023, in HPSCI), both of which would reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA)—but in very different ways. Both bills head to the House floor next week under a procedural rule called “Queen of the Hill,” where the bill with the most votes gets sent to the Senate for consideration.

While renewing any surveillance authority remains a complicated issue, this choice is clear: we urge all Members to vote NO on the Intelligence Committee’s bill, H.R. 6611, the FISA Reform and Reauthorization Act of 2023.

Take action

TELL congress: Defeat this bad 702 Bill

On Nov. 16, HPSCI released a report calling for reauthorization of Section 702 with essentially superficial reforms. The bill that followed, H.R. 6611, was as bad as expected. It would renew the mass surveillance authority Section 702 for another eight years. It would create new authorities that the intelligence community has sought for years, but that have been denied by the courts. And it would continue the indiscriminate collection of U.S. persons’ communications when they talk with people abroad, making those communications available to domestic law enforcement. This was not the intention of this national security program, and people on U.S. soil should not have their communications collected without a warrant because of a loophole.

As a reminder, Section 702 was designed to allow the government to warrantlessly surveil non-U.S. citizens abroad for foreign intelligence purposes. But Americans’ communications are inevitably swept up when they talk with people overseas, and increasingly it’s this U.S. side of digital conversations that domestic law enforcement agencies trawl through—all without a warrant. FBI agents have been using the Section 702 databases to conduct millions of invasive searches for Americans’ communications, including those of protesters, racial justice activists, 19,000 donors to a congressional campaign, journalists, and even members of Congress.

Additionally, the HPSCI bill authorizes the use of this unaccountable and out-of-control mass surveillance program as a new way of vetting asylum seekers by sifting through their digital communications. According to a newly released Foreign Intelligence Surveillance Court (FISC) opinion, the government has sought some version of this authority for years, but was repeatedly denied it, only receiving court approval for the first time this year. Because the court opinion is so heavily redacted, it is impossible to know the current scope of immigration- and visa-related querying, or what broader proposal the intelligence agencies originally sought. 

This new authority proposes to give immigration services the ability to audit entire communication histories before deciding whether an immigrant can enter the country. This is particularly troubling because it could cost someone entrance to the United States based on, for instance, their own or a friend’s political opinions—as happened to a Palestinian Harvard student whose social media account was reviewed when he arrived in the U.S. to start his semester.

The HPSCI bill also includes a call “to define Electronic Communication Service Provider to include equipment.” Earlier this year, the FISA Court of Review released a highly redacted opinion documenting a fight over the government's attempt to subject an unknown company to Section 702 surveillance. However, the court agreed that under the circumstances the company did not qualify as an "electronic communication service provider" under the law. Now, the HPSCI bill would expand that definition to include a much broader range of providers, including those who merely provide hardware through which people communicate on the Internet. Even without knowing the details of the secret court fight, this represents an ominous expansion of 702's scope, which the committee introduced without any explanation or debate of its necessity. 

By contrast, the House Judiciary Committee bill, H.R. 6570, the Protect Liberty and End Warrantless Surveillance Act, would actually address a major problem with Section 702 by banning warrantless backdoor searches of Section 702 databases for Americans’ communications. This bill would also prohibit law enforcement from purchasing Americans’ data that they would otherwise need a warrant to obtain, a practice that circumvents core constitutional protections. Importantly, this bill would also renew this authority for only three more years, giving Congress another opportunity to revisit how the reforms are implemented and to make further changes if the government is still abusing the program.

EFF has long fought for significant changes to Section 702. By the government’s own numbers, violations are still occurring at a rate of more than 4,000 per year. Our government, with the FBI in the lead, has come to treat Section 702—enacted by Congress for the surveillance of foreigners on foreign soil—as a domestic surveillance program of Americans. This simply cannot be allowed to continue. While we will continue to push for further reforms to Section 702, we urge all members to reject the HPSCI bill.

Hit the button below to tell your elected officials to vote against this bill:

Take action

TELL congress: Defeat this bad 702 Bill

Related Cases: Jewel v. NSA
India McKinney

In Landmark Battle Over Free Speech, EFF Urges Supreme Court to Strike Down Texas and Florida Laws that Let States Dictate What Speech Social Media Sites Must Publish

2 months 3 weeks ago
Laws Violate First Amendment Protections that Help Create Diverse Forums for Users’ Free Expression

WASHINGTON D.C.—The Electronic Frontier Foundation (EFF) and five organizations defending free speech today urged the Supreme Court to strike down laws in Florida and Texas that let the states dictate certain speech social media sites must carry, violating the sites’ First Amendment rights to curate content they publish—a protection that benefits users by creating speech forums accommodating their diverse interests, viewpoints, and beliefs.

The court’s decisions about the constitutionality of the Florida and Texas laws—the first laws to inject government mandates into social media content moderation—will have a profound impact on the future of free speech. At stake is whether Americans’ speech on social media must adhere to government rules or be free of government interference.

Social media content moderation is highly problematic, and users are rightly often frustrated by the process and concerned about private censorship. But retaliatory laws allowing the government to interject itself into the process, in any form, raise serious First Amendment and broader human rights concerns, EFF said in a brief filed with the National Coalition Against Censorship, the Woodhull Freedom Foundation, Authors Alliance, Fight for The Future, and First Amendment Coalition.

“Users are far better off when publishers make editorial decisions free from government mandates,” said EFF Civil Liberties Director David Greene. “These laws would force social media sites to publish user posts that are at best, irrelevant, and, at worst, false, abusive, or harassing.

“The Supreme Court needs to send a strong message that the government can’t force online publishers to give their favored speech special treatment,” said Greene.

Social media sites should do a better job at being transparent about content moderation and self-regulate by adhering to the Santa Clara Principles on Transparency and Accountability in Content Moderation. But the Principles are not a template for government mandates.

The Texas law broadly mandates that online publishers can’t decline to publish others’ speech based on anyone’s viewpoint expressed on or off the platform, even when that speech violates the sites' rules. Content moderation practices that can be construed as viewpoint-based, which is virtually all of them, are barred by the law. Under it, sites that bar racist material, knowing their users object to it, would be forced to carry it. Sites catering to conservatives couldn’t block posts pushing liberal agendas.

The Florida law requires that social media sites grant special treatment to electoral candidates and “journalistic enterprises” and not apply their regular editorial practices to them, even if they violate the platforms' rules. The law gives preferential treatment to political candidates, preventing publishers at any point before an election from canceling their accounts or downgrading their posts or posts about them, giving them free rein to spread misinformation or post about content outside the site’s subject matter focus. Users not running for office, meanwhile, enjoy no similar privilege.

What’s more, the Florida law requires sites to disable algorithms with respect to political candidates, so their posts appear chronologically in users’ feeds, even if a user prefers a curated feed. And, in addition to dictating what speech social media sites must publish, the laws also place limits on sites' ability to amplify content, use algorithmic ranking, and add commentary to posts.

“The First Amendment generally prohibits government restrictions on speech based on content and viewpoint and protects private publishers’ ability to select what they want to say,” said Greene. “The Supreme Court should not grant states the power to force their preferred speech on users who would choose not to see it.”

“As a coalition that represents creators, readers, and audiences who rely on a diverse, vibrant, and free social media ecosystem for art, expression, and knowledge, the National Coalition Against Censorship hopes the Court will reaffirm that government control of media platforms is inherently at odds with an open internet, free expression, and the First Amendment,” said Lee Rowland, Executive Director of National Coalition Against Censorship.

“Woodhull is proud to lend its voice in support of online freedom and against government censorship of social media platforms,” said Ricci Joy Levy, President and CEO at Woodhull Freedom Foundation. “We understand the important freedoms that are at stake in this case and implore the Court to make the correct ruling, consistent with First Amendment jurisprudence.”

"Just as the press has the First Amendment right to exercise editorial discretion, social media platforms have the right to curate or moderate content as they choose. The government has no business telling private entities what speech they may or may not host or on what terms," said David Loy, Legal Director of the First Amendment Coalition.

For the brief:
https://www.eff.org/document/eff-brief-moodyvnetchoice

Contact: David Greene, Civil Liberties Director, davidg@eff.org
Karen Gullo

Think Twice Before Giving Surveillance for the Holidays

2 months 3 weeks ago

With the holidays upon us, it's easy to default to giving the tech gifts that retailers tend to push on us this time of year: smart speakers, video doorbells, bluetooth trackers, fitness trackers, and other connected gadgets are all very popular gifts. But before you give one, think twice about what you're opting that person into.

A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair).

One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for.

For example, a smart speaker might seem like a fun stocking stuffer. But unless the giftee is tapped deeply into tech news, they likely don't know there's a chance for human review of any recordings. They also may not be aware that some of these speakers collect an enormous amount of data about how you use it, typically for advertising–though any connected device might have surprising uses to law enforcement, too.

There's also the problem of tech companies getting acquired, as we've seen recently with Tile, iRobot, and Fitbit. The new owner can suddenly change the terms of the privacy and security agreements that users made with the old business when they started using one of those products.

And let's not forget about kids. Long subjected to surveillance from elves and their managers, kids can face all sorts of surprise issues with electronic gifts, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids’ smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. 

What To Do Instead

While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories. This way, instead of just buying any old smart-device at random because it's on sale, you at least have the context of what sort of data it might collect, how the company has behaved in the past, and what sorts of potential dangers to consider. U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches.

Finally, when shopping it's worth keeping in mind two last details. First, some “smart” devices can be used without their corresponding apps, which should be viewed as a benefit, because we've seen before that app-only gadgets can be bricked by a shift in company policies. Also, remember that not everything needs to be “smart” in the first place; often these features add little to the usability of the product.

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen.

If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible. Take a few minutes after they've unboxed the item and walk through the set up process with them. Some options to look for: 

  • Enable two-factor authentication when available to help secure their new account.
  • If there are any social sharing settings—particularly popular with fitness trackers and game consoles—disable any unintended sharing that might end up on a public profile.
  • Look for any options to enable automatic updates. This is usually enabled by default these days, but it's always good to double-check.
  • If there's an app associated with the new device (and there often is), help them choose which permissions to allow, and which to deny. Keep an eye out for location data, in particular, especially if there's no logical reason for the app to need it. 
  • While you're at it, help them with other settings on their phone, and make sure to disable the phone’s advertising ID.
  • Speaking of advertising IDs, some devices have their own advertising settings, usually located somewhere like Settings > Privacy > Ad Preferences. If there's an option to disable any ad tracking, take advantage of it. While you're in the settings, you may find other device-specific privacy or data usage settings. Take that opportunity to opt out of any tracking and collection when you can. This will be very device-dependent, but it's especially worth doing on anything you know tracks loads of data, like smart TVs.
  • If you're helping set up a video or audio device, like a smart speaker or robot vacuum, poke around in the options to see if you can disable any sort of "human review" of recordings.

If, during the setup process, you notice some gaps in their security hygiene, it might also be a great opportunity to help them set up other security measures, like a password manager.

Giving the gift of electronics shouldn’t come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

Thorin Klosowski

EFF Reminds the Supreme Court That Copyright Trolls Are Still a Problem

2 months 3 weeks ago

At EFF, we spend a lot of time calling out the harm caused by copyright trolls and protecting internet users from their abuses. Copyright trolls are serial plaintiffs who use search tools to identify technical, often low-value infringements on the internet, and then seek nuisance settlements from many defendants. These trolls take advantage of some of copyright law’s worst features—especially the threat of massive, unpredictable statutory damages—to impose a troublesome tax on many uses of the internet.

On Monday, EFF continued the fight against copyright trolls by filing an amicus brief in Warner Chappell Music v. Nealy, a case pending in the U.S. Supreme Court. The case doesn’t deal with copyright trolls directly. Rather, it involves the interpretation of the statute of limitations in copyright cases. Statutes of limitations are laws that limit the time after an event within which legal proceedings may be initiated. The purpose is to encourage plaintiffs to file their claims promptly, and to avoid stale claims and unfairness to defendants when time has passed and evidence might be lost. For example, in California, the statute of limitations for a breach of contract claim is generally four years.

U.S. copyright law contains a statute of limitations of three years “after the claim accrued.” Warner Chappell Music v. Nealy deals with the question of exactly what this means. Warner Chappell Music, the defendant in the case, argued that the claim accrued when the alleged infringement occurred, giving a plaintiff three years after that to recover damages. Plaintiff Nealy argued that his claim didn’t “accrue” until he discovered the alleged infringement, or reasonably should have discovered it. This “discovery rule” would permit Nealy to recover damages for acts that occurred long ago—much longer than three years—as long as he filed suit within three years of that “discovery.”

How does all this affect copyright trolls? The “discovery rule” lets trolls reach far, far back in time to find alleged infringements (such as a photo posted on a website), and plausibly threaten their targets with years of accumulated damages. All they have to do is argue that they couldn’t reasonably have discovered the infringement until recently. The trolls’ targets would have trouble defending against ancient claims, and be more likely to have to pay nuisance settlements.

EFF’s amicus brief provided the court with an overview of the copyright trolling problem and gave examples of types of trolls. The brief then showed how an unlimited look-back period for damages under the discovery rule adds risk and uncertainty for the targets of copyright trolls and would encourage abuse of the legal system.

EFF’s brief in this case is a little unusual—the case doesn’t directly involve technology or technology companies (except indirectly, to the extent they could be targets of copyright trolls). The party we’re supporting is a leading music publishing company. Other amici on the same side include the RIAA, the U.S. Chamber of Commerce, and the Association of American Publishers. Because statutes of limitations are fundamental to the justice system, this infrequent coalition perhaps isn’t that surprising.

In many previous copyright troll cases, the courts have caught on to their abuse of the judicial system, and taken steps to shut down the trolling. EFF filed its brief in this case to ask the Supreme Court to extend these judicial safeguards, by holding that copyright infringement damages can only be recovered for acts occurring three years before the filing of the complaint. An indefinite statute of limitations would throw gasoline on the copyright troll fire and risk encouraging new trolls to come out from under the figurative bridge.

Related Cases: Warner Chappell Music v. Nealy
Michael Barclay

Meta Announces End-to-End Encryption by Default in Messenger

2 months 3 weeks ago

Yesterday Meta announced that they have begun rolling out default end-to-end encryption for one-to-one messages and voice calls on Messenger and Facebook. While there remain some privacy concerns around backups and metadata, we applaud this decision. It will bring strong encryption to over one billion people, protecting them from dragnet surveillance of the contents of their Facebook messages. 

Governments are continuing to attack encryption with laws designed to weaken it. With authoritarianism on the rise around the world, encryption is more important with each passing day. Strong default encryption, sooner, might have prevented a woman in Nebraska from being prosecuted for an abortion based primarily on evidence from her Facebook messages. This update couldn’t have come at a more important time. This introduction of end-to-end encryption on Messenger means that the two most popular messaging platforms in the world, both owned by Meta, will now include strong encryption by default. 

For now this change will only apply to one-to-one chats and voice calls, and will be rolled out to all users over the next few months, with default encryption of group messages and Instagram messages to come later. Regardless, this rollout is a huge win for user privacy across the world. Users will also have many more options for messaging security and privacy, including how to back up their encrypted messages safely, turning off “read receipts,” and enabling “disappearing” messages. Choosing between these options is important for your privacy and security model, and we encourage users to think about what they expect from their secure messenger.

Backing up securely: the devil is in the (Labyrinthian) details

The technology behind Messenger’s end-to-end encryption will continue to be a slightly modified version of the Signal protocol (the same protocol WhatsApp uses). When it comes to building secure messengers, or in this case, porting a billion users onto secure messaging, the details are the most important part. In this case, the encrypted backup options provided by Meta are the biggest detail: in addressing backups, how do they balance security with usability and availability?

Backups are important for users who expect to log into their account from any device and retrieve their message history by default. From an encryption standpoint, how backups are handled can break certain guarantees of end-to-end encryption. WhatsApp, Meta’s other messaging service, only began offering end-to-end encrypted backups a few years ago. Meta is also rolling out an end-to-end encrypted backup system for Messenger, which it calls Labyrinth.

Encrypted backups mean your backed-up messages will be encrypted on Facebook’s servers and won’t be readable without your private key. Enabling encrypted backups (necessarily) breaks forward secrecy, in exchange for usability. If an app is forward-secret, then you could delete all your messages and hand someone else your phone, and they would not be able to recover them. Weighing this tradeoff is another factor to consider when choosing how to use secure messengers that give you the option.

If you elect to use encrypted backups, you can set a 6-digit PIN to secure your private key, or back up your private key to cloud storage such as iCloud or Google Cloud. If you back up keys to a third party, those keys are available to that service provider and could be retrieved by law enforcement with a warrant, unless that cloud account is also encrypted. The 6-digit PIN provides a bit more security than the cloud backup option, but at the cost of usability for users who might not be able to remember a PIN.
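To see why the PIN option is a security-versus-usability tradeoff, here is a rough, generic Python sketch of PIN-protected backup encryption. It is emphatically not Meta's Labyrinth design, just an illustration under common assumptions (it uses the third-party "cryptography" package, and the PIN and backup contents are placeholders): a key is derived from the PIN with a deliberately slow key derivation function, and the tiny keyspace of a 6-digit PIN is exactly why real systems also pair it with rate-limited, hardware-backed key storage:

# Generic sketch of PIN-derived backup encryption; not Meta's actual protocol.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_pin(pin: str, salt: bytes) -> bytes:
    # A slow KDF raises the cost of each guess, but a 6-digit PIN has only
    # one million possibilities, so guessing must also be rate-limited elsewhere.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(pin.encode()))

salt = os.urandom(16)
ciphertext = Fernet(key_from_pin("123456", salt)).encrypt(b"message history backup")
restored = Fernet(key_from_pin("123456", salt)).decrypt(ciphertext)  # a wrong PIN would fail to decrypt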

Choosing the right secure messenger for your use case

There are still significant concerns about metadata in Messenger. By design, Meta has access to a lot of unencrypted metadata, such as who sends messages to whom, when those messages were sent, and data about you, your account, and your social contacts. None of that will change with the introduction of default encryption. For that reason we recommend that anyone concerned with their privacy or security consider their options carefully when choosing a secure messenger.

Cooper Quintin

Speaking Freely: Alison Macrina

2 months 3 weeks ago

Cohn: Alright, we’re doing a Speaking Freely Interview with Alison- Alison why don’t you say your name?

Alison Macrina, like Ballerina

Cohn: From the Library Freedom Project- and an EFF Award Winner 2023! Alright, let’s get into it. What does freedom of speech mean to you, Alison?

Well, to me it means the freedom to seek information, to use it, to speak it, but specifically without fear of retribution from those in power. And in LFP (Library Freedom Project) we’re really particularly concerned about how free speech and power relate. In the US, I think about power that comes from, not just the government, but also rich individuals and how they use their money to influence things like free speech, as well as corporations. I also think about free speech in terms of how it allows us to define the terms of public debate and conversation. And how also we can use it to question and shift the status quo to, in my view, more progressive ends. I think the best way that we can use our speech is using it to challenge and confront power. And identifying power structures. I think those power structures are really present in how we talk about speech. I’ve spent a lot of time thinking about all the big money that’s involved with shaping speech like the Koch brothers, etc, and how they’re influencing the culture wars. Which is why I think it’s really important, when I think about free speech, to think about things like social and economic justice. In LFP we talk about information democracy – that’s like the EFF Award that we got – and what that means to us is about how free expression, access, privacy, power, and justice interact. It’s about recognizing the different barriers to free expression, and what is actually being said, and also prioritizing the collective and our need to be able to criticize and hold accountable the people with power so that we can make a better world. 

Cohn: One of the things that I think the Library Freedom Project does is really talk about the ability to access information as part of freedom of expression. Sometimes we only think about it as the speaking part, the part where it goes out, and I think one of the things that LFP really does is elevate the part where you get access to information, which is equally, and importantly, a part of free speech. Is that something you want to talk about a little more?

I think it’s one of the things that make libraries so special, right? It’s like what else do we have in our society that is a space that is just dedicated to information access? You know, anybody can use the library. Libraries exist in every community in the country. There’s all kinds of little sound bites about that, like, “there’s more libraries than there are McDonalds,” or, “there’s more libraries than Starbucks,” and what I think is also really unique and valuable about libraries is that they’re a public good that’s not means-tested. So in other words, they show up in poor communities, they’re in rich communities, they’re in middle-class communities. Most other public goods – if they exist – they’re only for the super, super poor. So it’s this, kind of… at its best… libraries can be such an equalizer. Some of the things we do in Library Freedom Project, we try to really push what the possibilities are for that kind of access. So offering trainings for librarians that expand on our understanding of free speech and access and privacy. Things like helping people understand artificial intelligence and algorithmic literacy. What are these tools? What do they mean? How do they work? Where are they at use? So helping librarians understand that so they can teach their communities about it. We try to think creatively about – what are the different kinds of technology at use in our world and how can librarians be the ones to offer better information about them in our communities?

Cohn: What are the qualities that make you passionate about freedom of expression or freedom of speech? 

I mean it’s part of why I became a librarian. I don’t remember when or why it was what I wanted to do. I just knew it was what I wanted. I had like this sort of Lloyd Dobler “say anything” moment where he’s like “I don’t want to buy anything that’s bought, sold, or made. I don’t want to sell anything that’s sold, bought, or made.” You know, I knew I wanted to do something in the public good. And I loved to read. And I loved to have an opinion and talk. And I felt like the library was the place that, not only where I could do that, but was a space that just celebrated that. And I think especially, all of the things that are happening in the world now, libraries are a place where we can really come together around ideas, we can expand our ideas, we can get introduced to ideas that are different from our own. I think that’s really extraordinary and super rare. I’ve always just really loved the library and wanted to do it for my life. And so that’s why I started Library Freedom Project.

Cohn: That’s wonderful. Let’s talk a little about online speech and regulation. How do you think about online speech and regulation and how we should think about those issues? 

Well, I think we’re in a really bad position about it right now because, to my mind, there was a too-long period of inaction by these companies. And I think that now a decade or so of inaction created the conditions for a really harmful information movement. And now, it’s like, anything that we do, there’s unintended consequences. Content moderation is obviously extremely important- it’s an important public demand. I think it should be transparent and accountable. But with all of these harmful information movements, every attempt to regulate them that I have seen has just resulted in people becoming hardened in their positions.

This morning, for example, I was listening to the Senate Judiciary Hearings on book banning – because I’m a nerd – and it was a mess. It ended up not even really being about the book banning issue – which is a huge, huge issue in the library world – but it was all these Republican Senators talking about how horrible it was that the Biden administration was suppressing different kinds of COVID misinfo and disinfo. And they didn’t call it that, obviously, they called it “information” or “citizen science” or whatever. And it’s true that the Biden administration did do that – they made those demands of Facebook and so what were the results? It didn’t stop any of that disinformation. It didn’t change anybody’s minds about it. I think another big failure was Facebook and other companies trying to react to fake news by labeling stuff. And that was just totally laughable. And a lot of it was really wrong. You know, they were labeling all these leftwing outlets as Russian propaganda. I think that I don’t really know what the solution is to dealing with all of that. 

I think, though, that we’re at a place where the toothpaste is already so far out of the tube that I don’t know that any amount of regulation of it is going to be effective. I wish that those companies were regulated like public resources. I think that would make for a big shift. I don’t think companies should be making those kinds of decisions about speech. It’s such a huge problem, especially thinking about how it plays out for us at the local level in libraries- like because misinfo and disinfo are so popular, now we have people who request those materials from the library. And librarians have to make the decision- are we going to give in to public demand and buy this stuff or are we going to say, no, we are curators of information and we care about truth? We’re now in this position that because of this environment that’s been created outside of us, we have to respond to it. And it’s really hard- we’re also facing, relatedly, a massive rightwing assault on the library. A lot of people are familiar with this showing up as book bans, but it’s legislation, it’s taking over Boards, and all these other things. 

Cohn: In what kinds of situations, if any, is it appropriate for governments or companies to limit speech? And I think those are two separate questions, governments on the one hand and companies on the other.

I think that, you know, Alex Jones should not be allowed to say that Sandy Hook was a hoax – obviously, he’s facing consequences for that now. But the damage was done. Companies are tricky, because on the one hand, I think that different environments should be able to dictate the terms of how their platforms work. Like LFP is technically a company, and you’re not coming on any of my platforms and saying Nazi shit. But I also don’t want those companies to be arbiters of speech. They already are, and I think it’s a bad thing. I think that government regulation of speech we have to be really careful about. Because obviously the unintended consequences – or sometimes the intended consequences – are always harmful to marginalized people.

Part of what motivated me to care about free speech is, I’ve been a political activist most of my life, on the left, and I am a big history nerd. And I paid a lot of attention to, historically, the way that leftist movements - how their speech has been marginalized and censored. From the Red Scare to anti-war speech. And I also look at a lot of what is happening now with the repression after the 2020 uprising, the No Cop City people just had this huge RICO indictment come down. And that is all speech repression that impacts things that I care about. And so I don’t want the government to intervene in any way there. At the same time, white supremacy is a really big problem. It has very real material effects and harms people. And one way this is a really big issue in my world, is part of the rightwing attack on libraries is, there is a bad faith free speech effort among them. They talk about free speech a lot. They talk about [how] they want their speech to be heard. But what they actually mean is, they want to create a hostile environment for other people. And so this is something that I end up feeling really torn about. Because I don’t want to see anyone go to prison for speech. I don’t want to see increased government regulation of speech. But I also think that allowing white supremacists to use the library meeting room or have their events there creates an environment where marginalized people just don’t go. I’m not sure what the responsible thing for us to do is. But I think that thinking about free speech outside of the abstract – thinking about the real material consequences that it has for people, especially in the library world – a lot of civil libertarians like to say, “you just respond with more speech.” And it’s like, well, that’s not realistic. You can’t easily do that, especially when you’re talking about people who will cause some harm to these communities. One thing I do think, one reasonable speech regulation, is that I don’t think cops should be allowed to lie. And they are allowed, so we should do something about that.

Cohn: Who is your free speech hero?

Well, okay, I have a few. Number one is so obvious that I feel like it’s trite to say, but, duh, Chelsea Manning. Everyone says Chelsea Manning, right? But we should give her her flowers again and again. Her life has been shaped by the decisions that she made about the things that she had to say in the public interest. I think that all whistleblowers in general are people that I have enormous respect for. People who know there are going to be consequences for their speech and do it anyway. And will sacrifice themselves for public good – it’s astounding. 

I also am very fortunate to be surrounded by free speech heroes all the time who are librarians. Not just in the nature of the work of the library, like the everyday normal thing, but also in the environment that we’re in right now. Because they are constantly pushing the bounds of public conversation about things like LGBT issues and racial justice and other things that are social goods, under extremely different conditions. Some of them are like, the only librarian in a rural community where, you know, the Proud Boys or the three percenters or whatever militant group is showing up to protest them, is trying to defund their library, is trying to remove them from their positions, is trying to get the very nature of the work criminalized, is trying to redefine what “obscenity” means. And these people, under those conditions, are still pushing for free speech and I think that’s amazing. 

And then the third one I’ll say is, I really try to keep an internationalist approach, and think about what the rest of the world experiences, because we really, even as challenging as things are in the US right now, we have it pretty good. So, when I was part of the Tor Project I got to go to Uganda with Tor to meet with some different human rights activists and talk to them about how they used Tor and help them with their situations. And I met all of these amazing Ugandan environmental activists who were fighting the construction of a pipeline – a huge pipeline from Tanzania to Uganda. And these are some of the world’s poorest people fighting some of the biggest corporations and Nation-States – because the US, Israel, and China all have a major interest in this pipeline. And these are people who were publishing anonymous blogs, with the use of Tor, under extreme threat. Many of them would get arrested constantly. Members of their organization would get disappeared for a few days. And they were doing it anyway, often with the knowledge that it wasn’t even going to change anything. Which just really was mind-blowing. And I stop and think about that a lot, when I think about all the issues that we have with free speech here. Because I think that those are the conditions that, honestly, most of the world is operating under, and those people are everyday heroes and they need to get their flowers. 

Cohn: Terrific, thank you Alison, for taking the time. You have articulated many of the complexities of the current place that we are and a few truths that we can hold, so thank you.

Cindy Cohn