Governor Cuomo: Keep Police and ICE Away from Our Contact Tracing Data


New York State lawmakers unanimously passed legislation (A10500C/S8450C) to protect New Yorkers cooperating with contact tracing efforts from having their data used against them in court proceedings or administrative hearings. Once enacted, the law will also ban police and immigration authorities from serving as contact tracers or otherwise obtaining contact tracing information. The bill now awaits Governor Cuomo's signature.

Take Action

Tell Governor Cuomo to sign the bill

In May, with New York particularly hard hit by the pandemic, state health officials launched a robust contact tracing program. Trained contact tracers work with exposed New Yorkers to track and contain the virus's spread. They also help exposed people find the resources they need as they self-quarantine, including groceries, child care, and medical assistance.

Effective contact tracing requires honest participation from exposed New Yorkers and trust between impacted residents and government health officials. While officials may assert that the information will be kept confidential, fear of misused personal information chills New Yorkers' participation in these critical life-saving efforts. This is of greatest concern in New York's communities of color and migrant communities, which have been particularly hard hit by both the virus and systemic bias. Concern over biased policing and harsh immigration enforcement has led many to worry about how sensitive information needed to fight the COVID-19 pandemic might be shared beyond agencies charged with protecting public health.

Public health officials are engaged in important work to contain the spread of COVID-19. This includes collecting and analyzing personal information about large numbers of identifiable people, including their health, travel, and personal relationships. However, many sensitive inferences can be drawn from a visit to a medical clinic, immigration attorney's office, religious institution, or a protest planning meeting. Police and ICE should not collect or obtain this COVID-related information.


New measures taken to battle the pandemic must be "necessary and proportionate" to society's needs in fighting the virus. Ensuring that the sensitive information collected will be used only to protect public health is critical to striking that balance and encouraging candid cooperation from affected New Yorkers. It's time for Governor Cuomo to join state lawmakers in making sure that fear doesn't exacerbate the COVID-19 crisis by chilling critical trust and cooperation. It's time for Governor Cuomo to sign A10500C/S8450C into law.

Nathan Sheard

Section 230 is Good, Actually


Even though it’s only 26 words long, Section 230 doesn’t say what many think it does. 

So we’ve decided to take up a few kilobytes of the Internet to explain what, exactly, people are getting wrong about the primary law that defends the Internet.

Section 230 (47 U.S.C. § 230) is one of the most important laws protecting free speech online. While its wording is fairly clear—it states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"—it is still widely misunderstood. Put simply, the law means that although you are legally responsible for what you say online, if you host or republish other people's speech, only those people are legally responsible for what they say.

But there are many, many misconceptions—as well as misinformation from Congress and elsewhere—about Section 230, from who it affects and what it protects to what results a repeal would have. To help explain what's actually at stake when we talk about Section 230, we've put together responses to several common misunderstandings of the law.


Let’s start with a breakdown of the law, and the protections it creates for you. 

How Section 230 protects free speech: 

Without Section 230, the Internet would be a very different place, one with fewer spaces where we’re all free to speak out and share our opinions. 

One of the Internet's most important functions is that it allows people everywhere to connect and share ideas—whether that's on blogs, social media platforms, or educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 says that any site that hosts the content of other "speakers"—from writing, to videos, to pictures, to code that others write or upload—is not liable for that content, with some important exceptions for violations of federal criminal law and intellectual property claims.

Section 230 makes only the speaker themselves liable for their speech, rather than the intermediaries through which that speech reaches its audiences. This makes it possible for sites and services that host user-generated speech and content to exist, and allows users to share their ideas—without having to create their own individual sites or services that would likely have much smaller reach. This gives many more people access to the content that others create than they would ever have otherwise, and it’s why we have flourishing online communities where users can comment and interact with one another without waiting hours, or days, for a moderator, or an algorithm, to review every post.

And Section 230 doesn't only allow sites that host speech, including controversial views, to exist. It allows them to exist without putting their thumbs on the scale by censoring controversial or potentially problematic content. And because what is considered controversial is often shifting, and context- and viewpoint-dependent, it's important that these views are able to be shared. "Defund the police" may be considered controversial speech today, but that doesn't mean it should be censored. "Drain the Swamp," "Black Lives Matter," or even "All Lives Matter" may all be controversial views, but censoring them would not be beneficial.

Online platforms' censorship has been shown to amplify existing imbalances in society—sometimes intentionally and sometimes not. The result is that platforms are more likely to censor the voices of disempowered individuals and communities. Without Section 230, any online service that did continue to exist would more than likely opt for censoring more content—and that would inevitably harm marginalized groups more than others.

No, platforms are not legally liable for other people’s speech–nor would that be good for users.

Basically, Section 230 means that if you break the law online, you should be the only one held responsible, not the website, app, or forum where you said the unlawful thing. Similarly, if you forward an email or even retweet a tweet, you're protected by Section 230 in the event that that material is found unlawful. Remember—this sharing of content and ideas is one of the major functions of the Internet, from Bulletin Board Systems in the 80s, to Internet Relay Chat in the 90s, to the forums of the 2000s, to the social media platforms of today. Section 230 protects all of these different types of intermediary services (and many more). While Section 230 didn't exist until 1996, it was created, in part, to protect those services that already existed—and the many that have come after.


If you consider that one of the Internet's primary functions is as a way for people to connect with one another, Section 230 should seem like common sense: you should be held responsible for your speech online, not the platform that hosted your speech or another party. This makes particular sense when you consider the staggering quantity of content that online services host. A newspaper publisher, by comparison, usually has 24 hours to vet the content it publishes in a single issue. Compare this with YouTube, whose users upload at least 400 hours of video every minute, an impossible volume to meaningfully vet in advance of publishing online. Without Section 230, the legal risk associated with operating such a service would deter any entrepreneur from starting one.

We've put together an infographic that gives a quick rundown of how Section 230 protects Internet speech, as well as a detailed explanation of how the law works for bloggers and for comments on blogs, if you'd like to see how this scenario plays out in more detail.

No, Section 230 is not a "hand-out to Big Tech," a Big Tech "immunity," or a "gift" to companies. Section 230 protects you and the forums you care about, not just "Big Tech."

Section 230 protects Internet intermediaries—individuals, companies, and organizations that provide a platform for others to share speech and content over the Internet. Yes, this includes social networks like Facebook, video platforms like YouTube, news sites, blogs, and other websites that allow comments. It also protects educational and cultural platforms like Wikipedia and the Internet Archive. 

But it also protects some sites and activities you might not expect—for example, everyone who sends an email, as well as any cybersecurity firm that uses user-generated content for its threat assessments, patches, and advisories. Organizations that signed onto a letter about the importance of Section 230 include Automattic (maker of WordPress), Kickstarter, Medium, GitHub, Cloudflare, Meetup, Patreon, and Reddit, for example. But just as important as currently-existing services and platforms are those that don't exist yet—because without Section 230, it would be cost-prohibitive to start a new service that allows user-generated speech.

No, the First Amendment is not at odds with Section 230.

Online platforms are within their First Amendment rights to moderate the content on their platforms however they like, and they're additionally shielded by Section 230 from many types of liability for their users' speech. It's not one or the other. It's both.

Some history on Section 230 is instructive here. Section 230 originated as an amendment to the Communications Decency Act (CDA), which was introduced in an attempt to regulate sexual material online. The CDA amended telecommunications law by making it illegal to knowingly send to or show minors obscene or indecent content online. The House passed the Section 230 amendment with a sweeping majority, 420-4. 

The online community was outraged by the passage of the CDA. EFF and many other groups pushed back on its overly broad language and launched a Blue Ribbon Campaign, urging sites to "wear" a blue ribbon and link back to EFF's site to raise awareness. Several sites chose to black out their webpages in protest.

The ACLU filed a lawsuit—joined by EFF and other civil liberties organizations, as well as industry groups—that reached the Supreme Court. On June 26, 1997, in a 9-0 decision, the Supreme Court applied the First Amendment by striking down the anti-indecency sections of the CDA. Section 230, the amendment that promoted free speech, was not affected by that ruling. As it stands now, Section 230 is pretty much the only part of the CDA left. But it took several lawsuits to get there.


But Section 230 only shields an intermediary from liability that already exists. If speech is protected by the First Amendment, there can be no liability either for publishing it or republishing it, regardless of Section 230. As the Supreme Court recognized in the Reno v. ACLU case, the First Amendment’s robust speech protections fully apply to online speech. Section 230 was included in the CDA to ensure that online services could decide what types of content they wanted to host. Without Section 230, sites that removed sexual content could be held legally responsible for that action, a result that would have made services leery of moderating their users’ content, even if they wanted to create online spaces free of sexual content. The point of 230 was to encourage active moderation to remove sexual content, allowing services to compete with one another based on the types of user content they wanted to host. 

Moreover, the First Amendment also protects the right of online platforms to curate the speech on their sites—to decide what user speech will and will not appear. So Section 230's immunity for removing user speech is perfectly consistent with the First Amendment. This is apparent given that, even before the Internet, the First Amendment gave non-digital media, such as newspapers, the right to decide what stories and opinions to publish.

No, online platforms are not “neutral public forums.”

Nor should they be. Section 230 does not say anything like this. And trying to legislate such a “neutrality” requirement for online platforms—besides being unworkable—would violate the First Amendment. The Supreme Court has confirmed the fundamental right of publishers to have editorial viewpoints. 

It's also foolish to suggest that web platforms should lose their Section 230 protections for failing to align their moderation policies to an imaginary standard of political neutrality. One of the reasons why Congress first passed Section 230 was to enable online platforms to engage in good-faith community moderation without fear of taking on undue liability for their users' posts. In two important early cases over Internet speech, courts allowed civil defamation claims against Prodigy but not against CompuServe; since Prodigy deleted some messages for "offensiveness" and "bad taste," a court reasoned, it could be treated as a publisher and held liable for its users' posts. Former Rep. Chris Cox recalls reading about the Prodigy opinion on an airplane and thinking that it was "surpassingly stupid." That led Cox and then-Rep. Ron Wyden to introduce the Internet Freedom and Family Empowerment Act, which would later become Section 230.

In practice, creating additional hoops for platforms to jump through in order to maintain their Section 230 protections would almost certainly result in fewer opportunities to share controversial opinions online, not more: under Section 230, platforms devoted to niche interests and minority views can thrive. 

Print publishers and online services are very different, and are treated differently under the law–and should be.

It's true that online services do not have the same liability for their content that print media does. Unlike print publications such as newspapers, which are legally responsible for the content they print, online publications are relieved of this liability by Section 230. The major distinction the law creates is between online and offline publication, a recognition of the inherent differences in scale between the two modes of publication. (Despite claims otherwise, there is no legal significance to labeling an online service a "platform" as opposed to a "publisher.")

But an additional purpose of Section 230 was to eliminate any distinction between those who actively select, curate, and edit the speech before distributing it and those who are merely passive conduits for it. Before Section 230, courts effectively disincentivized platforms from engaging in any speech moderation. Section 230 provides immunity to any “provider or user of an interactive computer service” when that “provider or user” republishes content created by someone or something else, protecting both decisions to moderate it and those to transmit it without moderation. 

“User,” in particular, has been interpreted broadly to apply “simply to anyone using an interactive computer service.” This includes anyone who maintains a website, posts to message boards or newsgroups, or anyone who forwards email. A user can be an individual, a nonprofit organization, a university, a small brick-and-mortar business, or, yes, a “tech company.” 

Legacy news media companies—such as a newspaper publisher—may complain that Section 230 gives online social media platforms extra legal protections and thus an unfair advantage. But Section 230 makes no distinction between news entities and social media platforms. And plenty, if not the vast majority, of news media entities publish online—either solely or in tandem with their print editions. When a news media entity publishes online, it gets the exact same Section 230 immunity from liability based on publishing someone else’s content that a social media platform gets.

No, Section 230 does not stop platforms from moderating content.

The misconception that platforms can somehow lose Section 230 protections for moderating users’ posts has gotten a lot of airtime. This is false. Section 230 allows sites to moderate content how they see fit. And that’s what we want: a variety of sites with a plethora of moderation practices keeps the online ecosystem workable for everyone. The Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more permissive.

Section 230 reforms (that we've seen) would not make platforms better at moderation.

It’s absolutely a problem that just a few tech companies wield immense control over what speakers and messages are allowed online. It’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunities to appeal bad moderation decisions. 

But without Section 230 there’s little hope of a competitor with fairer speech moderation practices taking hold given the big players’ practice of acquiring would-be competitors before they can ever threaten the status quo.

But there are ways to make content moderation work better for users.

A group of organizations, advocates, and academic experts in content moderation agree that the best way to start improving moderation is for companies to implement "The Santa Clara Principles On Transparency and Accountability in Content Moderation." These principles are a floor, not a ceiling. They state:

  • Companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines.
  • Companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension. 
  • Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

Aside from these principles, there are many other ways for users (and Congress) to push for better moderation practices without repealing or modifying Section 230. While large tech companies might clamor for regulations that would hamstring their smaller competitors, they’re notably silent on reforms that would curb the practices that allow them to dominate the Internet today. That’s why EFF recommends that Congress update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion. Before the government approves a merger, the companies should have to prove that the merger would not increase their monopoly power or unduly harm competition.

But even updating antitrust policy is not enough. Major social media platforms’ business models thrive on practices that keep users in the dark about what information they collect on us and how it’s used. Decisions about what material (including advertising) to deliver to users are informed by a web of inferences about users, inferences that are usually impossible for users even to see, let alone correct.

Because of the link between social media’s speech moderation policies and its irresponsible management of user data, Congress can’t improve Big Tech’s practices without addressing its surveillance-based business models. What’s more, users shouldn’t be held hostage to a platform’s proprietary algorithm. Instead of serving everyone “one algorithm to rule them all” and giving users just a few opportunities to tweak it, platforms should open up their APIs to allow users to create their own filtering rules for their own algorithms. News outlets, educational institutions, community groups, and individuals should all be able to create their own feeds, allowing users to choose who they trust to curate their information and share their preferences with their communities.
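To make the idea of user-created filtering rules concrete, here is a purely illustrative sketch. No real platform exposes this API; the `Post` type, the rule predicates, and `my_feed` are all hypothetical, meant only to show how a user could compose their own feed from an open data source rather than accepting one proprietary algorithm:

```python
# Hypothetical sketch: user-defined feed filtering over an open API.
# Every name here (Post, my_rules, my_feed) is invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    topic: str

# User-chosen rules: each rule is just a predicate over a post.
my_rules = [
    lambda p: p.topic != "spam",                 # drop a whole topic
    lambda p: p.author not in {"blocked_user"},  # personal block list
]

def my_feed(posts, rules):
    """Keep only posts that pass every user-chosen rule."""
    return [p for p in posts if all(rule(p) for rule in rules)]

posts = [
    Post("alice", "protest downtown at 5pm", "news"),
    Post("blocked_user", "hi", "news"),
    Post("bob", "buy now!!!", "spam"),
]
print([p.author for p in my_feed(posts, my_rules)])  # ['alice']
```

The point of the sketch is that the curation logic lives with the user (or a community group they trust), not the platform: swapping in a different rule list yields a different feed from the same underlying posts.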

In sum: what’s needed to ensure that a variety of views have a place on social media isn’t creating more legal exceptions to Section 230. Rather, companies should institute reasonable, transparent moderation policies. Platforms shouldn’t over-rely on automated filtering and unintentionally silence legitimate speech and communities in the process. And platforms should add features to give users themselves—not platform owners or third parties—more control over what types of posts they see.

No, reforming Section 230 will not hurt Big Tech companies like Facebook and Twitter–but it will hurt smaller platforms and users.

Some people wrongly think that eliminating Section 230 will fix their (often legitimate) concerns about the dominance of online services like Facebook and Twitter. But that won't solve those problems—it will only ensure that major platforms never face significant competition.


Facebook was one of the first tech companies to endorse SESTA/FOSTA, the 2018 law that significantly undermined Section 230’s protections for free speech online, and Facebook is now leading the charge for further reforms to Section 230. Though calls to reform Section 230 are frequently motivated by disappointment in Big Tech’s speech moderation policies, evidence shows that further reforms to Section 230 would make it more difficult for new entrants to compete with Facebook or Twitter—and would likely make censorship worse, not better. 

Unfortunately, trying to legislate that platforms moderate certain content more forcefully, or more "neutrally," would create immense legal risk for any new social media platform—raising, rather than lowering, the barrier to entry. Meanwhile, if Twitter and Facebook faced serious competition, the decisions they make about how to handle (or not handle) hateful speech or disinformation wouldn't have nearly the influence they have today on online discourse. If there were twenty major social media platforms, the decisions any one of them makes to host, remove, or fact-check the latest misleading post about the election results wouldn't have the same effect on public discourse.

Put simply: reforming Section 230 would not only fail to punish “Big Tech,” but would backfire in just about every way, leading to fewer places for people to publish speech online, and to more censorship, not less.

Repealing Section 230 would be disastrous for users.

We don’t have to guess about what would happen if we repeal Section 230. We’ve seen it. SESTA/FOSTA shot a hole right through Section 230, by creating new federal criminal and civil liability for anyone who “owns, manages, or operates an interactive computer service” and speaks, or hosts third-party content, with the intent to “promote or facilitate the prostitution of another person.” Its broad language means that if the owner of an interactive computer service hosts content or viewpoints that might be seen as promoting or facilitating prostitution, or as assisting, supporting or facilitating sex trafficking, the service is liable.

SESTA/FOSTA immediately led to censorship, and to increased risk for sex workers. Organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom were implicated by its broad language. Fearing that comments, posts, or ads that are sexual in nature would be ensnared by FOSTA, many vulnerable people have gone offline and back to the streets, where they've been sexually abused and physically harmed. Additionally, numerous platforms that host entirely legal speech have had to shut down or self-censor.


We've filed a lawsuit challenging SESTA/FOSTA on behalf of several plaintiffs who have been harmed by the law.

On a large scale, the wholesale repeal of Section 230 means many, many newer and smaller platforms would simply have to heavily censor anything that could be construed as illegal speech. Many of these sites would stop hosting content entirely, and smaller sites that host content as their primary function would be forced offline. Those who still want to host content would have to use filters and other algorithmic moderation that would cast a wide net, and remaining posts would likely take days before they could be viewed and allowed online. 

Larger sites, though, would continue to function relatively similarly, although with much more censorship and much stricter automated moderation. Section 230 is one of the only legal incentives that sites have now to leave up a large amount of content that they would likely take down, from organizations planning a protest to individuals calling for the ouster of a government official. 

If online services were liable for more types of content, the Internet would likely be worse, not better.

We know that platforms are notoriously bad at moderation. Even when detailed guidelines for moderators exist, it’s often very hard to apply strict rules successfully to the vast array of types of speech that exist—when someone is being sarcastic, or content is ironic, for example. As a result, creating new categories of speech that online services are liable for hosting would almost certainly result in overbroad takedowns. 

Right now, platforms are allowed to mostly create their own rules for how they moderate. Giving the government more power to control speech would not be a remedy for the moderation problems that exist. As an example, social media platforms have long struggled with the problem of extremist or violent content on their platforms. Because there is no international agreement on what exactly constitutes terrorist, or even violent and extremist, content, companies look at the United Nations' list of designated terrorist organizations or the US State Department's list of Foreign Terrorist Organizations. But those lists mainly consist of Islamist organizations, and are largely blind to, for example, U.S.-based right-wing extremist groups. And even if there were consensus on what constitutes a terrorist, the First Amendment would generally protect those individuals' speech unless they make true threats or directly incite violent acts.

The combination of these lists and blunt content moderation systems leads to the deletion of vital information not available elsewhere, such as evidence of human rights violations or war crimes. It is very difficult for human reviewers—and impossible for algorithms—to consistently get the nuances of activism, counter-speech, and extremist content itself right. The result is that many instances of legitimate speech are falsely categorized as terrorist content and removed from social media platforms. Those false positives, and other moderation mistakes, fall disproportionately on Muslim and Arab communities. It also hinders journalists, academics, and government officials because they cannot view or share this content. While sometimes problematic, documentation and discussion of terrorist acts is essential, given that terrorism is one of the most important political and social issues in the world.

With further government intervention into what must be censored, this situation could potentially become much worse, putting marginalized communities and those with views that differ from whoever might be in power in an even more precarious situation online than they already are. Government actors often label political opposition or disempowered groups as terrorists. Section 230 ensures platforms make these choices based on their own calls about what constitutes speech they will not host, not the government’s whims.

We still need Section 230— now more than ever.

Now that a few companies have grown to contain a vast majority of user-generated online content, it's essential that newer, smaller companies be given the same chance to host speech that those companies had fifteen or twenty years ago. Without Section 230, competition would be unlikely to succeed, because the liability for hosting online content would be so great that only the largest companies could survive the cost of the (legitimate or illegitimate) lawsuits they would have to fight. Additionally, though automated content moderation isn't likely to succeed at scale, the companies that could afford it would be the only ones able to attempt moderation at all. We absolutely still need Section 230. In fact, we may need it even more now than we did in 1997.


Section 230 doesn’t just protect the big companies you’ve heard of—it protects all intermediaries equally. Removing that protection would open every intermediary up to lawsuits, forcing all but the largest of them to shut down, or stop hosting user-generated content altogether. And it would be much more difficult for new services that host speech to enter the online ecosystem.

Section 230 has already protected users in court. 

The protections of Section 230 aren't hypothetical. It's been used to protect users and services in court many, many times.

A few examples: in Barrett v. Rosenthal, decided in 2006, women's health advocate Ilena Rosenthal was sued for posting a controversial opinion piece written by Tim Bolen to a Usenet newsgroup. Lawyers argued that Rosenthal was liable for libel, because posting the comments made her a "developer" of the information in question. The California Supreme Court upheld the strong protections of Section 230. Had the court found in favor of the plaintiffs, the implications for free speech online would have been far-reaching: bloggers could be held liable when they quote other people's writing, and website owners could be held liable for what people say in message boards on their sites.

In 2007, a third party posted defamatory statements about a company—Universal Communications Systems—on an online Lycos message board. The company sued Lycos, arguing in part that Lycos' registration process and link structure had prompted the statements and extended a type of "culpable assistance" to the author. The court rejected those claims, ruling that Lycos' services were protected by Section 230.

And in 2003, a federal appellate court ruled that Section 230 protected the creator of a newsletter from legal claims by a third party whose email was included in the newsletter. The case recognized that Section 230 protected individual digital publishers from liability based on third-party content, a foundational principle that continues to protect individuals and online services today whenever they host or distribute other people’s content online.  

In 2018, a crowdsourced spreadsheet known as the "Shitty Media Men" list, initially created by Moira Donegan, gained attention for naming men in the media industry suspected of mistreating women. The writer Stephen Elliott, who was named on the list, brought a defamation lawsuit against Donegan. The list was a Google spreadsheet shared via link and editable by anyone, making it particularly easy for anonymous speakers to share their experiences with the men identified on it. Because Donegan created the spreadsheet as a platform to allow others to provide information, she is likely immune from suit under Section 230. The case is still pending, but we expect the court to rule that she is not liable.

There are dozens of cases like these, and many, many more disputes have never had to go to court at all, thanks to Section 230’s protections, which are now largely settled law.

No, Section 230 doesn’t mean certain political views are censored more than others.

Some politicians seem to believe (or at least have claimed) that Section 230 results in censorship of certain political views, despite there being no evidence to support the claim. Others seem to believe (or at least have claimed) that Section 230 results in platforms hosting a variety of “dangerous” content. Though it may be easy to point the finger at the platforms, and by extension, at the law that protects those online services from liability for much of the content that users generate, Section 230 is not the problem. As described above, even without Section 230, online services have a First Amendment right to moderate user-generated content. 

Reforming online platforms is tough work. Repealing Section 230 may seem like the easy way out, but as mentioned above, no reform to Section 230 that we’ve seen would solve these problems. Rather, reforms would likely backfire: increasing censorship in some cases, and dangerously increasing liability in others.

There’s a lot of confusion around Section 230, but you can help.

The incredible thing about the Internet is that you’re not liable for what someone else wrote, even if you share it with others. Take a minute to exercise this right, and share this blog post, so that more people can get a clearer idea of why Section 230 matters, and how it helps the users of Internet services both big and small.

Related Cases: Woodhull Freedom Foundation et al. v. United States; Barrett v. Rosenthal; Ashcroft v. ACLU
Jason Kelley

Law Enforcement Purchasing Commercially-Available Geolocation Data is Unconstitutional

3 months ago

Many of the smartphone apps people use every day are collecting data on their users and, in order to make money, many of these apps sell that information. One of the customers for this data is the U.S. government, which regularly purchases commercially available geolocation data. This includes the Department of Defense, CBP, ICE, the IRS, and the Secret Service. But it violates the First and Fourth Amendments of the U.S. Constitution for the government to purchase commercially available location data it would otherwise have to get a warrant to acquire. 

A recent article in Motherboard reports that a Muslim prayer app (Muslim Pro), a Muslim dating app (Muslim Mingle), and many other popular apps have been selling geolocation data about their users to a company called X-Mode, which in turn provides this data to the U.S. military through defense contractors. 

Although Muslim Pro announced it would stop selling data to X-Mode, the awful truth remains: far too many companies that collect geolocation data can make a quick buck by selling that information, and the federal government is a regular buyer. 

This violates the First and Fourth Amendments. In the current location data marketplace, if your phone and apps know where you are, then the government can, too. But the Supreme Court has decided that our detailed location data is so revealing about our activities and associations that law enforcement must get a warrant in order to acquire it. Government purchase of location data also threatens to chill people’s willingness to participate in protests in public places, associate with whomever they want, or practice their religion. History and legal precedent teach us that when the government indiscriminately collects records of First Amendment activities, it can lead to retaliation or further surveillance.

Unfortunately, this problem goes well beyond X-Mode. Other data brokers that sell app-derived location data to the federal government include Anomaly Six, Locate X, and Venntel. In February 2020, the Wall Street Journal first revealed how the government bought geolocation data, originating from weather apps and mobile games, in order to fuel immigration enforcement. There is only limited oversight of this practice, in part because the government and its vendors have kept it secret. The government could potentially use this commercially available data to put people, including attorneys, activists, and journalists, under constant geolocation surveillance without a warrant. 

We need federal legislation preventing the government from purchasing location data. Just because app developers are selling our data does not mean we should let the government subvert warrant requirements and trample the Constitution by buying it. The landscape of corporations buying and selling consumer data is already terrifying enough from a privacy perspective, which is why we also need comprehensive federal consumer data privacy legislation. Congress should step up and introduce a bill preventing the government from buying geolocation data that would let law enforcement and intelligence agencies know our every move. Just because our favorite video poker app knows where we’ve been doesn’t mean Immigration and Customs Enforcement should know as well.

Matthew Guariglia

Sen. Ron Wyden Joins EFF on December 10 for Fireside Chat About the Future of Free Speech

3 months ago
Coauthor of Section 230, Wyden Will Address Calls to Repeal the Provision

San Francisco—Sen. Ron Wyden, a fierce advocate for the rights of technology users, will join EFF Legal Director Corynne McSherry on Thursday, December 10, for a livestream fireside chat about the fight to defend freedom of expression and innovation on the web.

Wyden is an original framer of Section 230, one of the legal pillars of the Internet. Section 230 protects online intermediaries—news websites, social media platforms, bloggers, online classifieds like craigslist, review sites like Yelp, and much more—from lawsuits seeking to hold them legally responsible for what people who post or comment on their sites say and do.

Section 230 protects the online speech of ordinary people everywhere. Users can forward an email without worrying whether its contents might be deemed defamatory under some state's law. People can comment on and review books. Job search services can allow employees to share their views on various employers. Women who share stories of sexual harassment as part of the #MeToo movement can do so with less fear that the services they rely on to tell those stories will cut them off to avoid legal threats. Universities can provide forums for students to share their work, which is especially important during the pandemic, all because of protections afforded by Section 230.

Anti-speech and anti-security bills introduced in Congress aimed at breaking Section 230, including the dangerous EARN IT Act, would give the government power to decide what speech should and should not be allowed on the web, in direct conflict with the free speech principles that underpin our democracy.

“Blaming Section 230 for the perceived ills of big social media companies is convenient, but misguided,” said McSherry. “We’re pleased to host Sen. Wyden for an important discussion about the origins and intent of Section 230, and why repealing or significantly weakening the provision will not only threaten free speech for all users but also impede the emergence of alternative platforms and services.”

In addition to championing the free speech rights of users, Wyden wrote the first bill to protect net neutrality, and has defended strong encryption. Wyden has called for strong data privacy protections, and last fall introduced the most comprehensive bill to protect Americans’ personal details online, the Mind Your Own Business Act.

The hour-long fireside chat and Q&A begins at 4 pm Pacific Time, and will be livestreamed on Twitch, Facebook Live, Twitter, and YouTube Live. More information about how to view the chat is available at

For more on the event:

For more on Section 230:

Contact: Corynne McSherry, Legal Director
Karen Gullo

Action for Egyptian Human Rights Defenders

3 months ago

The undersigned organisations strongly condemn the persecution of employees of the Egyptian Initiative for Personal Rights (EIPR) and Egyptian civil society by the Egyptian government. We urge the global community and their respective governments to do the same and join us in calling for the release of detained human rights defenders and a stop to the demonisation of civil society organisations and human rights defenders by government-owned or pro-government media.

Since November 15, Egyptian authorities have escalated their crackdown on human rights defenders and civil society organizations. On November 19, Gasser Abdel-Razek, Executive Director of the Egyptian Initiative for Personal Rights (EIPR)—one of the few remaining human rights organisations in Egypt—was arrested at his home in Cairo by security forces. One day prior, EIPR’s Criminal Justice Unit Director, Karim Ennarah, was arrested while on vacation in Dahab. The organization’s Administrative Manager, Mohamed Basheer, was also taken in the early morning hours from his home in Cairo on 15 November.

All three appeared in front of the Supreme State Security Prosecution where they were charged with joining a terrorist group, spreading false news, and misusing social media, and were remanded into custody and given 15 days of pre-trial detention.

Interrogations by the security services, and the subsequent prosecution of EIPR's leaders, focused on the organisation's activities, the reports it has issued, and its human rights advocacy—especially a meeting held in early November by EIPR and attended by a number of ambassadors and diplomats accredited to Egypt from several European countries, Canada, and the representative of the European Union.

The detention of EIPR staff means one thing: Egyptian authorities are continuing to commit human rights violations with full impunity. This crackdown comes amidst a number of other cases in which the prosecution and investigation judges have used pre-trial detention as a method of punishment. Egypt’s counterterrorism law was amended in 2015 under President Abdel-Fattah al-Sisi so that pre-trial detention can be extended for two years and, in terrorism cases, indefinitely. A number of other human rights defenders—including Mahienour el-Masry, Mohamed el-Baqer, Solafa Magdy, Alaa Abd El Fattah, Sanaa Seif, and Esraa Abdelfattah — are currently held in prolonged pre-trial detention. EIPR researcher Patrick George Zaki remains detained pending investigations by the Supreme State Security Prosecution (SSSP) over unfounded “terrorism”-related charges since his arrest in February 2020. Amnesty International has extensively documented how Egypt’s SSSP uses extended pre-trial detention to imprison opponents, critics, and human rights defenders over unfounded charges related to terrorism for months or even years without trial. 

In addition to these violations, Gasser Abdel-Razek told his lawyer that he received inhumane and degrading treatment in his cell that puts his health and safety in danger. He further elaborated that he was never allowed out of the cell, had only a metal bed to sleep on with neither mattress nor covers, save for a light blanket, was deprived of all his possessions and money, was given only two light pieces of summer garments, and was denied the right to use his own money to purchase food and essentials from the prison’s canteen. His head was shaved completely. 

The manner in which Egypt treats its members of civil society cannot continue, and we, an international coalition of human rights and civil society actors, denounce in the strongest of terms the arbitrary use of pre-trial detention as a form of punishment. The detention of EIPR staff is the latest example of how Egyptian authorities crack down on civil society with full impunity. It’s time to hold the Egyptian government accountable for its human rights abuses and crimes. Join us in calling for the immediate release of EIPR staff, and an end to the persecution of Egyptian civil society.


Access Now
Africa Freedom of Information Centre (AFIC)
Americans for Democracy & Human Rights in Bahrain (ADHRB)
Arabic Network for Human Rights Information (ANHRI)
Association of Caribbean Media Workers (ACM)
Association for Freedom of Thought and Expression (AFTE)
Association for Progressive Communications (APC)
Cairo Institute for Human Rights Studies (CIHRS)
Center for Democracy & Technology
Committee for Justice (CFJ)
Digital Africa Research Lab
Digital Rights Foundation
Egyptian Front for Human Rights
Electronic Frontier Foundation (EFF)
Elektronisk Forpost Norge (EFN) - for digital rights
Fight for the Future
Free Media Movement (FMM)
Fundación Andina para la Observación y el Estudio de Medios (Fundamedios)
The Freedom Initiative
Fundación Ciudadanía Inteligente
Globe International Center
Gulf Centre for Human Rights (GCHR)
Homo Digitalis 
Human Rights Watch
Hungarian Civil Liberties Union (HCLU)
Index on Censorship
Independent Journalism Center Moldova (IJC-Moldova)
International Press Centre (IPC) Lagos-Nigeria
International Press Institute (IPI)
Initiative for Freedom of Expression - Turkey (IFoX)
International Free Expression Project
Masaar - Technology and Law Community
Mediacentar Sarajevo
Media Foundation for West Africa (MFWA)
Media Institute of Southern Africa (MISA) - Zimbabwe
MENA Rights Group
Myanmar ICT for Development Organization (MIDO)
Open Observatory of Network Interference (OONI)
Pacific Islands News Association (PINA)
Pakistan Press Foundation (PPF)
PEN Canada
PEN Norway
Privacy International (PI)
Public Foundation for Protection of Freedom of Speech (Adil Soz)
R3D: Red en Defensa de los Derechos Digitales 
Reporters Sans Frontières (RSF)
Scholars at Risk (SAR)
Skyline International Foundation
Social Media Exchange (SMEX)
South East Europe Media Organisation (SEEMO)
Statewatch (UK)
Vigilance for Democracy and the Civic State

Jillian C. York

Podcast Episode: From Your Face to Their Database

3 months ago
Episode 005 of EFF’s How to Fix the Internet

Abi Hassen joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss the rise of facial recognition technology, how this increasingly powerful identification tool is ending up in the hands of law enforcement, and what that means for the future of public protest and the right to assemble and associate in public places.

In this episode you’ll learn about:

  • The Black Movement Law Project, which Abi co-founded, and how it has evolved over time to meet the needs of protesters;
  • Why the presumption that people don’t have any right to privacy in public spaces is challenged by increasingly powerful identification technologies;
  • Why we may need to think big when it comes to updating the U.S. law to protect privacy;
  • How face recognition technology can have a chilling effect on public participation, even when the technology isn’t accurate;
  • How face recognition technology is already leading to the wrongful arrest of innocent people, as seen in a recent case of a man in Detroit;
  • How gang laws and anti-terrorism laws have laid the foundation for legal tools that can now be deployed against political activists;
  • Understanding face recognition technology within the context of a range of powerful surveillance tools in the hands of law enforcement;
  • How we can start to fix the problems caused by facial recognition through increased transparency, community control, and hard limits on law enforcement use of face recognition technology;
  • Why Abi sees the further goal as moving beyond restricting or regulating specific technologies, toward a world where public protests are not so necessary, as part of reimagining the role of law enforcement.

Abi is a political philosophy student, attorney, technologist, and co-founder of the Black Movement Law Project, a legal support rapid response group that grew out of the uprisings in Ferguson, Baltimore, and elsewhere. He is also a partner (currently on leave) at O’Neill and Hassen LLP, a law practice focused on indigent criminal defense. Prior to his current positions, he was the Mass Defense Coordinator at the National Lawyers Guild. Abi has also worked as a political campaign manager and strategist, union organizer, and community organizer. He conducts trainings, speaks, and writes on topics of race, technology, (in)justice, and the law. Abi is particularly interested in exploring the dynamic nature of institutions, political movements, and their interactions from the perspective of complex systems theory. You can find Abi on Twitter at @AbiHassen, and his website is

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify, or your podcast player of choice. You can also find the Mp3 of this episode on the Internet Archive and embedded below. Privacy info. This embed will serve content from

If you have any feedback on this episode, please email

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.


Current State of Surveillance

European Regulation of Data and Privacy

State Use and Mis-Use of Surveillance

Flaws and Consequences of Surveillance

Protecting Oneself from Surveillance

Lawsuits Against Facial Recognition and Surveillance

Surveillance and Black-Led Movements

Activism Against Surveillance

Other Resources

Transcript of Episode 005: From Your Face to Their Database

Danny O’Brien:

Welcome to How to Fix the Internet with the Electronic Frontier Foundation, a podcast that explores some of the biggest problems we face online right now, problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of Internet lore.

Cindy Cohn:

Hi everyone. I'm Cindy Cohn, and I'm the Executive Director of the Electronic Frontier Foundation. And like a lot of us here, I'm a lawyer.

Danny O'Brien:

And I'm Danny O'Brien. I work at EFF too, but I'm not a lawyer.

Cindy Cohn:

Then what are you, Danny?

Danny O'Brien:

I’ve spent so long with lawyers, I've kind of forgotten what I am. It's a bit like if you're raised by wolves.

Cindy Cohn:

Well, this week, we're tackling facial recognition, which will tell us whether you've turned into a wolf, Danny. In the last few years, face recognition has gone from a high-tech party trick to a serious threat to civil liberties. Companies are already touting the ability to turn any photo into a name and identity based on pictures taken from private records and also the public Internet. Cities and police forces are being sold equipment that can identify and track citizens as they go about their business in real time. And then permanently record that information for later investigations or, as we've seen, misuse.

Danny O'Brien:

I think most people have yet to realize just how good facial recognition has gotten recently. I think it's reached the point where it's a perfectly reasonable thing to expect the software to do, that you can take a photograph of a demonstration or live video, and the facial recognition software will be able to pick out the faces from a crowd. All of the faces, or as many as it can, and then correlate those to a database. A database that could contain everybody who's put their faces up on the Internet in a photograph or even a profile picture. That's a reasonable thing to expect modern facial recognition software to do. And that's the pitch that's being given to law enforcement by commercial companies selling this technology, and at a pretty cheap price as well. This is getting to the point of being off-the-shelf software rather than an expensive service that maybe only the NSA can use or large companies can fund.

Cindy Cohn:

At the same time that facial recognition is getting really good in some ways, it's also still really bad in others, and quite dangerous. The results are often terribly biased. These systems fail to correctly identify nonwhite people, People of Color, far more often than they fail with white people. Facial recognition is often embedded in systems and structures and policies that are racist as well. AI and machine learning inferences can only guess about the future based upon the data you feed them about the past. So if the training data is slanted or racist, the guesses about the future will be, too. In addition, there's a growing body of research and a budding set of tools being developed to fool facial recognition. During COVID, we're seeing that masks are causing failures as well. We've already seen the first couple of false arrests based on bad uses of facial recognition, with more on the way. And it's only a matter of time before this spills into political protests.

Danny O'Brien:

This is a sort of paradox that we see a lot at EFF in emerging technologies. If the technology really worked as well as it's being hyped, it's maybe terrifying for civil liberties. But it's still bad, even when it doesn't live up to those promises because it fails in ways that the authorities refuse to acknowledge or mitigate against.

Cindy Cohn:

Yeah, so we're damned if it works, and we're damned if it doesn't.

Danny O'Brien:

Joining us today is Abi Hassen, co-founder of the Black Movement Law Project, who has been watching just how facial recognition can be misused to silence dissent and track legitimate protest. He's been a key figure in the campaign to place limits on the use of this anything-but-benign technology, where it's increasingly problematic on the streets during protests. Welcome, Abi.

Abi Hassen:

Thank you so much, Danny and Cindy, for having me.

Cindy Cohn:

For our purposes, of course, Abi is a lawyer as well. So how did you get involved in all of this? The Black Movement Law Project is near and dear to our hearts here at EFF because our colleague Nash was also one of the co-founders. But how did you get involved?

Abi Hassen:

So, I started my legal career through several kinds of circuitous routes around the law. First, as a labor organizer and community activist and community organizer, and kind of a mixed role, but heavily involved in labor. And then I ended up transitioning to a more protest-focused job where I got hired at the National Lawyers Guild and coincidentally, right, I got hired there about six months before Occupy Wall Street started. So I cut my teeth in the world of protest. Protest law and protest activity at a pretty opportune time for seeing some action, so to speak. And so, and then as that Occupy moment changed and Trayvon Martin, and then Ferguson, and then Baltimore started happening, I ended up going down to Baltimore to help with doing some legal support.

Abi Hassen:

I met some other folks, and we started Black Movement Law project out of that recognition that protest legal space needed some explicit Black leadership and to help try to develop that and help try to work also in places that didn't have existing legal infrastructure. Those mid-sized cities where we were seeing a lot of insurrection and protests. But they weren't like the Bay Area or Los Angeles or New York where there's long-established legal support communities, so that's how I got involved. And then, the surveillance and protest stuff just naturally came out of working with activists. Working with activists in this post-Snowden moment, and then working with activists... I mean, the dynamics, obviously even changed more during the Trump moment. But it just became clearer and clearer that digital security and digital self-defense became more and more on people's minds.

Cindy Cohn:

Can you give us just a thumbnail sketch of what Black Movement Law Project does?

Abi Hassen:

It’s been transitioning as the moments and the political moments have changed. And so, for those first few years, in 2015, 2016, into 2017, it was a lot of on-the-ground helping, really develop jail support and helping people doing basic protest training, which is, know your rights and how to set up your own legal support. How to help guide people through the system, and then volunteer management and stuff that goes along with that. And our model was really based on helping people locally figure out how to do it themselves and not being in charge of it and not try to run it.

Abi Hassen:

But I mean, obviously that went back and forth into lots of different places. And then, like I said before that, it became clearer and clearer that there was a need for digital security training. And we saw more and more police just overtly using social media. And there was all this stuff about Stingrays started coming out. And this is all again post-Snowden. So, it's like there was already a level of concern. And so, we actually partnered with EFF pretty early on in that time and started a series of digital security trainings across the country. And then, I ended up working with EFF also on training tools and stuff like that. And then, as the moment has changed yet again. And then, we got Trump. And then, COVID. I've been doing some remote trainings. I've been working on some research projects.

Abi Hassen:

I think that the BMLP as what it was in those early days, it's not functioning in that way. Because we're all doing other things primarily, and doing the BMLP stuff as a second job if you will. And so, we're trying to take the knowledge we have and turn it into tools, turn it into trainings. I'm actually studying now, I'm doing a degree in political theory. So, trying to bring that experience also into more theoretical spaces. So, it's kind of a mixed bag. But it's really still developing tools for social change, for political movements, out of our experience and collective knowledge.

Cindy Cohn:

I mean, I think that an organization that can move with the needs of the community is really a healthy one. I want to ask a little more personal question. What fuels you about this? Why are you passionate about this? Where does it come from inside you?

Abi Hassen:

I know that your question is not just about technology. So, just a little background. I did a degree in computer science, and that was my undergraduate education. In philosophy and computer science. And so, I've always been interested in these deeper questions of how the world works. And I think that learning about technology and learning about the interaction of technology and society, it's a helpful way to gain insight into how the world works and how politics works. But I've also always had a strong kind of, I don't know if it's like a moral intuition or what. But I feel like, what is the thing that I can do at this moment, that can help what I consider to be the problems in the world?

Abi Hassen:

And so, that's led me on this journey from labor to protest to politics. Just trying to figure out how these things work in one hand, and how to best intervene on the other. I think that might've, that was probably the crazy mistake that led me into law school like the rest of us. But then, it took me a long time to realize that the law is not what I thought it was. But it is also a venue for creating change. But maybe not in the way I thought it was before going to law school, if that makes sense.

Cindy Cohn:

I think that's a realization a lot of us make in law school. I mean, you're describing a journey that feels very much like mine, I have to say, in terms of just having this inner feeling like your job is to figure out where you can help and where your tools and skills will help the best. And that often does lead you on a, I wouldn't say circuitous, but it's a dynamic conversation that you're tending to have with your skills and the world, and what the world needs.

Danny O'Brien:

And as the non-lawyer, I think I feel like a sort of anthropologist here. It's actually amazing that passage between someone studying computer science, or being involved in the vague outline and intuitions about technology, and then, realizing that, that's an important perspective in law, too. Especially, in times of rapid technological change, where the technology changes how civil liberties need to be interpreted in the world.

Cindy Cohn:

How would you talk about the risks of face recognition being used by law enforcement during protests, or other public political activities?

Abi Hassen:

If we play out where the technology is going, and think about what capacities that could mean for law enforcement, especially coupled with an increasingly aggressive, and I think it's fair to say, anti-democratic, attitude in law enforcement generally. I mean, the implications for just plain chilling of speech, are tremendous. I just think about like... I just talk of it a little bit personally, like about my family. My father came here from Ethiopia, and a lot of my family is from there. Like for example, in the post 9/11 moments, just seeing family members tell their children, "No, don't take pictures of anything. We're not allowed to take photos, because we're Muslim." Right?

Abi Hassen:

Or like when Trump was elected, just seeing cousins and other relatives, children, teenagers, just terrified. They were born in this country. They have no reason legally, to be scared. So, if you think about, I mean, that's a specific community. Maybe that's a specific context. But that has real effects on people's willingness to participate. In what is considered a right of an American, which is to be an active political participant in society. And that's just a spectrum, maybe that's on one end of the spectrum.

Abi Hassen:

But at every level, that increased capacity of law enforcement to know exactly where you are and what you were doing at all moments in your life, coupled with a political system and legal institutions that are at times just overtly anti-immigrant, anti-democratic, anti-left. So, I think that it's one more element in a pretty scary trajectory for the kind of active public participation that I think we would want to see, in a world that is clearly not in a stable and healthy place.

Cindy Cohn:

Yeah. I think that's right. It becomes more important for people to be able to have a little zone of privacy around these activities. And so, one of the issues that I think we're struggling with around facial recognition technology and the law is that historically, the law and the courts have held that when you're out on the streets, you're in public and you don't have a right to privacy. And this is something that we're struggling with a little bit. And I wondered how you think about that, and how we might need to push the law a bit, or address that problem in protecting people against face recognition.

Abi Hassen:

Historically, our constitution has been amended basically every generation. I think the last time was sometime in the eighties probably, is that right? But it's been quite a long time since anything like that has happened. And there's a lot of reasons for that, I don't want to just throw that out there lightly. But if we think about the invention of the Internet and its ramifications on society and think that we've done nothing at all… We've done nothing commensurate legally with the massive social change and economic change, frankly, that that creation has wrought. And so I think we need to be thinking big about this, right?

Abi Hassen:

Like I think that the European counterpart, the GDPR, is something that is at least putting a stake in at a slightly higher level. But I think we need to think big because, something you mentioned just a moment ago, the zone of privacy, I think is something that we really should engage with and take seriously. Right. Because it's truly, if we think about what the contours of privacy were, even at the time of the Constitution or whatever, it's different in different arenas, right? Like maybe your neighbor was more likely to spy on you walking out or whatever, but certainly vis-a-vis the state, the zone of privacy has massively shrunk.

Danny O'Brien:

I find that take really interesting, because you mentioned the GDPR, and I think one of the things that makes the GDPR so powerful comes from almost an accident of history in the European Union. Because the European Union in some ways is a young country too, although it's made up of far older nations. And just by an accident of timing, the main constitutional document, the Treaty of Lisbon, was crafted in 2007. And because of that timing, it managed to embed into the fabric of the European Union a relatively new right: the right of data protection, which is seen as separate from a privacy right. So I do think there's an importance to creating something that has the feeling of a constitutional-level decision, but is about the modern world.

Cindy Cohn:

The other thing I think about with regard to this question about, “How do you get privacy in public?”, is that the specific context we're talking about here is kind of core First Amendment context, right? The right to free speech, the right to petition government for grievances. And to me, face recognition applied to people when they're out engaging in public protests ought to be protected by the very strong framework of the First Amendment, rather than the relatively weak one of the Fourth. We still have a ways to go -- the courts haven't bought that argument. I keep trying to tee it up so we can raise it. But I think there are several ways to get at this. One of them is thinking big about where we want to go in the future. And the other is maybe thinking a little harder about the tools that are already in our Constitution.

Abi Hassen:

Yeah. I'm right with you on the first statement. I don't know the current status of that argument, but one of the things that I feel is on the table is that First Amendment assembly is just very underdeveloped, right? Like, what does assembly actually mean today?

Cindy Cohn:

Yeah, I agree. Assembly, association -- all of the things we're getting at when we think about why people come together in the streets to demand change. There are actually words in the Constitution that reference that, but we haven't given them the kind of life that I think we could.

Danny O'Brien:

And one thing that we were talking about earlier, and I think you brought out, is this: face recognition doesn't need to work to have a chilling effect on protest. And I think it's often very hard because people know that this invisible technology is around, and they're worried about what they can and can't do. And that actually prevents them from protesting, unless they're absolutely desperate. When you're doing security trainings and explaining the capabilities of law enforcement, how do you balance that line between explaining what's possible and not scaring people so much that they don’t actually exercise their constitutional rights?

Abi Hassen:

That's always a struggle. What I generally do is encourage people to, one, put all their cards on the table in terms of thinking about their risks and being very aware of them. But I also think it's important to understand the institutional prerogatives of law enforcement and take that into account. Because especially after Snowden, everyone was convinced that the NSA was spying on them. And you have to have conversations where you say, look, the NSA has X number of analysts. They're not looking at you. Or, these are who their priorities are, right?

Abi Hassen:

Like the people who are getting their laptops taken… And obviously that “you” is contextual, because maybe they are looking at you, right? So that's why it's important to understand your positionality, and the ideology and priorities of law enforcement. But the scary thing is, the more it becomes turnkey, the more it becomes these AI systems that are just spitting out suspects to law enforcement, the more integrated those systems become, the less actual work law enforcement has to do. Which means the more everyone actually is at risk, regardless of the institutional capacities.

Cindy Cohn:

Another issue that EFF spends a lot of time talking about is this: Who's selling this to government? Who's trying to make this as turnkey as possible? We're seeing increasing public-private partnerships around surveillance generally, and face recognition I just don't think is very far behind. Right now I'm thinking specifically of Amazon Ring, which partners with law enforcement to promote its cameras to homeowners, and then suggests that people share their feeds with law enforcement. Or the case that EFF is litigating now, the Williams case, where a rich guy in San Francisco bought a bunch of cameras and gave them to local business districts, and then said, no, the cops won't have access to this. And then we discovered that during the Black Lives Matter protests earlier this year, and even the pride parade, the cops did get access in real time to those cameras. EFF and the ACLU are suing over that one. But how do you talk to likely impacted communities about these kinds of things?

Abi Hassen:

These companies are profit-seeking enterprises, and in a depression, with advertising revenue falling, where are they going to go? It seems pretty clear that law enforcement is one answer, because we're not cutting a lot of law enforcement budgets, and government contracting is always a good source of income. So how do we talk about those things? I think you have to have those conversations. The bigger the threat, the bigger the coalition needs to be to counter it. In some sense, it's an opportunity to say, "These aren't siloed conversations." A movement like Black Lives Matter -- something that's viewed as anti-police, or as trying to counter police violence -- can't be separated from the political economy of surveillance.

Abi Hassen:

These actually are completely overlapping and intersecting problems. The threat is merging, and the response has to be merging also. It's an opportunity to say, look, I don't have the answer. We can't picket Amazon to any effect right now, but we can support the workers at Amazon who are trying to organize a union. We can support our congressperson who is trying to do a report on monopolistic practices in Silicon Valley. We can try to build a kind of political consciousness about anti-monopoly. We can try to create coalitions where tech people who are focused on the technical threats or the civil liberties threats are learning from and cross-pollinating with people who are working on other issues, to expand our coalitions.

Danny O'Brien:

Yeah, and I think you have to spell out these links, too, and that's one of the big challenges. Because for a long time, there was this sort of strange and arbitrary division between people worrying about government surveillance and people worrying about corporate surveillance, and of course the last few years have shown that those are the same problem. And one of the challenges we face, and I think this is true in a lot of spaces, is that first of all, geeks who are our base love new technology, so they're the first adopters of things like Ring and surveillance cameras, and also, people worried about their safety and who don't trust law enforcement, or aren't being served by law enforcement, also invest in these surveillance programs.

Danny O'Brien:

We have this thing where we're actually talking to people who should be the most knowledgeable, who should be the people most concerned about these alliances. And they're being drawn into being complicit with their own neighborhood surveillance. And I have to say I was sort of worried about this, but was really impressed by how quickly everyone gets it once you paint that picture. I think we saw that in San Francisco with the public safety surveillance program, and I think the activism around Ring is also going that way.

Abi Hassen:

Yeah, I mean, part of why I've taken this last career trajectory, doing political philosophy or political theory, is that I found myself teaching technology to activists, teaching politics to technologists, and teaching both to lawyers. And I feel like that kind of integration is what we need to do. Like, never do a technical demonstration without some kind of hook into the broader political frameworks. Use all of those things as opportunities to do more and to expand.

Cindy Cohn:

I really like that. What I hear you saying is that in some ways it is all connected: standing up for and helping the folks inside these companies who want to organize -- whether that's bringing in a union or otherwise having a bigger voice in what's going on -- is part of how we help protect protesters from these technologies. Because if we empower folks to have a bigger voice, they can begin to have a bigger say in what kinds of tools are being developed and who they're being sold to. The thing that is especially troubling to me about both Ring and our San Francisco cases is that the technology is the bleeding edge of the surveillance.

Cindy Cohn:

Our Williams case... The guy just handed out the cameras. That wasn't actually a business proposition. And with Ring, again, the cameras are not that expensive, and they're making it really easy for people to get them. I'm a little worried that cool technology is the foot in the door to a really awful future. And that's not the first time we've seen it, but I really feel that right now.

Danny O'Brien:

Is this primarily a theoretical threat at the moment, that people are worried about facial recognition being misused?

Cindy Cohn:

We’ve already seen at least one example of face recognition technology being used to arrest the wrong person. Abi, you want to talk a little bit about that case?

Abi Hassen:

Yeah, you're referring to Robert Williams' case, out of Michigan. It's one of those things that I think paints a potential future that is quite bleak. It's a case where a Black man in Detroit was arrested because the computer got it wrong. It was a facial recognition algorithm. The ACLU is suing the police there on behalf of Robert, and the quote from the cops the next day was, “the computer must have got it wrong.”

Abi Hassen:

And I think, one, it's telling that he's from Detroit and is Black. And the story basically is: the computer got it wrong, the police did absolutely no police work to verify anything, they just went up and arrested him because a computer spit out his name. In some sense, if we don't fight these things, if we don't change these things, this is the kind of future we're living in. Yes, we already have cases where people are put in prison or falsely accused for all kinds of reasons: eyewitness testimony, DNA, whatever. What's truly bleak is the concept of the police now just getting a name from a computer and arresting someone. And that's the beginning and end of it.

Cindy Cohn:

The fact that we do this with other things really shouldn't be an excuse for just doing more of it. Some of the argument I hear is, well, humans misidentify people too. And yeah, but the answer to that is to make the police do more work than that, not to give police yet another way to avoid doing the full work.

Cindy Cohn:

And this story is especially scary because they showed up and handcuffed him in front of his two little daughters, who were two and five years old. And his wife had to go to his workplace and say, "Look, please don't fire him. He's been wrongly arrested."

Cindy Cohn:

And now his DNA and all the other stuff are going to go into the databases. I mean, some of this is also, the machinery once you get arrested is really, really damaging to people in the long run. And so we need to be a lot more careful on the front end, not less.

Danny O'Brien:

And I think it also points to people's treatment of new technology: the idea that a computer can be faultless, and that therefore you can just obey what it suggests, is something that technologists, myself included, all of us, don't do enough to disabuse people of. These systems aren't perfect in the way that you would want them to be perfect. They're very good in certain directions, but those directions don't necessarily point in the way of justice.

Abi Hassen:

Someone like Trump can come along and say Antifa, which is just a loose concept. But its looseness is exactly its purpose, because it allows law enforcement, or it allows a section within law enforcement, to enact a political agenda.

Abi Hassen:

And it's a moment. Our law has built up that capacity through terrorism and gang laws, primarily aimed at minority communities to the point that now they can, with Antifa, it can be done to a political community that is no longer a minority racially. So, that's a capacity that has been built and it's only augmented by the technical capacity of creating those networks and spitting out a list.

Cindy Cohn:

Yeah, and so much surveillance works like this. You create an other, a bad guy. After 9/11, it was Muslims. It's no surprise to me that Black Lives Matter activists have been suggested as being on the terrorism watch list for a long time. And now of course, you're right, it's Antifa, which is even less of a thing in terms of a cohesive movement.

Cindy Cohn:

But you create this category of people who can be subject to intense surveillance, intense tracking, intense watching. And then of course, the political pressures, the political influences, are going to start using that category for whoever they don't like, or to shore up their base by creating a hated other.

Cindy Cohn:

So, surveillance is just one of the tools that gets used in one of these systems where we stop thinking of everyone as having equal rights, but we start creating classes of people who, by virtue of being a member of something, or alleged member of something, just don't have rights. And I think you're totally right that this grows out of the way that law enforcement has achieved the ability to treat gangs as if they're not citizens.

Danny O'Brien:

And what, I think, ties this together is that what we're seeing is a process of law that isn't about what you've done, but about who you are and who you associate with. And that's an error. It's an obvious error in justice and due process, but it's the sort of error that can be really exploited if you're selling facial recognition systems and algorithms. Because those algorithms are designed to say, "Here is a cluster of people, and here is the evidence that connects them together."

Danny O'Brien:

And we shouldn't be basing judicial decisions on the clustering of people, because there's a right of association. And these clustering algorithms aren't designed to deal with the subtleties of mapping out those associations.

Cindy Cohn:

This really reminds me of the work that we've done around Cisco and the Great Firewall in China. Because Cisco sold a specific module to, at least the allegations are… Well, that's not even allegations, we have evidence. We have PowerPoint slides that show that Cisco sold a module to the Chinese government that they touted as being really great at being able to identify who's a member of the Falun Gong, based on who they talk to, where they go, and what they're looking at online. Because of course, the Great Firewall watches all of these things.

Cindy Cohn:

And essentially, what adding face recognition into the government's arsenal here does is, it helps make that kind of a system much more powerful and much more ready to be deployed against people who are engaging in political protests. Just like it could be deployed against identifying, in the context of China, a religious group. So it's very dangerous.

Abi Hassen:

Conversely, right, if it's working, it identifies them. And even if it's not working, it chills them.

Cindy Cohn:

Yeah. And that gets to the whole other side of things that we don't have time to talk about. What does transparency look like? What does accountability look like in these systems? We'd like to ban face recognition, but as you mentioned before we even got on this call, the face recognition’s just one of a suite of tools that law enforcement now has at their disposal that is pretty dangerous.

Cindy Cohn:

Making sure that we can have access to those tools, that we can unleash our techs to figure out how it actually works -- what is it looking at, and what was it trained on? All of those kinds of things are an essential part of how we need to think about law enforcement technologies before they get adopted, not trying to paste it on afterwards.

Danny O'Brien:

I mean, how do we fix this? It's the easiest question to ask, but what is the way forward here? Because I think what you've described is a long-term problem that's embedded in the very direction that law and law enforcement have been, unfortunately, stumbling towards for decades, augmented by a technology that is rapidly improving. How do we navigate this? What would be on your list of things that you would want to achieve to stop such bad consequences from expanding?

Abi Hassen:

We have to keep doing what we're doing at the local level, at the state level, at the federal level, at the international level. We have to keep fighting things as they come up. We have to hold things at bay while we can. But in some sense, I worry that that's not enough, right? Because it's not just facial recognition. It's institutions. It's the structure of society in a lot of ways. And facial recognition is just another tool in that existing structure. And so, I think that we need to fight the fights that we have now so as to be able to fight the bigger fights later.

Danny O'Brien:

So this is a question of sort of holding things back. And this is the benefit of putting in transparency, putting in actual bans of facial recognition. This is to sort of hold this anti-democratic technology at bay so we can fix the bigger problems that allow it to be used.

Abi Hassen:

We need to be able to build ourselves space to build, right? I think that so much of the dynamic is just fighting the repression that keeps us from being able to figure out how to get out of it. So, I do think we should ban a lot of these things. A lot of law enforcement tactics and tools and ideologies need to go -- I mean, we can't just ban them, because they're institutions. But if they're asking for a new toy, we can try to ban that before it becomes completely integrated into the institution.

Danny O'Brien:

Thinking about what can be done, not just on a personal level but as part of a community of technologists, I think so much of this technological adoption is far away from any kind of evidence-based adoption, right? That it is literally sold on the glossy brochures and snake oil of surveillance manufacturers. I mean, we have someone on EFF staff who goes to these conferences where these are sold and comes back with catalogs, which is half spooky and scary for us to look through, and half us just going, "They're just lying, right? It can't do what they're saying it could do."

Danny O'Brien:

So, I think that one of the useful tasks that technologists can take on this is, calling out the bullshit when they see it. And you can do that in your own community. You can do that in your own neighborhood, because, often, surveillance is sold as a good, it's not hidden. It's like Ring, it's sold to a community as something that will improve their safety. And when it's out in the open like that, I think that there's some real benefits in challenging its capabilities and showing that its results aren't what people would ever want.

Abi Hassen:

I'm so glad you said that, because I think that's really key at the social level. Think about the history here -- you guys are probably familiar with, I think it was the 2009 National Academy of Sciences report on forensic science, looking at things like bite mark evidence and burn pattern experts. It's full-on phrenology for law enforcement, right? And engaging... Like, COVID canceled it, but I was going to do a panel a few months ago with a biologist, trying to build a connection, because this is someone who actually knows how DNA works and actually knows population genetics. Bringing in some of those kinds of crosscurrents -- we aren't engaging the real scientific community enough, I don't think. And maybe they're not engaging us enough, as well. That's part of what I was saying before about de-legitimizing. We have to delegitimize bad science.

Abi Hassen:

We can't give that up, but to do that, we need to engage with the real side. We need to engage with the technologists who know what they're talking about, who have an inkling towards this kind of understanding, to show why these things are illegitimate, or why they're not doing what they say they're doing. That's something I'm very interested in -- figuring out how to build some of those bridges. Because these investigatory projects of law enforcement are not actually investigations in the scientific sense. They are largely just a way of justifying something you've already decided; they're quite the opposite. As a defense lawyer, I can say that, but it doesn't have the weight that a scientist might have, for example.

Cindy Cohn:

Let's also talk a little more about what it would look like if we got it right. What I'm hearing is: if you're unhappy with a decision the government makes, you can go out and protest, and that doesn't go into your permanent record. What other things do you think about? Let's assume a world in which we get all of this right. What does it look like for somebody like you, or somebody who you're advising, Abi?

Abi Hassen:

Well, that's a very hard question, but in the realm of protest, I would want to see a world where people are much freer to organize together, to build new institutions without fear. To create new ways of working together. As we talked about before, this kind of freedom of association -- I would like to see that be real. I'd like to see that be real beyond just marching in the street, real in a way where we are organizing in the world to make our lives actually materially better. Whether we're using political mechanisms, legal mechanisms, protest mechanisms -- we're doing all kinds of things. I don't want to say, "Oh, we're just going to solve all the problems." There's no end state to history. But I don't think that the police should have a place in that. I don't think that policing as we know it is a thing that we should have, frankly. But it especially shouldn't be the institution that stops people from making their lives better.

Cindy Cohn:

I wonder, what would it be like if when we went out to go and protest, we had a social service agency, the protest protection society that came out and made sure that things like putting up the barriers and making sure that traffic gets redirected, maybe those things, and making sure that if somebody is misbehaving, they're ejected. If we had something more like bouncers and social service people who attend the protests, rather than people who are engaged in trying to stop, and create accountability for, crime, because protesting isn't criminal.

Abi Hassen:

Yeah. I guess what I'm saying is, we can get to the criminal part later, because that's a whole other thing -- maybe we don't have time, frankly. But in some instances, I don't want to be protesting. I want to be organizing, I want to be building something.

Cindy Cohn:

I love it. Your future is even better than mine. Go on, go on.

Abi Hassen:

Well, what I'm saying is, a hundred years ago, the so-called protests were sit-down strikes: shutting down the economy to stop the robber barons from paying you a penny while you lived in a company town or whatever. Our country has a pretty brutal labor history that's often forgotten, or not even taught. And that wasn't protesting, that was organizing. That was saying, "It's not right, and we're going to take what's ours. We're going to build an institution to have power to make our lives better." And the police and the Pinkertons and what have you were the pioneers of today's surveillance technology.

Abi Hassen:

The Pinkertons literally created the first database, which was the rogues' gallery, the first facial recognition system. And it was used primarily, or at least largely, to fight union organizing. I guess what I want to say is, what I want to see is us, we need to, one, fight these things so that we can have even the groundwork necessary to build something better, because right now, they're nipping it in the bud. So that's as close to... I don't want to paint a utopia because I don't know if that exists, but I do want to say that we need to be able to change things and build new things, and if we keep going on this path, they're not going to let us build anything.

Cindy Cohn:

I really love this, because I think it puts the street protests we're seeing in their place: they're one of the few remaining tools of a society that is desperately headed off the rails and that has shut down all the ways in which people can make it better short of that. By the time you're protesting in the street, things have gone terribly, terribly wrong. And what I hear you saying is, let's get to a future where we don't even have to get to that place, because we've actually set the balance right at a much higher level, with a real ability to organize and make change that doesn't require us to take to the streets.

Danny O'Brien:

I often find bans to be a clumsy way of dealing with technological development, partly because it's not always clear what you should be banning ahead of time, and there are ways of implementing the same thing that evade the ban, but also because that technology is always going to be around. In these futures, what do you see the role of technologies like facial recognition being? And how do you think they should be controlled in a democratic society?

Abi Hassen:

Honestly, I feel like you've answered your question, just with the question, because you said what we need is a democratic society... I don't have a better answer than democracy, having an actual say in how our society is constructed. Part of the reason we're protesting on the street is because we don't have a say in actually changing things. We're told, "Oh, go and vote every four years." Sure, we should do that, but that's not enough. And so I think that there's a lot of frustration and people are saying, "Hey, things are getting worse, what can I even do but just yell in the street?"

Abi Hassen:

And so what we need is democracy, what we need is strong institutions that are democratically controlled by the people who are part of them that have power. And right now, when we're talking about the use of these technologies, when we're talking about anything... I guess the simplest way to say it is we need to reestablish or establish a commons that is the space of the people online, or the space of the people in the world as it relates to digital technology, and that needs to be ours, and we need to have control of it. It shouldn't be just that Amazon and the cops control how we live in public space. We should have control of that. And so yeah, the answer I would say is yes, democracy.

Cindy Cohn:

I think of transparency: making sure that law enforcement has to tell us when they're looking at this, that we have a say in whether they get to use it, and that we get to see what's really going on. Transparency reports after the fact are not really what we're talking about here; we're talking about pre-purchase transparency. And that has to involve both law enforcement and the companies who are providing the information, and then community control and input at every level, and then accountability in the courts.

Cindy Cohn:

So when things go wrong and when the upfront transparency and control aren't working, you have the after-the-fact ability to create accountability and set things right. And I think of things like private rights of action and strengthening the Fourth Amendment and the First Amendment right to be able to declare something has been improper or throw out evidence. I'm all strategies, so I think of those as the things we're doing that are going to try to set the table so that we can have democratic control.

Danny O'Brien:

Abi, thank you very much. It's great talking to you.

Cindy Cohn:

Thanks, this has been great fun.

Abi Hassen:

Yeah, it's been really fun. Thank you so much.

Danny O'Brien:

Wow, that was one of those conversations that I didn't want to end, because there was so much to unpack. One of the things that immediately stuck out for me, as a bit of an Internet utopian, an optimist, is this very dystopian idea of face recognition as almost the opposite of what we want from technology. I always saw the Internet as a way to help organize people and create new institutions and ways of cooperating together. And Abi makes this point that facial recognition is an anti-organization technology. It actually dissuades people from collectively acting.

Cindy Cohn:

Yeah, it's a really good point. I also liked how he ties the whole thing together with the broader movement for social justice and brings in his labor background, and really is talking about protests, not because we care about protests, but because we care about the bigger work of trying to make society better. So he really forced us to broaden the conversation from just the narrow topic of face surveillance at protests to the bigger “why we care about this.”

Danny O'Brien:

Yeah. And I think Abi pulled me out of the shortsighted way I view facial recognition as being particularly pertinent to protest, because he made this point that protest itself is a democratic failure mode -- that no one immediately thinks of protest as the first step they should take. Going out in the streets is only what you do when other ways of speaking out, of being able to change your environment, have failed. So if we're going to think about civil liberties more broadly, and digital civil liberties, we have to think about what these technologies are doing to everything else.

Cindy Cohn:

Yeah. The other thing I really like about Abi is that he comes at this as a criminal defense lawyer. And then, Danny, you made this really great point that ultimately, if you're thinking about association and assembly, these prosecutions are about who you are and who you associate with rather than what you did, and that's particularly dangerous. And it reminds me of why we care about metadata, right? And some of the fights that we have, not so much against the local cops as against the National Security Agency, but against local cops as well. That's what metadata does. It may not give the content of what you're doing, but it says who you are, who you're talking to, and who you associate and assemble with.

Danny O'Brien:

It lets police or the intelligence services construct this case from that association, when association should never be a crime in itself. The other thing Abi reminded me about was when he talked about evidence-based reform, and the need for a coalition of academics and computer scientists who can speak out about when these sort of facial recognition systems are just snake oil and what their failings are. And that really reminded me of the successful coalition we had on the war on encryption in the '90s and early 2000s, where trying to break encryption was presented in the same way. And also the danger of all our communications being encrypted was played up as this huge threat. And what we were able to do collectively is bring in the computer scientists and the academics to highlight where the hype was, and what was actually practical, and what could actually achieve change.

Danny O'Brien:

And the more I think about it, the more I think about how powerful that is in all kinds of police reform. That ultimately, I think when people are talking about changing the police or moving away from policing, what they want is something that's more effective in achieving what people want from law and order. And I can well believe that if we really started applying that reasonably to this new digital space, the institution could become unrecognizable from what we have now.

Cindy Cohn:

Yeah. I think that we have a lot of police work that really, we see the police really doing the same things over and over and over again, without the intervention of, is this actually working? Is it serving the community, or is it not serving the community? And what would serve the community better? Because nobody's arguing for an unsafe community, but I think there's a lot that could be gained from applying the same kind of scientific study to police tactics that, I think you're right, that we tried to bring so much to the encryption debate.

Cindy Cohn:

I think there are reasons to be hopeful about this. I think that we have pretty quickly been able to convince a wide swath of Americans that face recognition in the hands of police during protests is really a problem. I think we're helped a lot by that because it's pretty creepy and it's a pretty easy lift. But we're already seeing places across the country beginning to do bans on it and buying us the kind of time that Abi talked about to be able to sort out what we want to do and bring in all the other kinds of fixes that we talked about, like transparency and community control and these kinds of things. So, it would have been better if we could have gotten in on the ground floor before police departments had this technology, but as we know from the horrible story out of Detroit, that the police are already using some of this kind of stuff. But it's not as late as it's been with some other technologies, and so there's room for us to begin to really fix this.

Danny O'Brien:

It's always going to be difficult to stop an attractive piece of technology like this from falling into the hands of law enforcement, but we've made a good start. And with folks like Abi fighting for this, I think there's a real chance that we can fix this. Well, thanks for listening and see you next time.

Danny O’Brien:

Thanks again for joining us. If you'd like to support the Electronic Frontier Foundation, here are three things you can do today. One, you can hit subscribe in your podcast player of choice, and if you have time, please leave a review; it helps more people find us. Two, please share on social media and with your friends and family. Three, please visit, where you will find more episodes, learn about these issues, donate and become a member, and lots more.

Danny O’Brien:

Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat or an EFF hoodie, or even a camera cover for your laptop. Thanks once again for joining us. If you have any feedback on this episode, please email We do read every email. This podcast is produced by the Electronic Frontier Foundation with help from Stuga Studios. Music by Nat Keefe of BeatMower.

This work is licensed under a Creative Commons Attribution 4.0 International License


rainey Reitman

Double the Impact of Every Donation

3 months ago

Power Up Your Donation Week has begun! EFF is calling on tech users everywhere to give today and instantly double their impact on Internet freedom when the world needs it most.

Power Up

Donate today and get an automatic 2X match!

For one week starting on #GivingTuesday, anyone who donates to EFF will have their gift automatically matched. That's all thanks to a group of passionate supporters who have offered to match every EFF membership and additional donation up to $319,600! It means every dollar you give becomes two dollars for EFF.

That’s twice the support for protecting your devices from unlawful searches, fighting censorship, ending police face surveillance, and defending the use of strong encryption around the world. In the midst of the pandemic, we’re experiencing the Internet’s unparalleled power to help us access vital information and stay connected to friends and loved ones in ways that were never possible before. We rely on digital connections more than ever in history, and that makes EFF’s mission to preserve our digital privacy, security, and free expression rights more urgent than ever. Give today and power up the movement for a better digital future.

Want to supercharge your impact today? Invite your friends and colleagues to get involved! Here’s some sample language you can share:

We’re celebrating 30 years of defending privacy and free expression online! Join me in supporting @EFF this week and your donation gets an automatic 2X match.

Twitter | Facebook | Email

No organization has worked on the frontlines of the digital rights movement for as long or as fiercely as EFF, and it has only been possible with your help. In one of the world’s toughest years on record, we’re deeply grateful to have you on our side.

Power Up

Double your impact (for free!)

Aaron Jue

EFF Condemns Egypt's Latest Crackdown

3 months 1 week ago

We are quickly approaching the tenth anniversary of the Egyptian revolution, a powerfully hopeful time in history when—despite all odds—Egyptians rose up against an entrenched dictatorship and shook it from power, with the assistance of new technologies. Though the role of social media has been hotly debated and often overplayed, technology most certainly played a role, and Egyptian activists demonstrated the potential of social media for organizing and disseminating key information globally.

2011 was a hopeful time, but hope quickly gave way to repression—repression that has increased significantly this year, especially in recent months as the Egyptian government, under President Abdel Fattah Al-Sisi, has systematically persecuted human rights defenders and other members of civil society. In the hands of the state, technology was and still is used to censor and surveil citizens.

In 2013, Sisi’s government passed a law criminalizing unlicensed street demonstrations; that law has since been frequently used to criminalize online speech by activists. Two years later, the government adopted a sweeping counterterrorism law that has since been updated to allow for even greater repression. The new provisions of the law were criticized in April by the UN Special Rapporteur on human rights and counter-terrorism, Fionnuala D. Ní Aoláin, who stated that they would “profoundly impinge on a range of fundamental human rights.”

But it is the government’s enactment of Law 180 of 2018 Regulating the Press and Media that has had perhaps the most widespread recent impact on free expression online. The law stipulates that press institutions, media outlets, and news websites must not broadcast or publish any information that violates Constitutional principles, and it grants authorities the power to ban or suspend the distribution or operations of any publication, media outlet, or even social media account (with more than 5,000 followers) deemed to threaten national security, disturb the public peace, or promote discrimination, violence, racism, hatred, or intolerance. Additionally, Law No. 175 of 2018 on Anti-Cybercrime grants authorities the power to block or suspend websites deemed threatening to national security or the national economy.

A new escalation

In the past two weeks, Egyptian authorities have escalated their crackdown on human rights defenders and civil society organizations. On November 15, Mohammed Basheer, a staffer at the Egyptian Initiative for Personal Rights (EIPR) was arrested at his Cairo home in the early morning hours. Three days later, the organization’s criminal justice unit director, Karim Ennarah, was arrested while on vacation in Dahab. Most recently, Executive Director Gasser Abdel-Razek was arrested at his home by security forces.

All three appeared in front of the Supreme State Security Prosecution and were charged with “joining a terrorist group,” “spreading false news,” and “misusing social media.” They were remanded into custody and sent to fifteen days of pre-trial detention—a tactic commonly used by the Egyptian state as a form of punishment.

In the same week, Egyptian authorities placed 30 individuals on a terrorism watch list, accusing them of joining the Muslim Brotherhood. Among them is blogger, technologist, activist, and friend of EFF, Alaa Abd El Fattah.

A blogger and free software developer, Alaa has the distinction of having been detained under every head of state during his lifetime. In March 2019, he was released after serving a five-year sentence for his role in the peaceful demonstrations of 2011. As part of his parole, he was meant to spend every night at a police station for five years.

But in September of last year, he was re-arrested over allegations of publishing false news and inciting people to protest. He has been held without trial ever since, and as of this week is marked as a terrorist by the Egyptian state.

This designation lays bare the dangers of entrusting individual states with the ability to define “terrorism” for the global internet. While Egypt has used this designation to attack human rights defenders, the country is not alone in politicizing the definition. And at a time when governments are banding together to “eliminate terrorist and extremist content online” through efforts like the Christchurch Call (we are a member of its advisory network), it is imperative that social media companies, civil society, and states alike exercise great care in defining what qualifies as “terrorism.” We must not simply trust individual governments’ definitions.

A call for solidarity

EFF condemns the recent actions by the Egyptian government and stands in solidarity with our colleagues at EIPR and the many activists and human rights defenders imprisoned by the Sisi government. And we urge other governments and the incoming Biden administration to stand against repression and hold Egypt’s government accountable for their actions.

As the great Martin Luther King Jr. once wrote: “Injustice anywhere is a threat to justice everywhere.”


Jillian C. York

EFF Urges Federal Appeals Court to Rehear Case Involving Unconstitutional Baltimore Aerial Surveillance Program

3 months 1 week ago

Last week, EFF urged the full U.S. Court of Appeals for the Fourth Circuit to reconsider a split three-judge panel’s ruling that the Baltimore Police Department’s aerial surveillance of the city’s more than half a million residents is constitutional. In a friend-of-the-court brief—which was joined by the Brennan Center for Justice, Electronic Privacy Information Center, FreedomWorks, National Association of Criminal Defense Lawyers, and the Rutherford Institute—we argue that the panel decision is both wrong on the law and failed to appreciate the disproportionate burden of government surveillance borne by communities of color.

In May, the Baltimore Police Department launched its Aerial Investigation Research (AIR) Pilot Program. For six months, three surveillance aircraft operated by a private company called Persistent Surveillance Systems flew over Baltimore—covering about 90 percent of the city—for 12 hours every day. The planes produced images that even at a resolution of “one pixel per person” allowed the police to track individuals’ movements over multi-day periods, especially when combined with the police’s networks of more than 800 ground-based surveillance cameras and automated license plate readers.

Before the AIR program went into effect, the ACLU sued to block it on behalf of a grassroots organization called Leaders of a Beautiful Struggle that advocates for the interests of Black people in Baltimore and two prominent community activists. But the district court allowed the AIR program to go forward, and a Fourth Circuit panel affirmed that decision by a 2-1 vote. The Fourth Circuit’s Chief Judge Roger Gregory issued a powerful dissent that criticized the court for “invok[ing] the tragedies imparted by gun violence in Baltimore to justify its grant of sweeping surveillance powers to the [Baltimore Police Department].”

Our brief urges the full Fourth Circuit to reconsider two crucial legal errors in the panel’s opinion. First, the panel failed to recognize that the Supreme Court’s recent decision in Carpenter applies to the AIR program. In Carpenter, the Court affirmed that the Fourth Amendment protects records of an individual’s location over time—precisely what the AIR program offers the police. And second, we showed why the panel’s discussion of the AIR program in the context of Supreme Court precedents involving searches for non-law enforcement objectives was severely misguided: the AIR program’s only purpose is to help the police investigate crimes. 

We also put the AIR program in the context of police surveillance's harm to communities of color. As we write in our brief:

Police experiment with, and eventually deploy, intrusive technologies like the AIR program in cities with large communities of color. Before Baltimore, PSS operated surveillance flights above Compton, California; Philadelphia, Pennsylvania; and Dayton, Ohio. The company also seeks to conduct surveillance of St. Louis, Missouri. Further, governments routinely deploy aerial surveillance technologies against individuals participating in racial justice movements, like those protesting against the police killings of George Floyd in Minneapolis, Michael Brown in Ferguson, and Freddie Gray in Baltimore.

We are hopeful that the court will take up the case again and withdraw the panel’s flawed opinion. In another case involving a warrantless search earlier this year, the full Fourth Circuit vacated a panel decision and issued a careful, well-reasoned decision. As Chief Judge Gregory wrote in that case: “If merely preventing crime was enough to pass constitutional muster, the authority of the Fourth Amendment would become moot.” The same is true here.

Related Cases: Carpenter v. United States
Nathaniel Sobel

Visa Wants to Buy Plaid, and With It, Transaction Data for Millions of People

3 months 1 week ago

Visa, the credit card network, is trying to buy financial technology company Plaid for $5.3 billion. The merger is bad for a number of reasons. First and foremost, it would allow a giant company with a controlling market share and a history of anticompetitive practices to snap up its fast-growing competition in the market for payment apps. But Plaid is more than a potential disruptor; it’s also sitting on a massive amount of financial data acquired through questionable means. By buying Plaid, Visa is buying all of its data. And Plaid’s users—even those protected by California’s new privacy law—can’t do anything about it.

Since mergers and acquisitions often fall outside the purview of privacy laws, only a pointed intervention by government authorities can stop the sale. Thankfully, this month, the US Department of Justice filed a lawsuit to do just that. This merger is about more than just competition in the financial technology (fintech) space; it’s about the exploitation of sensitive data from hundreds of millions of people. Courts should stop the merger to protect both competition and privacy.

Visa's Monopolistic Hedge

The Department of Justice lawsuit outlines a very simple motive for the acquisition. Visa, it says, already controls around 70% of the digital debit card payment market, from which it earned approximately $2 billion last year. (Mastercard, at 25% market share, is Visa’s only significant competitor.) Thanks to network effects with merchants and consumers, plus exclusivity clauses in its agreements with banks, Visa is comfortably insulated from threats by traditional competitors. But apps like Venmo have started—just barely—to eat away at the digital transaction market. And Plaid sits at the center of that new wave, providing the infrastructure that Venmo and hundreds of other apps use to send money around the world.

According to the DoJ, a Visa executive predicted that Plaid would undercut its debit card processing business eventually, and that buying Plaid would be an “insurance policy” to protect Visa’s dominant market share. The lawsuit alleges that Plaid already had plans to leverage its relationships with banks and consumers to launch a new debit service. Seen through this lens, the acquisition is a simple preemptive strike against an emerging threat in one of Visa’s core markets. Challenging the purchase of a smaller company by a giant one, under the theory that the purchase eliminates future competition rather than creating a monopoly in the short term, is a strong step for the DoJ, and one we hope to see repeated in technology markets.

But users’ interest in the Visa-Plaid merger should extend beyond fears of market concentration. Both companies are deeply involved in the collection and monetization of personal data. And as the DoJ’s lawsuit underscores, “Acquiring Plaid would also give Visa access to Plaid’s enormous trove of consumer data, including real-time sensitive information about merchants and Visa’s rivals.”

Plaid, Yodlee, and the sorry state of fintech privacy

Plaid is what’s known as a “data aggregator” in the fintech space. It provides the infrastructure that connects banks to financial apps like Venmo and Coinbase, and its customers are usually apps that need programmatic access to a bank account.

It works like this: first, an app developer installs code from Plaid. When a user downloads the app, Plaid asks the user for their bank credentials, then logs in on their behalf. Plaid then has access to all the information the bank would normally share with the user, including balances, assets, transaction history, and debt. It collects data from the bank and passes it along to the app developer. From then on, the app can use Plaid’s services to initiate electronic transfers to and from the bank account, or to collect new information about the user’s activity.
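The flow described above can be sketched in code. This is a minimal, hypothetical model of a credential-holding data aggregator; the class and method names (`Bank`, `AggregatorClient`, `link`, and so on) are illustrative and are not Plaid's real API. The point it makes is that once an account is "linked," the aggregator retains the user's real bank password and can log in again at will.

```python
from dataclasses import dataclass


@dataclass
class Bank:
    """Stands in for a bank's online portal (hypothetical)."""
    credentials: dict   # username -> password
    transactions: dict  # username -> list of transaction records

    def login(self, username, password):
        return self.credentials.get(username) == password

    def statement(self, username):
        return self.transactions[username]


class AggregatorClient:
    """A simplified data aggregator: it collects the user's *real*
    bank credentials and logs in on the user's behalf -- the crux
    of the privacy concern described above."""

    def __init__(self, bank):
        self.bank = bank
        self.stored_credentials = {}  # retained by the aggregator

    def link(self, username, password):
        if not self.bank.login(username, password):
            raise ValueError("bad credentials")
        self.stored_credentials[username] = password  # kept after linking
        return "access-token-" + username

    def get_transactions(self, access_token):
        # Re-uses the stored password; neither the user nor the bank
        # mediates this access once the account is "linked".
        username = access_token[len("access-token-"):]
        assert self.bank.login(username, self.stored_credentials[username])
        return self.bank.statement(username)
```

In this sketch, the app developer only ever sees the access token, but the aggregator sitting behind it holds the password and the full data pipe.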

In a shadowy industry, Plaid has tried to cultivate a reputation as the “trustworthy” data aggregator. Envestnet/Yodlee, a direct competitor, has long sold consumer behavior data to marketers and hedge funds. The company claims the data are “anonymous,” but reporters have discovered that that’s not always the case. And Finicity, another financial data aggregator, uses its access to moonlight as a credit reporting agency. A glance at data broker listings shows a thriving marketplace for individually-identified transactions data, with dozens of sellers and untold numbers of buyers. But Plaid is adamant that it doesn’t sell or monetize user data beyond its core business proposition. Until recently, Plaid has often been mentioned alongside Yodlee in order to contrast the two companies’ approaches, when it’s been mentioned at all.

Now, in the wake of the Visa announcement, two new lawsuits (Cottle et al v. Plaid Inc and Evans v. Plaid Inc) claim that Plaid has exploited users all along. Chief among the accusations is that Plaid’s interface misleads users into sharing their bank passwords with the company, a practice that plaintiffs allege runs afoul of California’s anti-phishing law. The lawsuits also claim that Plaid collected much more data than was necessary, deceived users about what it was doing, and made money by selling that data back to the apps which used it.

EFF is not involved in either lawsuit against Visa/Plaid, nor are we taking any position on the validity of the legal claims. We’re not privy to any information that hasn’t been reported publicly. But many of the facts presented by the lawsuits are relatively straightforward, and can be verified with Plaid’s own documentation. For example, at the time of writing, still hosts an example sign-in flow with Plaid. Plaid does not dispute that it collects users’ real bank credentials in order to log in on their behalf. You can see for yourself what that looks like: the interface puts the bank’s logo front and center, and looks for all the world like a secure OAuth page. Ask yourself whether, seeing this for the first time, you’d really understand who’s getting what information.

Who’s getting your credentials? Not just Citi.

Many users might not realize the scope of the data that Plaid receives. Plaid’s Transactions API gives both Plaid and app developers access to a user’s entire transaction and balance history, including a geolocation and category for each purchase made. Plaid’s other APIs grant access to users’ liabilities, including credit card debt and student loans; their investments, including individual stocks and bonds; and identity information, including name, address, email, and phone number.
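To make the scope of that data concrete, here is a sketch of what a single aggregated transaction record might look like. The field names are modeled loosely on the kinds of fields Plaid's public Transactions documentation describes (amount, date, category, geolocation); treat this as an approximation of the data's shape, not the exact schema.

```python
# One illustrative transaction record (hypothetical values and field names).
transaction = {
    "amount": 42.17,
    "date": "2020-11-03",
    "merchant_name": "Corner Pharmacy",
    "category": ["Shops", "Pharmacies"],
    "location": {"city": "Baltimore", "region": "MD",
                 "lat": 39.2904, "lon": -76.6122},
}


def sensitive(txn, flagged=frozenset({"Pharmacies", "Medical"})):
    """A trivial filter showing how category labels alone support
    sensitive inferences about a person's life."""
    return bool(flagged.intersection(txn["category"]))
```

Even this toy filter flags the example purchase as health-related; with a full multi-year history plus geolocation, far richer inferences follow.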

A screenshot from Plaid’s demo. What, exactly, does “link” mean?

For some products, Plaid’s demo will throw up a dialog box asking users to “Allow” the app to access certain kinds of data. (It doesn’t explain that Plaid will have access as well.) When we tested it, access to the “transactions,” “auth,” “identity,” and “investments” products didn’t trigger any prompts beyond the default “X uses Plaid to link to your bank” screen. It’s unclear how users are supposed to know what information an app will actually get, much less what they’ll do with it. And once a user enters their password, the data starts flowing.

Users can view the data they’re sharing through Plaid, and revoke access, after creating an account at This tool, which was apparently introduced in mid-2018 (after GDPR went into effect in Europe), is useful—for users who know where to look. But nothing in the standard “sign in with Plaid” flow directs users to the tool, or even lets them know it exists.

On the whole, it’s clear that Plaid was using questionable design practices to “nudge” people into sharing sensitive information.

What’s in it for Visa?

Whatever Plaid has been doing with its data until now, things are about to change.

Plaid is a hot fintech startup, but Visa thinks it can squeeze more out of Plaid than the company is making on its own. Visa is paying approximately 50 times Plaid’s annual revenue to acquire the company—a “very steep” sum by traditional metrics.

A huge part of Plaid’s value is its data. Like a canal on a major trade route, it sits at a key point between users and their banks, observing and directing flows of personal information both into and out of the financial system. Plaid currently makes money by charging apps for access to its system, like levying tariffs on those who pass through its port. But Visa is positioned to do much more.

For one, Visa already runs a targeted-advertising wing using customer transaction data, and thus has a straightforward way to monetize Plaid’s data stream. Visa aggregates transaction data from its own customers to create “audiences” based on their behavior, which it sells to marketers. It offers over two hundred pre-configured categories of users, including “recently engaged,” “international traveler - Mexico,” and “likely to have recently shifted spend from gasoline to public transportation services.” It also lets clients create custom audiences based on what people bought, where they bought it, and how much they spent.


Plaid’s wealth of transaction, liability, and identity information is good for more than selling ads. It can also be used to build financial profiles for credit underwriting, an obviously attractive application for credit-card magnate Visa, and to perform “identity matching” and other useful services for advertisers and lenders. Documents uncovered by the DoJ show that Visa is well aware of the value in Plaid’s data.

Illustration by a Visa executive of Plaid’s untapped potential, included in Department of Justice filings. The executive “analogized Plaid to an island ‘volcano’ whose current capabilities are just ‘the tip showing above the water’ and warned that ‘what lies beneath, though, is a massive opportunity – one that threatens Visa.’” Note “identity matching,” “credit decisioning,” and “advertising and marketing”—all data-based businesses.

Through Plaid, Visa is about to acquire transaction data from millions of users of its competitors: banks, other credit and debit cards, and fintech apps. As TechCrunch has reported, “Buying Plaid is insurance against disruption for Visa, and also a way to know who to buy.” The DoJ went deeper into the data grab’s anticompetitive effects: “With this insight into which fintechs are more likely to develop competitive alternative payments methods, Visa could take steps to partner with, buy out, or otherwise disadvantage these up-and-coming competitors,” positioning Visa to “insulate itself from competition.”

The Data-Sale Loophole

The California Privacy Rights Act, which amends the California Consumer Privacy Act (CCPA), was passed by California voters in early November. It’s the strongest law of its kind in the U.S., and it gives people a general right to opt out of the sale of their data. In addition, the Gramm-Leach-Bliley Act (GLBA), a federal law regulating financial institutions, allows Americans to tell financial institutions not to share their personal financial information. Since the CPRA exempts businesses which are already subject to GLBA, it’s not clear which of the two governs Plaid. But neither law restricts the transfer of data during a merger or acquisition. Plaid’s own privacy policy claims, loudly and clearly, that “We do not sell or rent personal information that we collect.” But elsewhere in the same section, Plaid admits it may share data “in connection with a change in ownership or control of all or a part of our business (such as a merger, acquisition, reorganization, or bankruptcy).” In other words, the data was always for sale under one condition: you had to buy everything.

That’s what Visa is doing. It’s acquiring everything Plaid has ever collected and—more importantly—access to data flows from everyone who uses a Plaid-connected app. It can monetize the data in ways Plaid never could. And the move completely side-steps restrictions on old-fashioned data sales.

Stop the Merger

It’s easy to draw parallels from the Visa/Plaid deal to other recent mergers. Some, like Facebook buying Instagram or Google buying YouTube, gave large companies footholds in new or emerging markets. Others, like Facebook’s purchase of Onavo, gave them data they could use to surveil both users and competitors. Still others, like Google’s acquisitions of Doubleclick and Fitbit, gave them abundant new inflows of personal information that they could fold into their existing databases. Visa’s acquisition of Plaid does all three.

The DoJ’s lawsuit argues that the acquisition would “unlawfully maintain Visa’s monopoly” and “unlawfully extend [Visa’s] advantage” in the U.S. online debit market, violating both the Clayton and Sherman antitrust acts. The courts should block Visa from buying up a nascent competitor and torrents of questionably-acquired data in one move.

Beyond this specific case, Congress should take a hard look at the trend of data-grab mergers taking place across the industry. New privacy laws often regulate the sharing or sale of data across company boundaries. That’s great as far as it goes—but it’s completely sidestepped by mergers and acquisitions. Visa, Google, and Facebook don’t need to buy water by the bucket; they can just buy the well. Moreover, analysts predict that this deal, if allowed to go through, could set off a spree of other fintech acquisitions. It may have already begun: just months after Visa announced its intention to buy Plaid, Mastercard (Visa’s rival in the debit duopoly) began the process of acquiring Plaid competitor Finicity. It’s long past time for better merger review and meaningful, enforceable restrictions on how companies can use our personal information.

Bennett Cyphers

Let’s Stand Up for Home Hacking and Repair

3 months 1 week ago

Let’s tell the Copyright Office that it’s not a crime to modify or repair your own devices.

Every three years, the Copyright Office holds a rulemaking process where it grants the public permission to bypass digital locks for lawful purposes. In 2018, the Office expanded existing protections for jailbreaking and modifying your own devices to include voice-activated home assistants like Amazon Echo and Google Home, but fell far short of the broad allowance for all computerized devices that we’d asked for. So we’re asking for a similar exemption, but we need your input to make the best case possible: if you use a device with onboard software and DRM keeps you from repairing that device or modifying the software to suit your purposes, see below for information about how to tell us your story.

DMCA 1201: The Law That Launched a Thousand Crappy Products

Why is it illegal to modify or repair your own devices in the first place? It’s a long story. Congress passed the Digital Millennium Copyright Act in 1998. That’s the law that created the infamous “notice-and-takedown” process for allegations of copyright infringement on websites and social media platforms. The DMCA also included the less-known Section 1201, which created a new legal protection for DRM—in short, any technical mechanism that makes it harder for people to access or modify a copyrighted work. The DMCA makes it unlawful to bypass certain types of DRM unless you’re working within one of the exceptions granted by the Copyright Office.

Suddenly manufacturers had a powerful tool for restricting how their customers used their products: build your product with DRM, and you can argue that it’s illegal for others to modify or repair it.

The technology landscape was very different in 1998. At the time, when most people thought of DRM, they were thinking of things like copy protection on DVDs or other traditional media. Some of the most dangerous abuses of DRM today come in manufacturers’ use of it to limit how customers use their products—farmers being unable to repair their own tractors, or printer manufacturers trying to restrict users from buying third-party ink.


Section 1201 caught headlines recently when the RIAA attempted to use it to stop the distribution of youtube-dl, a tool that lets people download videos from YouTube and other user-uploaded video platforms. Fortunately, GitHub put the youtube-dl repository back up after EFF explained on behalf of youtube-dl’s developers that the tool doesn’t circumvent DRM.

Abuse of legal protections for DRM isn’t just a United States problem, either. Thanks to the way in which copyright law has been globalized through a series of trade agreements, much of the world has similar laws on the books to DMCA 1201. That creates a worst-of-both-worlds scenario for countries that don’t have the safety valve of fair use to protect people’s free expression rights or processes like the Copyright Office rulemaking to remove the legal doubt around bypassing DRM for lawful purposes. The rulemaking process is deeply flawed, but it’s better than nothing.

Let’s Tell the Copyright Office: Home Hacking Is Not a Crime

Which brings us back to this year’s Copyright Office rulemaking. We’re asking the Copyright Office to grant a broad exception for people to take advantage of in modifying and repairing all software-enabled devices for their own use.

If you have a story about how:

  • someone in the United States;
  • attempted or planned to modify, repair, or diagnose a product with a software component; and
  • encountered a technological protection measure (including DRM, or digital rights management: any form of software security measure that restricts access to the underlying software code, such as encryption, password protection, or authentication requirements) that prevented completing the modification, repair, or diagnosis (or had to be circumvented to do so)

—we want to hear from you! Please email us at with the information listed below, and we’ll curate the stories we receive so we can present the most relevant ones alongside our arguments to the Copyright Office. The comments we submit to the Copyright Office will become a matter of public record, but we will not include your name if you do not wish to be identified by us. Submissions should include the following information:

  1. The product you (or someone else) wanted to modify, repair, or diagnose, including brand and model name/number if available.
  2. What you wanted to do and why.
  3. How a TPM interfered with your project, including a description of the TPM.
    • What did the TPM restrict access to?
    • What did the TPM block you from doing? How?
    • If you know, what would be required to get around the TPM? Is there another way you could accomplish your goal without doing this?
  4. Optional: Links to relevant articles, blog posts, etc.
  5. Whether we may identify you in our public comments, and your name and town of residence if so. We will treat all submissions as anonymous unless you expressly give us permission to identify you.
Elliot Harmon

Victory! Court Protects Anonymity of Security Researchers Who Reported Apparent Communications Between Russian Bank and Trump Organization

3 months 1 week ago

Security researchers who reported observing Internet communications between the Russian financial firm Alfa Bank and the Trump Organization in 2016 can remain anonymous, an Indiana trial court ruled last week.

The ruling protects the First Amendment anonymous speech rights of the researchers, whose analysis prompted significant media attention and debate in 2016 about the meaning of digital records that reportedly showed computer servers linked to the Moscow-based bank and the Trump Organization in communication.

In response to these reports, Alfa Bank filed a lawsuit in Florida state court alleging that unidentified individuals illegally fabricated the connections between the servers. Importantly, Alfa Bank’s lawsuit asserts that the alleged bad actors who fabricated the servers’ communications are different people than the anonymous security researchers who discovered the servers’ communications and reported their observations to journalists and academics.

Yet that distinction did not stop Alfa Bank from seeking the security researchers’ identities through a subpoena issued to Indiana University Professor L. Jean Camp, who had contacts with at least one of the security researchers and helped make their findings public. 

Prof. Camp filed a motion to quash the subpoena. EFF filed a friend-of-the-court brief in support of the motion to ensure the court understood that the security researchers had the right to speak anonymously under both the First Amendment and Indiana’s state constitution.

The brief argues: 

By sharing their observations anonymously, the researchers were able to contribute to the electorate’s understanding of a matter of extraordinary public concern, while protecting their reputations, families, and livelihoods from potential retaliation. That is exactly the freedom that the First Amendment seeks to safeguard by protecting the right to anonymous speech.

It’s not unusual for companies embarrassed by security researchers’ findings to attempt to retaliate against them, which is what Alfa Bank tried to do. That’s why EFF’s brief also asked the court to recognize that Alfa Bank’s subpoena was a pretext:

[T]he true motive of the litigation and the instant subpoena is to retaliate against the anonymous computer security researchers for speaking out. In seeking to impose consequences on these speakers, Alfa Bank is violating their First Amendment rights to speak anonymously.

In rejecting Alfa Bank’s subpoena, the Indiana court ruled that the information Alfa Bank sought to identify the security researchers “is protected speech under Indiana law” and that the bank had failed to meet the high bar required to justify the disclosure of the individuals’ identities.

EFF is grateful that the court protected the identities of the anonymous researchers and rejected Alfa Bank’s subpoena. We would also like to thank our co-counsel Colleen M. Newbill, Joseph A. Tomain, and D. Michael Allen of Mallor Grodner LLP for their help on the brief.

Aaron Mackey

Podcast Episode: Control Over Users, Competitors, and Critics

3 months 2 weeks ago
Episode 004 of EFF’s How to Fix the Internet

Cory Doctorow joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss how large, established tech companies like Apple, Google, and Facebook can block interoperability in order to squelch competition and control their users, and how we can fix this by taking away big companies' legal right to block new tools that connect to their platforms – tools that would let users control their digital lives.

In this episode you’ll learn about:

  • How the power to leave a platform is one of the most fundamental checks users have on abusive practices by tech companies—and how tech companies have made it harder for their users to leave their services while still participating in our increasingly digital society;
  • How the lack of interoperability in modern tech platforms is often a set of technical choices that are backed by a legal infrastructure for enforcement, including the Digital Millennium Copyright Act (DMCA) and the Computer Fraud and Abuse Act (CFAA). This means that attempting to overcome interoperability barriers can come with legal risks as well as financial risks, making it especially unlikely for new entrants to attempt interoperating with existing technology;
  • How online platforms block interoperability in order to silence their critics, which can have real free speech implications;
  • The “kill zone” that exists around existing tech products, where investors will not back tech startups challenging existing tech monopolies, and even startups that can get a foothold may find themselves bought out by companies like Facebook and Google;
  • How we can fix it: The role of “competitive compatibility,” also known as “adversarial interoperability,” in reviving stagnant tech marketplaces;
  • How we can fix it by amending or interpreting the DMCA, CFAA and contract law to support interoperability rather than threaten it.
  • How we can fix it by supporting the role of free and open source communities as champions of interoperability and offering alternatives to existing technical giants.

Cory Doctorow is a science fiction author, activist, and journalist. He is the author of many books, most recently ATTACK SURFACE, RADICALIZED, and WALKAWAY, science fiction for adults; IN REAL LIFE, a graphic novel; INFORMATION DOESN’T WANT TO BE FREE, a book about earning a living in the Internet age; and HOMELAND, a YA sequel to LITTLE BROTHER. His latest book is POESY THE MONSTER SLAYER, a picture book for young readers.

Cory maintains a daily blog at He works for the Electronic Frontier Foundation, is an MIT Media Lab Research Affiliate, is a Visiting Professor of Computer Science at Open University, a Visiting Professor of Practice at the University of North Carolina’s School of Library and Information Science, and co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles. You can find Cory on Twitter at @doctorow.

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify, or your podcast player of choice. You can also find the MP3 of this episode on the Internet Archive, and embedded below. If you have any feedback on this episode, please email us.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.


Anti-Competitive Laws

Anti-Competitive Practices 

Lawsuits Against Anti-Competitive Practices

Competitive Compatibility/Adversarial Interoperability & The Path Forward

State Abuses of Lack of Interoperability


Transcript of Episode 004: Control Over Users, Competitors, and Critics

Danny O'Brien:
Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now, problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of internet law.

Cindy Cohn:
Hello, everyone. I'm Cindy Cohn, I'm the executive director of the Electronic Frontier Foundation. And for our purposes today, I'm also a lawyer.

Danny O'Brien:
And I'm Danny O'Brien, and I work at the EFF too, and I could only dream of going to law school. So, this episode has its roots in a long and ongoing discussion that we have at EFF about competition in tech, or rather, the complete lack of it these days. I think there's a growing consensus that big tech--Facebook, Google, Amazon, you can make your own list at home--have come to dominate the net and tech more widely and really not in a good way. They stand these days as potentially impregnable monopolies and there doesn't seem much consensus on how to best fix that.

Cindy Cohn:
Yeah. This problem affects innovation, which is a core EFF value, but it also impacts free speech and privacy. The lack of competition has policymakers pushing companies to censor us more and more, which, as we know, despite a few high-profile exceptions, disproportionately impacts marginalized voices, especially around the world.

Cindy Cohn:
And critically, way too many of these companies have privacy-invasive business models. At this point, I like to say that Facebook doesn't have users, it has hostages. So, addressing competition empowers users, and today we're going to focus on one of the ways that we can reintroduce competition into our world. And that's interoperability. Now this is largely a technical approach, but as you'll hear, it can work in tandem with legal strategies, and it needs some legal support right now to bring it back to life.

Danny O'Brien:
Interoperability is going to be useful because it accelerates innovation, and right now, the cycle of innovation just seems to be completely stuck. I mean, this may make me sound old, but I do remember when the pre-Facebook and the pre-Google quasi-monopolies popped up, grew, lived gloriously, and then shriveled and died like dragonflies.

Danny O'Brien:
We had Friendster, then Myspace; we had Yahoo and AltaVista, and then they faded away. Nothing seems to be shifting this new generation of oligopolies, in the marketplace at least. I know lawsuits and antitrust investigations take a long time. We think at EFF that there's a way of speeding things up so we can break these down as quickly as their predecessors.

Cindy Cohn:
Yep. And that's what's so good about talking this through with our friend, Cory Doctorow. He comes at this from a deeply technological, economic, and historical perspective, and especially a historical perspective on how we got here in terms of our technology and law.

Cindy Cohn:
Now, I tend to think of it as a legal perspective, because I'm a litigator--I think, what doctrines are getting in the way? How can we address them? And how can we get the legal doctrines out of the way? But Danny, if I may, I had some personal experience here too. I bought an HP printer a while back, and because I wouldn't sign up for their ink delivery service, the darn thing just bricked. It wouldn't let me use anybody else's ink, and ultimately, it just stopped working entirely.

Danny O'Brien:
Interoperability is the ability for other parties to connect and build upon existing hardware and software without asking for permission, or begging for authorization, or being thrown out if they don't follow all the rules. So, in your printer's case, Cindy--and I love how when your printer doesn't work, you recognize it as an indictment of our zaibatsu control prison, rather than me, who just thinks I failed to install the right driver. But in your case, with your printer, Hewlett-Packard was building an ecosystem that only allowed other Hewlett-Packard products to connect with it.

Danny O'Brien:
There's no reason why third-party ink couldn't work in an HP printer, except that the printer has code in it that specifically rejects cartridges, not based on whether they work or not, but whether they come from the parent company or not. And there's a legal infrastructure around that too. It's much harder for third-party companies to interoperate with Hewlett-Packard printers, simply because there's so much legal risk in doing so.

Danny O'Brien:
This is the sort of thing that Cory excels at explaining, and I'm so glad we managed to grab him between, oh my god, all the million things he does. For those of you who don't know him, Cory works as a special advisor to EFF, but he's also a best-selling science fiction author. He has his own daily newsletter, at, and a podcast of his own at

Danny O'Brien:
We caught him between publicizing his new kid's book Poesy the Monster Slayer, and promoting his new sequel to his classic "Little Brother" called "Attack Surface". And also curing world hunger, I'm pretty sure.

Cindy Cohn:
Hey, Cory.

Cory Doctorow:
It's always a pleasure to talk to you, and it's an honor to be on the EFF podcast.

Cindy Cohn:
So, let's get to it. What is interoperability? And why do we need to fix it?

Cory Doctorow:
Well, I like to start with an interoperability view that's pretty broad, right? Let's start with the fact that the company that sells you your shoes doesn't get to tell you whose socks you can wear, or that the company that makes your breakfast cereal doesn't get to tell you which dairy you have to go to. And that stuff is ... We just take it for granted, but it's a really important bedrock principle, and we see what happens when people lose interoperability: they also lose all agency and self-determination.

Cory Doctorow:
If you've ever heard those old stories about company mining towns where you were paid in company scrip that you could only spend at the company store, that was like non-interoperable money, right? The only way you could convert your company scrip into dollars would be to buy corn at the company store and take it down to the local moonshiner and hope he'd give you greenbacks, right?

Cory Doctorow:
And so, to the extent that you can be stuck in someone else's walled garden, it can turn, instead of, from a walled garden into a feedlot, where you become the fodder. And the tech industry has always had a weird relationship with interoperability,. On the one hand, computers have this amazing interoperable characteristic just kind of built into them. The underlying idea of things like von Neumann architectures, and Turing completeness really says that all computers can run all programs, and that you can't really make a computer that just, like, only uses one app store.

Cory Doctorow:
Instead, what you have to do is make a computer that refuses to use other app stores. You know, that tablet or that console you have, it's perfectly capable of using any app store that you tell it to. It just won't let you, and there's a really important difference, right? Like, I can't use a kitchen mixer to apply mascara, because the kitchen mixer is totally unsuited to applying mascara and if I tried, I would maim myself. But you can install any app on any device, provided that the manufacturer doesn't take steps to stop you.

Cory Doctorow:
And while manufacturers--tech manufacturers especially--have for a long time tried to take measures to stop use so they could increase their profits, what really changed the world was the passage of a series of laws, laws that we're very familiar with at the EFF: the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and so on, that started to allow companies to actually make it illegal--both civilly and criminally--for you to take steps to add interoperability to the products that you use, and especially for rivals to take steps.

Cory Doctorow:
I often say that the goal of companies who want to block interoperability is to control their critics, their customers, and their competitors, so that you have to arrange your affairs to benefit their shareholders. And if you don't, you end up committing an offense that our friend saurik from the Cydia project calls a "felony contempt of business model."

Cindy Cohn:
This is something we care about in general at the EFF, because we worry a lot about the pattern of innovation, but I think it also has spillover effects on censorship and on surveillance. And I know you've thought about that a little bit, Cory, and I'd love to kind of just bring those out, because I think that it's important ... I mean, we all care, I think, about having functioning tools that really work. But there are effects on our rights as well, and that kind of old-school definition of rights, like what's in the constitution.

Cory Doctorow:
Yeah. Well, a lot of people are trusting of the firms that handle their communications. And that's okay, right? You might really think that Tim Cook is always going to exercise his judgment wisely, or that Mark Zuckerberg is a truly benevolent dictator and so on. But one of the things that keeps firms honest when they regulate your communications is the possibility that you might take your business elsewhere. And when firms don't face that possibility, they have less of an incentive to put your needs ahead of the needs of their shareholders. Or sometimes there's a kind of bank shot shareholder interest where, say, a state comes in and says, "We demand that you do something that is harmful to your users." And you weigh in the balance how many users you'll lose if you do it, versus how much it's going to cost you to resist the state.

Cory Doctorow:
And the more users you lose in those circumstances, the more you're apt to decide that the profitable thing to do is to resist state incursions. And there's another really important dimension, which is a kind of invitation to mischief that arises when you lock your users up, which is that states observe the fact that you can control the conduct of your users. And they show up and they say, "Great, we have some things your users aren't allowed to do." And you are now deputized to ensure that they don't do it, because you gave yourself that capability.

Cory Doctorow:
So, the best example of this ... I don't mean to pick on Apple, but the best example of this is Apple in China, where Apple is very dependent on the Chinese market, not just to manufacture its devices, but to buy and use its devices. Certainly, with President Trump's TikTok order, a lot of people have noted that some of the real fallout is going to be for Apple if they can't do business with Chinese firms and have Chinese apps and so on. And the Chinese government showed up at Apple's door and said, "You have to block working VPNs from your app store. We need to be able to spy on everyone who uses an iPhone. And so, the easiest way for us to accomplish that is to just tell you to evict any VPN that doesn't have a backdoor for us."

Danny O'Brien:
Just to connect those two things together, Cory, so what you're saying here is that because Apple phones don't have ... Apple has sort of exclusive control over them, and you can't just install your own choice of program on the iPhone. That means that Apple is this sort of choke point that bad actors can use, because they've got all this control for themselves, and then they can be pressured to impose that control on their customers.

Cory Doctorow:
They installed it so they could extract a 30% vig from Epic and other independent software vendors. But the day at which a government would knock on their door and demand that they use the facility that they developed to lock in users to a store, to also lock in users to authoritarian software, that day was completely predictable. You don't have to be a science fiction writer to say, "Oh, well, if you have a capability and it will be useful to a totalitarian state, and you put yourself in reach of that totalitarian state's authority, they will deputize you to be part of their authoritarian project."

Cindy Cohn:
Yeah. And that's local as well as international. I mean, the pressure for the big platforms to be censors, to decide to be the omnipotent and always-correct deciders of what people get to say, is very strong right now. And that's a double-edged sword. Sometimes that can work well when there are bad actors involved, but really, we know how power works. And once you empower somebody to be the censor, they're going to be beholden to everybody who comes along who's got power over them to censor the people they don't like.

Cindy Cohn:
And it also then, I think, feeds this surveillance business model, the business model where tracking everything you do and trying to monetize that gets fed by the fact that you can't leave.

Danny O'Brien:
I want to try and channel the ghost of Steve Jobs here and present the other argument that lots of companies give for locking down their systems, which is that it prevents other smaller bad actors, it prevents malware, it means that Apple can control... But by controlling all of these avenues, it can build a more secure, more consumer-friendly tool.

Cory Doctorow:
Yeah. I hear that argument, and I think there's some merit to it. Certainly, like, I don't have either the technical chops or the patience and attention to do a full security audit of every app I install. So, I like the idea of deputizing someone to figure out whether or not I should install an app, I just want to choose that person. I had a call recently with one of our colleagues from EFF, Mitch, who said that argument is a bit like the argument about the Berlin Wall, where the former East German government claimed that the Berlin Wall wasn't there to keep people in who wanted out, it was to stop people from breaking into the worker's paradise.

Cory Doctorow:
And if Apple was demonstrably only blocking things that harmed users, one would expect that those users would just never tick the box that says, "Let me try something else." And indeed, if that box was there, it would be much less likely that the Chinese state would show up and say, "Give us a means to spy on all your users," because Apple could say, "I will give you that means, but you have to understand that as soon as that's well understood, everyone who wants to evade your surveillance just ticks the box that says, 'Let me get a VPN somewhere else.'"

Cory Doctorow:
And so, it actually gives Apple some power to resist it. In that way, it's a bit like the warrant canaries that we're very fond of, where you have these national security letters that firms cannot disclose when they get them. And so, firms as they launch a new product say, "The number of national security letters we have received in respect to this product is zero," and they reissue that on a regular basis. And then they remove that line if they get a national security letter.

Cory Doctorow:
Jessamyn West, the librarian, put a sign up in her library after the Patriot Act was passed that said, "The FBI has not been here this week, watch for this sign to disappear," because she wasn't allowed to disclose that the FBI had been there, but she could take down the sign. And so, the idea here is that states are disincentivized to get up to this kind of mischief, where it relies on them keeping the existence of the mischief a secret, if that secrecy vanishes the instant they embark upon the mischief.
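The canary pattern described here can be sketched in a few lines. This is a hypothetical illustration only: the expected statement, the date format, and the freshness window are all invented for the example, not taken from any real canary service.

```python
import datetime

# Hypothetical canary statement and freshness window for illustration.
EXPECTED = "The number of national security letters we have received is zero"
MAX_AGE_DAYS = 35  # a healthy canary should be re-published at least monthly


def check_canary(text: str, today: datetime.date) -> bool:
    """Return True only if the canary statement is present AND fresh.

    A missing statement, a missing date, or a stale date all count as
    the canary "disappearing" -- the signal the reader watches for.
    """
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not any(EXPECTED.lower() in line.lower() for line in lines):
        return False  # statement removed: warning sign
    for line in lines:
        if line.startswith("Date:"):
            issued = datetime.date.fromisoformat(line.split(":", 1)[1].strip())
            return (today - issued).days <= MAX_AGE_DAYS
    return False  # undated canaries are as untrustworthy as missing ones
```

The key design point, as in Jessamyn West's sign, is that the checker never needs the publisher to say anything illegal: it only notices silence.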

Cory Doctorow:
In the same way, if you have a lock-in model that disappears the instant you cease to act as a good proxy for your users' interests, then people who might want to force you to stop being a good proxy for your users' interest, have a different calculus that they make.

Cindy Cohn:
I just want to, sorry, put my lawyer hat on here. Warrant canaries are a really cute hack that are not likely to be something the FBI is just going to shrug its shoulders and say, "Oh, gosh, I guess you got us there, folks." So, I just, sorry...

Cory Doctorow:
Fair enough.

Cindy Cohn:
Sometimes I have to come in and actually make sure people aren't taking legal advice from Cory.

Danny O'Brien:
When we were kicking around ideas for the name of this podcast, one of them was, "This is not legal advice."

Cory Doctorow:
Well, okay, so instead, let's say binary transparency, where automatically built into the app is a thing that just checks to see whether you got the same update as everyone else. And so that way, you can tell if you've been pushed to a different update from everyone else, and that's in the app when the app ships. And so, the only way to turn it off is to ship an update that turns it off, and if they ship that update to only one user, it gets flagged automatically. It's this idea of a Ulysses pact, where you take some step before you're under coercion, or before you're in a position of weakness, to protect yourself in a future moment. It's the equivalent of throwing away the Oreos when you go on a diet.
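The binary-transparency check Cory describes can be sketched roughly as follows. This is a minimal sketch assuming the vendor publishes a public log of release hashes; the function names and the idea of fetching that log are illustrative assumptions, not a real deployed protocol.

```python
import hashlib

def binary_digest(path: str) -> str:
    """SHA-256 digest of the installed binary, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_public_log(local_digest: str, published_digests: set) -> bool:
    """True if our binary is one that everyone else also received.

    `published_digests` stands in for a publicly auditable log of
    release hashes; if our digest is absent, we may have been served
    a one-off (possibly coerced) update and should raise the alarm.
    """
    return local_digest in published_digests
```

The Ulysses-pact property is that this check ships inside the app itself, so disabling it requires shipping a visible update, which is exactly what the check is watching for.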

Cindy Cohn:
So, let's talk just a little bit more specifically about what are the things that we think are getting in the way of interoperability? And then, let's pivot to what we really want to do, which is fix it. So, what I've heard from you so far, Cory, is that we see law getting in the way, whether that's Section 1201 of the Digital Millennium Copyright Act, the CFAA, or contract law--these kinds of tools that get used by companies to stop interoperability. What are some of the other things that get in the way of us having our dream future where everything plugs into everything else?

Cory Doctorow:
I'd say that there's two different mechanisms that are used to block interop and that they interact with each other. There's law and there's tech, right? So, we have these technical countermeasures: the sealed-vault chips, the TPMs inside of our computers, and our phones, and our other devices, which are dual-use technologies that have some positive uses, but they can be used to block interop.

Cory Doctorow:
As we move to more of a software-as-a-service model where some key element of the process happens in the cloud, it gives firms that control that cloud a gateway where they can surveil how users are using them and try and head off people who are adding interoperability to a service--what we call competitive compatibility, which is when you add a new interoperable feature without permission from the original manufacturer, and so on.

Cory Doctorow:
And those amount to a kind of cold war between different technologists working for different firms. So, on the one hand, you have companies trying to stop you from writing little scrapers that go into their website, and scrape their users' waiting inboxes, and put them in a rival service on behalf of those users. And on the other hand, you have the people who are writing the scrapers, and we haven't seen a lot of evidence about who would win that fight, at least if it were a fair fight, because of the law--because of the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a lot of other laws that kind of pop up as they are repurposed by firms with a lot of money to spend on legal entrepreneurship.

Danny O'Brien:
I want to drill down just a little bit with this because I loved your series that you wrote on competitive compatibility, which talked about the early days of the Internet, where we did have a far faster pace of innovation and the life and death of tech giants was far shorter, because they were kind of in this tooth-and-claw competitive mode, where ... I mean, just to plug an example, right? You would have, sort of, Facebook building on the contact lists that telephones and Google had by adversarially interoperating with them, right?

Danny O'Brien:
You would go to Facebook, and it would say, "Hey, tell us your friends." And it would be able to do that by connecting to their systems. Now, you can't do that with Facebook now, and you can't write an app that competes with Apple's software business, because neither of them will let you. And they're able to do that, I think what we're both saying, because... Not so much because of technical restrictions, but because of the laws that prevent you from doing that. You will get sued rather than out-innovated.

Cory Doctorow:
Well, yes. So, I think that's true. We don't know, right? I'm being careful here, because I have people who I trust as technologists who say, "No, it's really hard, they've got great engineers." I'm skeptical of that claim because we've had about a decade or more of companies being very afraid to try their hand at adversarial interoperability. And one of the things that we know is that well-capitalized firms can do a lot that firms that lack capital can't, and our investor friends tell us that what big tech has more than anything else is a kill zone--that even though Facebook, Apple, Google, and the other big firms have double-digit year-on-year growth with billions of dollars in clear profit every year, no one will invest in competitors of theirs.

Cory Doctorow:
So, I think that when technologists say, "Well, look, we beat our brains out on trying to write a bot that Facebook couldn't detect, or make an ad blocker that ... I don't know, the Washington Post couldn't stop or whatever, or write an app store and install it on iPhones and we couldn't do it." The part that they haven't tested is, well, what if an investor said, "Oh, I'm happy to get 10% of Facebook's total global profit, and I will capitalize you to reflect that expected return and let you spend that money on whatever it takes to do that"?

Cory Doctorow:
What if they didn't have the law on their side? What if they just had engineers versus engineers? But I want to get to this last piece, which is where all this law and these new legal interpretations come from, which is this legal entrepreneurship piece. So as I say, Facebook and its rivals, they have double-digit growth, billions of dollars in revenue every year, in profit, clear profit every year.

Cory Doctorow:
And some of that money is diverted to legal entrepreneurship. Instead of being sent to the shareholders, or being spent on engineering, or product design, it's spent on law. And that spend is only possible because there's just so much money sloshing around in those firms, and that spend is particularly effective, because they're all gunning for the same thing. There are a small number of firms that dominate the sector, and they have all used competitive compatibility to ascend to the top, and they are all committed to kicking away the ladder. And the thing that makes Oracle/Google so exceptional is that it's an instance in which the two major firms actually have divergent interests.

Cory Doctorow:
Far more often, we see their industry associations and the executives from the firms asking for the same things. And so, one of the things that we know about competition is when you lose competition, the firms that remain find it easier to collude. They don't have to actually all sit down and say, "This is what we all want." It's just easy for them to end up in the same place. Think about the kinds of offers you get for mobile phone plans, right? It's not that the executives all sat down and cooked up what those plans would be, it's just that they copy each other, and they all end up in the same place. Or publishing contracts, or record contracts.

Cory Doctorow:
Any super-concentrated industry is going to have a unified vision for what it wants in its lobbying efforts, and it's going to have a lot of money to spend on it.

Cindy Cohn:
So, let's shift, because our focus here is fixing it. And my journey in this podcast is to have a vision of what a better future would look like--what does the world look like if we get this right? Because at EFF we spend a lot of time articulating all the ways in which things are broken, and that's a fine and awesome thing to do, but we need to fix them.

Cindy Cohn:
So, Cory, what would the world look like if we fixed interoperability? Give us the vision of this world.

Cory Doctorow:
I had my big kind of "road to Damascus" moment about this, when I gave a talk for the 15th anniversary of the computer science program at the University of Waterloo. They call themselves the MIT of Canada. I'm a proud University of Waterloo dropout. And I went back to give this speech and all of these super bright computer scientists were in the audience, grad students, undergrads, professors, and after I talked about compatibility, and so on, someone said, "How do we convince everyone to stop using Facebook and start using something else?"

Cory Doctorow:
And I had this, just this moment where I was like, "Why would you think that that was how you will get rid of Facebook?" Like, "When was it ever the case that if you decided you wanted to get a new pair of shoes, you throw away all your socks?" Why wouldn't we just give people the tool to use Facebook at the same time as something else, until enough of their friends have moved to the something else, that they're ready to quit Facebook?

Cory Doctorow:
And that, to me, is the core of the vision, right? That rather than having this model that's a bit like the one my grandmother lived through--my grandmother was a Soviet refugee. So, she left the Soviet Union, cut off all contact, didn't speak to her mother for 15 years, and was completely separated from her; it was a big, momentous decision to leave the Soviet Union. That's the model we have now, right? Where we tell people, "You either use Twitter or you use Mastodon, but you don't read Twitter through Mastodon and have a different experience, and have different moderation rules," and so on. You're just either on Mastodon or you're on Twitter; they're on either side of the iron curtain.

Cory Doctorow:
And instead, we have an experience a lot more like the one I had when I moved to Los Angeles from London five years ago, where we not only got to take along the appliances that we liked and just fit them with adapters, we also get to hang out with our family back home by video call and visit them when we want to, and so on--that you let people take the parts of the offer that they like and stick with them, and leave behind the parts they don't like and go to a competitor. And that competitor might be another firm, it might be a co-op, it might be a thing just started by a tinkerer in their garage, it might be a thing started by a bright kid in a Harvard dorm room the way that Zuck did with Facebook.

Cory Doctorow:
And when those companies do stuff that makes you angry or sad, you take the parts of their service that you like, and you go somewhere else where people will treat you better. And you remain in contact with the people, and the hardware, and the services that you still enjoy, and you block the parts that you don't. So, you have technological self-determination, you have agency, and companies have to fight to keep your business because you are not a hostage, you're a customer.

Cindy Cohn:
Yeah. I think that that's exactly it, and well put. I think that we've gotten used to this idea of what we called, back in the days of the fights over the Apple app store--I don't know why Apple keeps coming up, because they're only one of the actors we're concerned about--the crystal prison, right? You buy an Apple device, and then it's really hard to get out of the Apple universe. It used to be that it was hard to use Microsoft Word unless you used a Windows machine. But we managed to apply pressure--some of it antitrust litigation--so that that didn't work.

Cindy Cohn:
We want browsers that can take you anywhere on the web, not just to the sites that have made deals with the browsers. We want ISPs that offer you the entire web, not just the parts that pay for it. It really is an extension of network neutrality, this idea that we as users get to go where we want and get to dictate the terms of how we go there, at least to the extent of being able to interoperate.

Cory Doctorow:
I mean, apropos of Apple, I don't want to pick on them either, because Apple are fantastic champions of interoperability when it suits them, right? As you say, the document wars were won in part by the iWork suite, where Apple took some really talented engineers, reverse-engineered the gnarly, weird hairball that is the Microsoft Office formats, and made backwards-compatible new office suites that were super innovative but could also save out to Word and Excel, even though you're writing in Numbers or Pages--and that part's great. And just like that, Amazon broke the DRM-based music monopoly that Apple had when it launched the MP3 store, but now will not release its audiobooks from DRM through its Audible program.

Cory Doctorow:
Apple was really in favor of interoperability without permission when it came to document formats--it benefited--but doesn't like it when it comes to, say, rival app stores. Google is 100% all over interoperability when it comes to APIs, but not so much when it comes to the other areas where they enjoy lock-in. And I think that the lesson here is that we as users want interoperability irrespective of the effect that it has on a company's shareholders.

Cory Doctorow:
The companies have a much more instrumental view of interoperability: that they want interoperability when it benefits them, and they don't want it when it harms them. And I'm always reminded, Cindy, of the thing you used to say, when we were in the early days of the copyright wars around Napster. And we would talk to these lobbyists from the entertainment industry, and they would say, "Well, we are free speech organizations, that's where we cut our teeth." And you would say, "We know you love the First Amendment, we just wish you'd share."

Cindy Cohn:
Yeah, absolutely. Absolutely. And one of the things that we've done recently is, we started off talking about this as interoperability, then we called it adversarial interoperability to make it clear that you don't need to go on bended knee for permission. And recently, we started rebranding it to competitive compatibility. And Cory, you've used both of those terms in this conversation, and I just want to make sure our listeners get what we're doing. We're trying to really think about this. I mean, all of them are correct, but I think the reason we ended up at competitive compatibility is not only that it's fewer syllables, and we can call it ComCom, which we all like, but that it captures the idea of being compatible with a competitive, market-based environment--a place where users get to decide, because people are competing for their interest and for their time.

Cindy Cohn:
And I really love the vision that if you're the old ... This was the product that Power Ventures tried to put out, was a service where it didn't matter whether you had a friend on LinkedIn, or you had a friend on Orkut--it's an old tool--or Facebook. You just knew you had a friend, and you just typed in, "Send the message to Cory," and the software just figured out where you were connected with them and sent the message through it.

Cindy Cohn:
I mean, that's just the kind of thing that should be easy, that isn't easy anymore. Because everybody is stuck in this platform mentality, where you're in one crystal prison or you're in the other, and you might be able to switch from one to the other, but you can't take anything with you in the way that you go.

Cindy Cohn:
The other tool that Power had that I thought was awesome was being able to look at all your social feeds in one screen. So, rather than switching in between them all the time, and trying to remember--which I spend a lot of time on right now, trying to remember whether I learned something on Twitter, or I learned it here on Facebook, or I learned it somewhere else--you have one interface, you have your own cockpit for your social media feeds, and you get to decide how it looks, instead of having to log into each of them separately and switch between them.

Cindy Cohn:
And those are just two ideas of how this world could serve you better. I think there are probably a dozen more. And I'd like for us to ... If there's other ones that you could think of like, how would my life be different if we fix this in ways that we can think about right now? And then of course, I think as with all tech and all innovation, the really cool things are going to be the things that we don't think about that show up anyway, because that's how it works.

Cory Doctorow:
Yeah, sure. I mean, a really good example right now is you'd be able to install Fortnite on your iPhone or your Android device, which is the thing you can't do as of the day that we record this. And again, that's what the app store lock-in is for, it's to take a bite out of Fortnite and other independent software vendors. But if you decided that you wanted to keep using the security system that your cable provider Comcast gave to you but then decided it wouldn't support anymore, which is the thing Comcast did last year, you could plug its cameras into a different control system.

Cory Doctorow:
If you decided that you liked the camera that Canary sent you and that you paid for, but you didn't like the fact that its video isn't end-to-end encrypted to your phone--instead, Canary decrypts it in its data center so it can look at the video (and it does so for non-nefarious reasons; it wants to make sure it doesn't send you a motion alert just because your cat walked by the motion sensor)--you may decide that having a camera in your house that's always on, and that's sending video that third parties can look at, is not a thing you like. But you like the hardware, so you just plug that into something else too.

Danny O'Brien:
I love the Catalog of Missing Devices. This is the thing that Cory co-wrote, which was just a list of devices that we cannot see right now, because of some of the laws that prevent people from confidently being able to innovate in this space. And I sort of see EFF's role in this, because we talk about this all the time, right? We're continuing to lobby, and also, in the courts, to work out ways that we can challenge and redefine the legal environment here. But what's the message here if you're someone who's an open-source developer, or an entrepreneur, or a user? What's going to move the needle? What's going to take us into this future? And what can individuals do?

Cory Doctorow:
So, this is an iterative process, there isn't a path from A to Z. There is a heuristic for how we climb the hill towards a better future. And the way to understand this, as I tried to get at with this sort of technology and law and monopoly, is that our current situation is the result of companies having these monopoly rents, laws being bad because they got to spend them on it, companies being able to collude because their sectors are concentrated, and technology that works against users being in the marketplace.

Cory Doctorow:
And each one of those affects the others. So for example, say we had merger scrutiny, right? Say we said that firms were no longer allowed to buy nascent competitors, either to crush them or to acquire something that they couldn't build internally, the way, as you say, Google has with most of its successful products. Really it's got search, and to a lesser extent Android--though that was mostly an acquisition--and ... What's the other one? Oh, and Gmail are the really successful in-house products. Maybe Google Photos, although that's probably just successful because every Android device ships with it. But if we just say Google can't buy Fitbit--Google, a company that has tried repeatedly and failed to make a wearable, isn't allowed to buy Fitbit in order to acquire that--then Google starts to lose some of its stranglehold on data, especially if you stop its rivals from buying Fitbit too. And that makes it weaker, so that it's harder for it to spend on legal entrepreneurship.

Cory Doctorow:
If we make devices that compete with Google, or tools that compete with Google--ad blockers, tracker blockers, and so on--then that also weakens them. And if they are weaker, they have fewer legal resources to deploy against these competitors as well. If we convince people that they can want more, right? If we can have a normative intervention, to say, "No one came down off a mount with two stone tablets," saying, 'Only the company that made your car can fix it,' or 'Only the company that made your phone can fix it,'" and we got them to understand that the right to repair has been stolen from them, then, when laws are contemplated that either improve our right to repair or take away our right to repair, there's a constituency to fight those laws.

Cory Doctorow:
So the norms, and the markets, and the technology, and the law, all work together with each other. And I'm not one of nature's drivers, I have very bad spatial sense, and when I moved to Los Angeles and became perforce a driver, I find myself spending a lot of time trying to parallel park. And the way that I parallel park is, I turn the wheel as far as I can, and then get a quarter of an inch of space, and then I turn it in the other direction, and I get a quarter of an inch of space. And I think as we try to climb this hill towards a more competitive market, we're going to have to see which direction we can pull in from moment to moment, to get a little bit more policy space that we can leverage to get a little bit more policy space.

Cory Doctorow:
And the four directions we can go in are: norms, conversations about what's right and wrong; laws, that tell you what's legal and not legal; markets, things that are available for sale; and tech, things that are technologically possible. And our listeners, our constituents, the people in Washington, the people in Brussels, they have different skill sets that they can use here, but everyone can do one of these things, right? If you're helping with the norm conversation, you are creating the market for the people who want to start the businesses, and you are creating the constituency for the lawmakers who want to make the laws.

Cory Doctorow:
So, everybody has a role to play in this fight, but there isn't a map from A to Z. That kind of plotting is for novelists, not political struggles.

Cindy Cohn:
I think this is so important. One of the things that ... And I want to close with this, because I think it's true for almost all of the things that we're talking about fixing, is that the answer to, "Should we do X or Y?" is "yes," right? We are in some ways, the kind of scrappy, underfunded side of almost every fight we're in around these kinds of things. And so, anybody who's going to force you to choose between strategies is undermining the whole cause.

Cindy Cohn:
These are multi-strategic questions. Should we break up big tech? Or should we create interoperability? The answer is, yes, we need to aim towards doing a bit of all of these things. There might be times when they conflict, but most of the time, they don't. And it's a false choice if somebody is telling you that you have to pick one strategy, and that's the only thing you can do.

Cindy Cohn:
Every big change that we have made has been a result of a whole bunch of different strategies, and you don't know which one is going to give way, which is going to pave the way faster. You just keep pushing on all of them. So, we're finally moving on the Fourth Amendment on privacy, and we're moving in the courts. We could have passed a privacy law too, but the legislation got stuck. We've got to do all of these things. They feed each other; they don't take away from each other if we do it right.

Cory Doctorow:
Yeah, yeah. And I want to close by just saying, EFF's 30 years old, which is amazing. I've been with the organization nearly 20 years, which is baffling, and the thing that I've learned on the way is that these are all questions of movements and not individuals. Like as an individual, the best thing you can do is join a movement, right? If you're worried about climate change, it doesn't really ... How well you recycle is way less important than what you do with your neighbors to change the way that we think about our relationship to the climate.

Cory Doctorow:
And if you're worried about our technological environment, then your individual tech choices do matter. But they don't matter nearly so much as the choices that you make when you get together with other people to make this part of a bigger, wider struggle.

Cindy Cohn:
I think that's so right. And even those who are out there in their garages, innovating right now--they need all the rest of the conversation to work. Nobody ever just put something out there in the world and had it magically catch fire and change the world. I mean, we like that narrative, but that's not how it works. And so, even if you're one of those people--and there are many of them who are EFF fans and we love them--who are out there thinking about the next big idea, this whole movement has to move forward, so that that big idea finds the fertile ground it needs, takes seed and grows, and then gives all the rest of us the really cool stuff in our fixed future.

Cindy Cohn:
So, thank you so much, Cory, for taking time with us. You never fail to bring exciting ideas. And I think that you also are really willing to talk to a sophisticated audience, not talk down to people, and bring in complicated ideas--expecting, and getting, the audience to come up to the level of the conversation. So I certainly always learn from talking with you.

Cory Doctorow:
I was going to say, I learned it all from you guys, so thank you very much. And I miss you guys, I can't wait to see you in person again.

Danny O'Brien:
Cory is this little ball of pure idea concentrate, and I was madly scribbling notes through all of that discussion. But one of the phrases that stuck with me was that, he said the companies are blocking interoperability to control critics, customers, and competitors.

Cindy Cohn:
Yeah. I thought that was really good too, and obviously, the most important part of all of this is control. I mean, that's what the companies have. Of course, the part about critics is what especially triggers the First Amendment concerns, but control is the thing and I think that the ultimate power that we should have, the ultimate amount of control we should have is the ability to leave.

Cindy Cohn:
The ultimate power is the power to leave. That's the core thing that is needed to get companies to concentrate on their users. The conflict here is really between companies' desire to control users and users having the right to choose where they want to be.

Danny O'Brien:
One of the other things that I think comes out of this discussion is when you realize that companies, by blocking interoperability, can have exclusive power of censorship or control over their users, there's always someone else more powerful who has influence over the companies and is ultimately going to take and use that power, right? And that's, generally speaking, governments.

Danny O'Brien:
We notice that when you have this capability to influence, or to censor, or to manipulate your users, governments and states ultimately would like access to that power also.

Cindy Cohn:
Yeah. We're seeing this all over the place; there's always a bigger fish. Right now, we see politicians in the United States, pulling in very different directions, jockeying to force companies like Facebook to obey their preferences or agendas. And again, we have high-profile counter-examples, but where we at EFF live, in the trenches, we see that this power of censorship is most often used against those with the least voice in the political arena.

Cindy Cohn:
That kind of branches out to why we care about censorship and the First Amendment. I think that sometimes people forget this. We don't care about the First Amendment and free speech because we think it's okay for anybody to be able to say whatever they want, no matter how awful it is. The First Amendment isn't in our Constitution because we think it's really great to be an asshole. It's because the power to censor is so strong, and so easily misused.

Cindy Cohn:
As we've seen, once somebody has that power, everybody wants to control them. The other thing I think Cory really has a good grasp on is how we got here. We talked a little bit about the kill zone, that venture capitalists won't fund startups that attempt to compete. I think that's really right, and it's a piece that we're going to have to fix.

Danny O'Brien:
Yeah. I think one of the subtleties about the current VC environment that powers so much of current tech investment, at least, is the nature of the exit strategy. These days, a venture capitalist expects to get their return not by a company IPOing, or successfully overturning one of these monopolies, but by being bought out by those monopolies. And that really constrains and influences what new innovators or entrepreneurs plan on doing in the next few years. And I think that's one of the things that keeps us stuck in this current, less-than-useful cycle.

Danny O'Brien:
And usually, in these situations, I think that the community that I most expect to provide adversarial interoperability is, at least in theory, free of those financial incentives. And that's the free and open-source software community. So much of the history of open source has been using interoperability to build on and escape from existing proprietary systems, from the early days of Unix, to LibreOffice being a competitor to Microsoft's word-processing monopoly, and so on.

Danny O'Brien:
And I think where these two things interact is that these days a lot of open-source and free software gets its funding from the big companies themselves, and they don't necessarily want to fund interoperability. So, that means that the stuff that doesn't cater to interoperability gets a lot of rewards, and other communities who are fighting to shake off the shackles of proprietary software and dominant monopolies struggle without financial support.

Danny O'Brien:
And of course, there's legal liability there too. We just watched the youtube-dl case, with GitHub throwing that project off their service, because it's an attempt to interoperate with one of these big tech giants.

Cindy Cohn:
Yeah. The free and open-source world is vital. They have those muscles, and it's always been how they work. They've always had to make sure that they can play on whatever hardware you have, as well as with other software. So, I think that this is a key to getting us into a place where we can make interoperability the norm, not the exception.

Cindy Cohn:
I also am really pleased about the Internet Archive's work in supporting the idea of a more distributed web. I think they really get the censorship possibilities, and are supporting a lot of little companies, little developers, and innovators who are trying to build a community to really get this done. And yes, the youtube-dl case: this is a situation in which the lack of protection for interoperability meant that the first thing that happened was that this tool so many people rely on went away, as opposed to any other step. The first thing that happens is we lose the tool. That's because the legal system isn't set up to be even-handed in these kinds of situations, but rather moves to censorship first.

Cindy Cohn:
So in this, we've gone over what Cory talked about as the four levers of change. These four levers were originated by Larry Lessig in the '90s. And those four levers are: law, like the DMCA (which is being used in the youtube-dl case), the Computer Fraud and Abuse Act, and antitrust; norms; technology; and markets.

Cindy Cohn:
They all work together, and you can't just pick one. And there's a lot of efforts to try to say, "Well, you just have to pick one and let the others go." But in my experience, you really can't tell which one will create change, they all reinforce each other. And so, to really fix the Internet, we have to push on all four together.

Danny O'Brien:
But at least now we have four levers rather than no levers at all. On that note, I think we'll wrap up for today. Join us next time.

Danny O'Brien:
Thanks again for joining us. If you'd like to support the Electronic Frontier Foundation, here are three things you can do today. One: you can hit subscribe in your podcast player of choice. And if you have time, please leave a review; it helps more people find us. Two: please share on social media and with your friends and family. Three: please visit our website, where you will find more episodes, learn about these issues, donate to become a member, and lots more.

Danny O'Brien:
Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat, or an EFF hoodie, or even a camera cover for your laptop. Thanks once again for joining us, and if you have any feedback on this episode, please email us. We do read every email. This podcast was produced by the Electronic Frontier Foundation with help from Stuga Studios. Music by Nat Keefe of BeatMower.

rainey Reitman

The FCC’s Independence and Mission Are at Stake with Trump Nominee

3 months 2 weeks ago

When there are only five people in charge of a major federal agency, the personal agenda of even one of them can have a profound impact. That’s why EFF is closely watching the nomination of Nathan Simington to the Federal Communications Commission (FCC).

Simington’s nomination appears to be the culmination of a several-month project to transform the FCC and expand its purview in ways that threaten our civil liberties online. The Senate should not confirm him without asking some crucial questions about whether and how he will help ensure that the FCC does the public interest job Congress gave it, which is to expand broadband access, manage the public’s wireless spectrum to their benefit, and protect consumers when they use telecommunications services.

There’s good reason to worry: Simington was reportedly one of the legal architects behind the president’s recent executive order seeking to have the FCC issue “clarifying” regulations for social media platforms. The executive order purports to give the FCC authority to create rules to which social media platforms must adhere in order to enjoy liability protections under Section 230, the most important law protecting our free speech online. Section 230 protects online platforms from liability for the speech of their users, while protecting their flexibility to develop their own speech moderation policies. The Trump executive order would upend that flexibility. 

As we’ve explained at length, this executive order was based on a legal fiction. The FCC’s role is not to enforce or interpret Section 230; its job is to regulate the United States’ telecommunications infrastructure: broadband, telephone, cable television, satellite, and all the various infrastructural means of delivering information to and from homes and businesses in the U.S. Throughout the Trump administration, the FCC has often shirked that duty—most dramatically, by abandoning any meaningful defense of net neutrality. Simington’s nomination seems to be an at-the-buzzer shot by an administration that’s been focused on undermining our protections for free speech online, instead of upholding the FCC’s traditional role of ensuring affordable access to the Internet and other communications technologies, and ensuring that those technologies don’t unfairly discriminate against specific users or uses.

The FCC Is Not the Speech Police—And Shouldn’t Be

Let’s take a look at the events leading up to Simington’s nomination. Twitter first applied a fact-check label to a tweet of President Trump’s in May, in response to his claims that mail-in ballots were part of a campaign of systemic voter fraud. As a private company, Twitter has the First Amendment right to implement such fact-checks, or even to choose not to carry someone’s speech for any reason.

The White House responded with its executive order that, among other things, directed the FCC to draft regulations that would narrow the Section 230 liability shield. As a result, it perverted the FCC’s role: it’s supposed to be a telecom regulator, not the social media police.

The White House executive order reflects a long-running (and unproven) claim in conservative circles that social media platforms are biased against conservative users. Some lawmakers and commentators have even claimed that their biased moderation practices somehow strip social media platforms of their liability protections under Section 230. As early as 2018, Sen. Ted Cruz incorrectly told Facebook CEO Mark Zuckerberg that in order to be shielded by 230, a platform had to be a “neutral public forum.” In the years since then, members of Congress have introduced multiple bills purporting to condition platforms’ 230 immunity on “neutral” moderation policies. As we’ve explained to Congress, a law demanding that platforms moderate speech in a certain way would be unconstitutional. The misguided executive order has the same inherent flaw as the bills: the government cannot dictate online platforms’ speech policies.

It’s not the FCC’s job to police social media, and it’s also not the president’s job to tell it to. By design, the FCC is an independent agency and not subject to the president’s demands. But when Republican FCC commissioner Michael O’Rielly correctly pointed out that government efforts to control private actor speech were unconstitutional, he was quickly punished. O’Rielly wrote [pdf], “the First Amendment protects us from limits on speech imposed by the government – not private actors – and we should all reject demands, in the name of the First Amendment, for private actors to curate or publish speech in a certain way.” The White House responded by withdrawing O’Rielly’s nomination and nominating Simington, one of the drafters of the executive order.

During a transition of power, it’s customary for independent agencies like the FCC to pause on controversial actions. The current FCC has so far adhered to that tradition, only moving forward items that have unanimous support. Every item the FCC has voted on since the election had the support of the Chair, the other four commissioners, and industry and consumer groups. For example, the FCC has moved forward on freeing up 5.9 GHz spectrum for unlicensed uses, a move applauded by EFF and most experts. But we worry that in nominating Simington, the administration is attempting to pave the way for a future FCC to go far beyond its traditional mandate and move into policing social media platforms’ policies. We’re glad to see Fight for the Future, Demand Progress, and several other groups rightfully calling on the Senate to not move forward on Nate Simington’s nomination.

The FCC’s Real Job Is More Important Than Ever 

There’s no shortage of work to do within the FCC’s traditional role and statutory mandate. The FCC must begin to address the pressure test that the COVID-19 pandemic has posed to the U.S. telecommunications infrastructure. Much of the U.S. population must now rely on home Internet subscriptions for work, education, and socializing. Millions of families either have no home Internet access at all or lack sufficient access to meet this new demand. The new FCC has a monumental task in front of it.

During his Senate confirmation hearing, Simington gave no real indication of how he plans to work on the real issues facing the agency: broadband access, remote school challenges, spectrum management, improving competition, and public safety rules, for example. The only things we learned from the hearing are that he plans to continue the Trump-era policy of refusing to regulate large ISPs and that he refuses to recuse himself from decisions on the misguided executive order that he helped write. Before the Simington confirmation hearing started, Trump again urged Republicans to quickly confirm his nominee on a partisan basis.

In response, Senator Richard Blumenthal called for a hold on Simington’s nomination, indicating real concern for the FCC’s independence from the White House. That means the Senate would need to bypass his filibuster if it truly wanted to confirm Trump’s nominee.

Sen. Blumenthal’s concerns are real and important. President Trump effectively fired his own commissioner (O’Rielly) for expressing basic First Amendment principles. Before it confirms Simington, the Senate ought to consider what the nomination means for the future of the FCC. As the pandemic continues to worsen, there are too many mission critical issues for the FCC to tackle for it to continue with Trump’s misguided war on Section 230.

Ernesto Falcon

ICANN Can Stand Against Censorship (And Avoid Another .ORG Debacle) by Keeping Content Regulation and Other Dangerous Policies Out of Its Registry Contracts

3 months 2 weeks ago

The Internet’s domain name system is not the place to police speech. ICANN, the organization that regulates that system, is legally bound not to act as the Internet’s speech police, but its legal commitments are riddled with exceptions, and aspiring censors have already used those exceptions in harmful ways. This was one factor that made the failed takeover of the .ORG registry such a dangerous situation. But now, ICANN has an opportunity to curb this abuse and recommit to its narrow mission of keeping the DNS running, by placing firm limits on so-called “voluntary public interest commitments” (PICs, recently renamed Registry Voluntary Commitments, or RVCs).

For many years, ICANN and the domain name registries it oversees have given mixed messages about their commitments to free speech and to staying within their mission. ICANN’s bylaws declare that “ICANN shall not regulate (i.e., impose rules and restrictions on) services that use the Internet’s unique identifiers or the content that such services carry or provide.” ICANN’s mission, according to its bylaws, “is to ensure the stable and secure operation of the Internet's unique identifier systems.” And ICANN, by its own commitment, “shall not act outside its Mission.”

But…there’s always a but. The bylaws go on to say that ICANN’s agreements with registries (the managing entities of each top-level domain like .com, .org, and .horse) and registrars (the companies you pay to register a domain name for your website) automatically fall within ICANN’s legal authority, and are immune from challenge, if they were in place in 2016, or if they “do not vary materially” from the 2016 versions.

Therein lies the mischief. Since 2013, registries have been allowed to make any commitments they like and write them into their contracts with ICANN. Once they’re written into the contract, they become enforceable by ICANN. These “voluntary public interest commitments”  have included many promises made to powerful business interests that work against the rights of domain name users. For example, one registry operator puts the interests of major brands over those of its actual customers by allowing trademark holders to stop anyone else from registering domains that contain common words they claim as brands.

Further, at least one registry has granted itself “sole discretion and at any time and without limitation, to deny, suspend, cancel, or transfer any registration or transaction, or place any domain name(s) on registry lock, hold, or similar status” for vague and undefined reasons, without notice to the registrant and without any opportunity to respond.  This rule applies across potentially millions of domain names. How can anyone feel secure that the domain name they use for their website or app won’t suddenly be shut down? With such arbitrary policies in place, why would anyone trust the domain name system with their valued speech, expression, education, research, and commerce?

Voluntary PICs even played a role in the failed takeover of the .ORG registry earlier this year by the private equity firm Ethos Capital, which is run by former ICANN insiders. When EFF and thousands of other organizations sounded the alarm over private investors’ bid for control over the speech of nonprofit organizations, Ethos Capital proposed to write PICs that, according to them, would prevent censorship. Of course, because the clauses Ethos proposed to add to its contract were written by the firm alone, without any meaningful community input, they had more holes than Swiss cheese. If the sale had succeeded, ICANN would have been bound to enforce Ethos’s weak and self-serving version of anti-censorship.

A Fresh Look by the ICANN Board?

The issue of PICs is now up for review by an ICANN working group known as “Subsequent Procedures.” Last month, the ICANN Board wrote an open letter to that group expressing concern about PICs that might entangle ICANN in issues that fall “outside of ICANN’s technical mission.” It bears repeating that the one thing explicitly called out in ICANN’s bylaws as being outside of ICANN’s mission is to “regulate” Internet services “or the content that such services carry or provide.” The Board asked the working group [pdf] for “guidance on how to utilize PICs and RVCs without the need for ICANN to assess and pass judgment on content.”

A Solution: No Contractual Terms About Content Regulation

EFF supports this request, and so do many other organizations and stakeholders who don’t want to see ICANN become another content moderation battleground. There’s a simple, three-part solution that the Subsequent Procedures working group can propose:

  • PICs/RVCs can only address issues with domain names themselves—not the contents of websites or apps that use domain names;
  • PICs/RVCs should not give registries unbounded discretion to suspend domain names;
  • and PICs/RVCs should not be used to create new domain name policies that didn’t come through ICANN processes.

In short, while registries can run their businesses as they see fit, ICANN’s contracts and enforcement systems should have no role in content regulation, or any other rules and policies beyond the ones the ICANN Community has made together.

A guardrail on the PIC/RVC process will keep ICANN true to its promise not to regulate Internet services and content.  It will help avoid another situation like the failed .ORG takeover, by sending a message that censorship-for-profit is against ICANN’s principles. It will also help registry operators to resist calls for censorship by governments (for example, calls to suppress truthful information about the importation of prescription medicines). This will preserve Internet users’ trust in the domain name system.

Mitch Stoltz

Once Again, Facebook Is Using Privacy As A Sword To Kill Independent Innovation

3 months 2 weeks ago

Facebook claims that their role as guardian of users’ privacy gives them the power to shut down apps that give users more control over their own social media experience. Facebook is wrong. The latest example is their legal bullying of Friendly Social Browser.

Friendly is a web browser with plugins geared towards Facebook, Instagram, and other social media sites. It’s been around since 2010 and has a passionate following. Friendly offers ad and tracker blocking and simplifies downloading of photos and videos. It lets users search their news feeds by keyword, or reorder their feeds chronologically, and it displays Facebook pages with alternative “skins.”

To Facebook’s servers, Friendly is just a browser like any other. Users run Friendly much as they would Google Chrome, Mozilla Firefox, or any other standard web browser. According to Friendly, its software doesn’t call any developer interfaces (APIs) into Facebook or Instagram. Friendly has also stated that they don’t collect any personal information about users, including posts or uploads. Friendly does collect some anonymous usage data, and sends the ads that people view to a third-party analytics firm.

Over the summer, Facebook’s outside counsel demanded that Friendly stop offering its browser. Facebook’s lawyer claimed that Friendly violated Facebook’s terms of service by “chang[ing] the way Facebook and Instagram look and function” and “impairing [their] intended operation.” She claimed, incorrectly, that violating Facebook’s terms of service was also a violation of the federal Computer Fraud and Abuse Act (CFAA) and its California counterpart.

Although Friendly explained to Facebook’s lawyers that their browser didn’t access any Facebook developer APIs, Facebook hasn’t budged from its demand that Friendly drop dead. 

Today, EFF sent Facebook a letter challenging Facebook’s legal claims. We explained that the CFAA and its California counterpart are concerned with “access” to a protected computer:

California law defines “access” as “to gain entry to, instruct, cause input to, cause output from, cause data processing with, or communicate with” a computer. Friendly is a web browser, so it is our understanding that Friendly does not itself “gain entry to” or “communicate with” Facebook in any way. Like other popular browsers such as Google Chrome or Mozilla Firefox, therefore, Friendly does not “access” Facebook; Facebook users do. But presumably Facebook knows better than to directly accuse its users of being malicious hackers if they change the colors of websites they view.

While EFF is not representing Friendly at this time, we weighed in because Facebook’s claims are dangerous. Facebook is claiming the power to decide which browsers its users can use to access its social media sites, an extremely broad claim. According to the reasoning of Facebook’s demand, accessibility software like screen readers, magnifiers, and tools that change fonts or colors to make pages more readable for visually impaired people all exist by Facebook’s good will, and could be shut down anytime if Facebook decides they “change the way Facebook and Instagram look and function.”

Friendly is far from the only victim of the company’s strong-arming. Just last month, Facebook threatened the NYU Ad Observatory, a research project that recruits Facebook users to install a plugin to collect the ads they’re shown. And in 2016, Facebook convinced a federal court of appeals that the CFAA barred a third-party social media aggregator from interacting with user accounts, even when those users chose to sign up for the aggregator’s service. In sum, Facebook’s playbook—using the CFAA to enforce spurious privacy claims—has made it harder for innovators, security experts, and researchers of all stripes to use Facebook in their work. 

Facebook has claimed that it must bring its legal guns to bear on any software that interoperates with Facebook or Instagram without permission, citing to the commitments that Facebook made to the Federal Trade Commission after the Cambridge Analytica scandal. But there are different kinds of privacy threats. Facebook’s understandable desire to protect users (and its own reputation) against privacy abuses by third parties like Cambridge Analytica doesn’t take away users’ right to guard themselves against Facebook’s own collection and mishandling of their personal data by employing ad- and tracker-blocking software like Friendly (or EFF’s Privacy Badger, for that matter). 

Nor do Facebook’s privacy responsibilities justify stopping users from changing the way they experience Facebook, and choosing tools to help them do that. Attempts to lock out third-party innovators are not a good look for a company facing antitrust investigations, including a pending lawsuit from the Federal Trade Commission.

The web isn’t television. Website owners might want to control every detail about how their sites look and function, but since the very beginning, users have always been in control of their own experience—it’s one of the defining features of the Web. Users can choose to re-arrange the content they receive from websites, save it, send it along to others, or ignore some of it by blocking advertisements and tracking devices. The law can’t stop users from choosing how to receive Facebook content, and Facebook shouldn’t be trying to lock out competition under a guise of protecting privacy.

Related Cases: Facebook v. Power Ventures
Mitch Stoltz

Video Analytics User Manuals Are a Guide to Dystopia

3 months 2 weeks ago

A few years ago, when you saw a security camera, you may have thought that the video feed went to a VCR somewhere in a back office that could only be accessed when a crime occurred. Or maybe you imagined a sleepy guard who only paid half-attention, and only when they discovered a crime in progress. In the age of internet connectivity, it’s now easy to imagine footage sitting on a server somewhere, inaccessible except to someone willing to fast-forward through hundreds of hours of footage.

That may be how it worked in 1990s heist movies, and it may be how a homeowner still sorts through their own home security camera footage. But that's not how cameras operate in today's security environment. Instead, advanced algorithms are watching every frame on every camera and documenting every person, animal, vehicle, and backpack as they move through physical space, and thus camera to camera, over an extended period of time. 

The term "video analytics" seems boring, but don't confuse it with how many views you got on your YouTube “how to poach an egg” tutorial. In a law enforcement or private security context, video analytics refers to using machine learning, artificial intelligence, and computer vision to automate ubiquitous surveillance. 

Through the Atlas of Surveillance project, EFF has found more than 35 law enforcement agencies that use advanced video analytics technology. That number is steadily growing as we discover new vendors, contracts, and capabilities. To better understand how this software works, who uses it, and what it’s capable of, EFF has acquired a number of user manuals. And yes, they are even scarier than we thought. 

Briefcam, which is often packaged with Genetec video technology, is frequently used at real-time crime centers. These are police surveillance facilities that aggregate camera footage and other surveillance information from across a jurisdiction. Dozens of police departments use Briefcam to search through hours of footage from multiple cameras in order to, for instance, zero in on a particular face or a specific colored backpack. This power of video analytics software would be particularly scary if used to identify people out practicing their First Amendment right to protest.

Avigilon systems are a bit more opaque, since they are often sold to businesses, which aren't subject to the same transparency laws. In San Francisco, for instance, Avigilon provides the cameras and software for at least six business improvement districts (BIDs) and Community Benefit Districts (CBDs). These districts blanket neighborhoods in surveillance cameras and relay the footage back to a central control room. Avigilon’s video analytics can undertake object identification (such as whether things are cars and people), license plate reading, and potentially face recognition.

You can read the Avigilon user manual here, and the Briefcam manual here. The latter was obtained through the California Public Records Act by Dylan Kubeny, a student journalist at the University of Nevada, Reno Reynolds School of Journalism. 

But what exactly are these software systems' capabilities? Here’s what we learned: 

Pick a Face, Track a Face, Rate a Face

If you're watching video footage on Briefcam, you can select any face, then add it to a "watchlist." Then, with a few more clicks, you can retrieve every piece of video you have with that person's face in it. 

Briefcam assigns all face images 1-3 stars. One star: the AI can't even recognize it as a person. Two stars: medium confidence. Three stars: high confidence.  
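The star system described in the manual amounts to bucketing a detection-confidence score. Here is a minimal sketch of that idea; the threshold values are invented for illustration, since Briefcam does not publish its actual cutoffs.

```python
# Hypothetical sketch of a confidence-to-star rating like the one
# Briefcam's manual describes. Thresholds here are made up.

def star_rating(confidence: float) -> int:
    """Map a face-detection confidence score (0.0-1.0) to 1-3 stars."""
    if confidence < 0.3:
        return 1  # the AI can't even confidently call it a person
    elif confidence < 0.7:
        return 2  # medium confidence
    return 3      # high confidence

ratings = [star_rating(c) for c in (0.1, 0.5, 0.9)]
print(ratings)  # [1, 2, 3]
```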

Detection of Unusual Events

Avigilon has a pair of algorithms that it uses to predict what it calls "unusual events." 

The first can detect "unusual motions," essentially patterns of pixels that don't match what you'd normally expect in the scene. It takes two weeks to train this self-learning algorithm.  The second can detect "unusual activity" involving cars and people. It only takes a week to train. 
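Avigilon's self-learning algorithm is proprietary, but "patterns of pixels that don't match what you'd normally expect" is the classic background-modeling approach from computer vision. This sketch only illustrates the general idea, learning a per-pixel baseline from "normal" footage and flagging frames that deviate from it; none of it reflects Avigilon's actual implementation.

```python
# Minimal background-subtraction sketch of "unusual motion" detection.
import numpy as np

def train_background(frames):
    """Average the training frames into a per-pixel background model."""
    return np.mean(frames, axis=0)

def unusual_motion(frame, background, threshold=30.0):
    """Flag a frame whose mean per-pixel deviation exceeds the threshold."""
    deviation = np.abs(frame.astype(float) - background)
    return float(deviation.mean()) > threshold

# Stand-in for weeks of "normal" footage: a few dark 4x4 frames.
normal = [np.full((4, 4), 10.0) for _ in range(5)]
bg = train_background(normal)

quiet_frame = np.full((4, 4), 12.0)    # close to the learned background
bright_frame = np.full((4, 4), 200.0)  # something unexpected in the scene

print(unusual_motion(quiet_frame, bg))   # False
print(unusual_motion(bright_frame, bg))  # True
```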

Also, there's "Tampering Detection" which, depending on how you set it, can be triggered by a moving shadow:

Enter a value between 1-10 to select how sensitive a camera is to tampering Events. Tampering is a sudden change in the camera field of view, usually caused by someone unexpectedly moving the camera. Lower the setting if small changes in the scene, like moving shadows, cause tampering events. If the camera is installed indoors and the scene is unlikely to change, you can increase the setting to capture more unusual events.

Pink Hair and Short Sleeves 

With Briefcam’s shade filter, a person searching a crowd could filter by the color and length of items of clothing, accessories, or even hair. Briefcam’s manual even states the program can search a crowd or a large collection of footage for someone with pink hair. 

In addition, users of BriefCam can search specifically by what a person is wearing and other “personal attributes.” Law enforcement attempting to sift through crowd footage or hours of video could search for someone by specifying blue jeans or a yellow short-sleeved shirt.

Man, Woman, Child, Animal

BriefCam sorts people and objects into specific categories to make them easier for the system to search for. BriefCam breaks people into the three categories of “man,” “woman,” and “child.” Scientific studies show that this type of categorization can misidentify gender nonconforming, nonbinary, trans, and disabled people whose bodies may not conform to the rigid criteria the software looks for when sorting people. Such misidentification can have real-world harms, like triggering misguided investigations or denying access.

The software also breaks down other categories, including distinguishing between different types of vehicles and recognizing animals.

Proximity Alert

In addition to monitoring the total number of objects in a frame or the relative size of objects, BriefCam can detect proximity between people and the duration of their contact. This might make BriefCam a prime candidate for “COVID-19 washing,” or rebranding invasive surveillance technology as a potential solution to the current public health crisis. 

Avigilon also claims it can detect skin temperature, raising another possible assertion of public health benefit. But, as we’ve argued before, remote thermal imaging can often be very inaccurate, and fail to detect virus carriers that are asymptomatic. 

Public health is a collective effort. Deploying invasive surveillance technologies that could easily be used to monitor protestors and track political figures is likely to breed more distrust of the government. This will make public health collaboration less likely, not more. 


Watchlists

One feature available with both Briefcam and Avigilon is watchlists, and we don't mean a notebook full of names. Instead, the systems allow you to upload folders of faces and spreadsheets of license plates, and then the algorithm will find matches and track the targets’ movement. The underlying watchlists can be extremely problematic. For example, EFF has looked at hundreds of policy documents for automated license plate readers (ALPRs), and it is very rare for an agency to describe the rules for adding someone to a watchlist.

Vehicles Worldwide 

Often, ALPRs are associated with England, the birthplace of the technology, and the United States, where it has metastasized. But Avigilon already has its sights set on new markets and has programmed its technology to identify license plates across six continents.

It's worth noting that Avigilon is owned by Motorola Solutions, the same company that operates the infamous ALPR provider Vigilant Solutions.


We’re heading into a dangerous time. The lack of oversight of police acquisition and use of surveillance technology has serious consequences for those misidentified or caught up in the self-fulfilling prophecies of AI policing.

In fact,  Dr. Rashall Brackney, the Charlottesville Police Chief, described these video analytics as perpetuating racial bias at a recent panel. Video analytics "are often incorrect," she said. "Over and over they create false positives in identifying suspects."

This new era of video analytics capabilities causes at least two problems. First, police could rely more and more on this secretive technology to dictate who to investigate and arrest by, for instance, identifying the wrong hooded and backpacked suspect. Second, people who attend political or religious gatherings will justifiably fear being identified, tracked, and punished. 

Over a dozen cities across the United States have banned government use of face recognition, and that’s a great start. But this only goes so far. Surveillance companies are already planning ways to get around these bans by using other types of video analytic tools to identify people. Now is the time to push for more comprehensive legislation to defend our civil liberties and hold police accountable. 

To learn more about Real-Time Crime Centers, read our latest report here.

Banner image source: Mesquite Police Department pricing proposal.

Dave Maass

Introducing Cover Your Tracks!

3 months 2 weeks ago

Today, we’re pleased to announce Cover Your Tracks, the newest edition and rebranding of our historic browser fingerprinting and tracker awareness tool Panopticlick. Cover Your Tracks picks up where Panopticlick left off. Panopticlick was about letting users know that browser fingerprinting was possible; Cover Your Tracks is about giving users the tools to fight back against the trackers, and improve the web ecosystem to provide privacy for everyone.

A screen capture of the front page of Cover Your Tracks. The mouse clicks the “Test your browser” button, which loads a results page with a summary of protections the browser has in place against fingerprinting and tracking. The mouse scrolls down to toggle to “detailed view,” which shows more information about each metric, such as System Fonts, Language, and AudioContext fingerprint, among many others.

Over a decade ago, we launched Panopticlick as an experiment to see whether the different characteristics that a browser communicates to a website, when viewed in combination, could be used as a unique identifier that tracks a user as they browse the web. We asked users to participate in an experiment to test their browsers, and found that overwhelmingly the answer was yes—browsers were leaking information that allowed web trackers to follow their movements.

The old Panopticlick website.
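The core mechanism Panopticlick demonstrated, combining many individually innocuous browser attributes into one identifier, can be sketched in a few lines. This is a simplified illustration, not any tracker's actual code; real fingerprinting draws on far more signals (canvas rendering, AudioContext, installed fonts, and more), and the attribute values below are invented.

```python
# Sketch: hashing a stable serialization of browser attributes into
# a single fingerprint identifier.
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Combine browser attributes into a short, stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080x24",
    "timezone": "America/Los_Angeles",
    "language": "en-US",
    "fonts": ["Arial", "DejaVu Sans", "Noto Color Emoji"],
}

print(fingerprint(browser))
# The same attribute values always yield the same identifier, which
# is what lets a tracker recognize the browser across unrelated sites.
```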

In this new iteration, Cover Your Tracks aims to make browser fingerprinting and tracking more understandable to the average user. Helpful explainers accompany each browser characteristic, describing how it contributes to a fingerprint and giving users an in-depth look into just how trackers can use their browsers against them.

Our browsers leave traces of identifiable information just like an animal might leave tracks in the wild. These traces can be combined into a unique identifier that follows users around the web, like wildlife that has been tagged by an animal tracker. And, on the web as in the wild, one of the best ways to confuse trackers is to blend in with the crowd, making it hard for them to single you out. Some browsers protect their users by making all instances of the browser look the same, regardless of the computer it’s running on. In this way, there is strength in numbers. Users can also “cover their tracks” by installing extensions like our own Privacy Badger.
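The "strength in numbers" idea can be made concrete: Cover Your Tracks reports each trait's identifying power in bits, where a trait shared by 1 in N browsers carries log2(N) bits of identifying information. The frequencies below are invented for illustration.

```python
# Surprisal sketch: how identifying is a browser trait?
import math

def identifying_bits(fraction_of_browsers: float) -> float:
    """Bits of identifying information for a trait shared by this
    fraction of all browsers (1-in-N trait carries log2(N) bits)."""
    return -math.log2(fraction_of_browsers)

print(identifying_bits(0.5))       # 1.0 bit: half of browsers share it
print(identifying_bits(1 / 1024))  # 10.0 bits: a 1-in-1024 trait
# Rarer traits are more identifying, which is why blending into a
# large pool of identical-looking browsers protects you.
```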

A screenshot from Cover Your Tracks’ learning page.

For beginners, we’ve created a new learning page detailing the methodology we use to mimic trackers and test browsers, as well as next steps users can take to learn more and protect themselves. Because tracking and fingerprinting are so complex, we wanted to provide users a way to deep-dive into exactly what kind of tracking might be happening, and how it is performed.

We have also worked with browser vendors such as Brave to provide more accurate results for browsers that are employing novel anti-fingerprinting techniques. Add-ons and browsers that randomize the results of fingerprinting metrics have the potential to confuse trackers and mitigate the effects of fingerprinting as a method of tracking. In the coming months, we will provide new infographics that show users how they can become safer by using browsers that fit in with large pools of other browsers.

We invite you to test your own browser and learn more: just head over to Cover Your Tracks!

Bill Budington

Find Out How Ad Trackers Follow You On the Web With EFF’s “Cover Your Tracks” Tool

3 months 2 weeks ago
Beginner-Friendly Tool Gives Users Options for Avoiding Browser Fingerprinting and Tracking

San Francisco—The Electronic Frontier Foundation (EFF) today launched Cover Your Tracks, an interactive tool that teaches users how advertisers follow them as they shop or browse online, and how to fight back against corporate trackers to protect their privacy, mitigate relentless ad targeting, and improve the web ecosystem for everyone.

With Black Friday and Cyber Monday just days away, when millions of users will be shopping online, Cover Your Tracks provides an in-depth learning experience—aimed at non-technical users—about how they are unwittingly being tracked online through their browsers.

“Our browsers leave traces of identifiable information when we visit websites, like animals might leave tracks in the wild, and that can be combined into a unique identifier that follows us online, like wildlife that’s been tagged,” said EFF Senior Staff Technologist Bill Budington. “We want users to take back control of their Internet experience by giving them a tool that lets them in on the hidden tricks and technical ploys online advertisers use to follow them so they can cover their tracks.”

Cover Your Tracks allows users to test their browsers to see what information about their online activities is visible to, and scooped up by, trackers. It shines a light on tracking mechanisms that utilize cookies, code embedded on websites, and more. Users can also learn how to cover some of their tracks by changing browser settings and using anti-tracking add-ons like EFF’s Privacy Badger.

Cover Your Tracks builds on EFF’s ground-breaking tracker awareness tool Panopticlick, which exposed how advertisers create “fingerprints” of users by capturing little bits of information given off by their browsers and using that to identify and follow them around the web and build profiles for ad targeting.

Panopticlick showed users that browser fingerprinting existed. Cover Your Tracks takes the next step, helping empower users to uncover and combat trackers. The goal is to provide easy-to-understand information about exactly what kind of fingerprint tracking might be happening and how it’s performed.

“Cover Your Tracks shows how Amazon, Facebook, Google, Twitter, and hundreds of lesser known entities work together to exploit browser information in order to track users. They then use that information to bombard users with ads,”  said Budington. “We want users to learn a few tricks of their own to confuse trackers by utilizing browsers and extensions that give off the same information regardless of what computers they’re running on, or randomize certain bits of information so they can’t be used as a reliable tracker.”

Cover Your Tracks offers a learning page about the methodology EFF uses to mimic trackers and test browsers. EFF plans to add new infographics demonstrating how users can employ add-ons and new kinds of anti-fingerprinting browsers to fight tracking.

Visit Cover Your Tracks:

For more on corporate surveillance:

For more on Panopticlick:

Contact: William Budington, Senior Staff Technologist
Karen Gullo

macOS Leaks Application Usage, Forces Apple to Make Hard Decisions

3 months 2 weeks ago

Last week, users of macOS noticed that attempting to open non-Apple applications while connected to the Internet resulted in long delays, if the applications opened at all. The interruptions were caused by a macOS security service attempting to reach Apple’s Online Certificate Status Protocol (OCSP) server, which had become unreachable due to internal errors. When security researchers looked into the contents of the OCSP requests, they found that these requests contained a hash of the developer’s certificate for the application that was being run, which was used by Apple in security checks.[1] The developer certificate contains a description of the individual, company, or organization which coded the application (e.g. Adobe or Tor Project), and thus leaks to Apple that an application by this developer was opened.

Moreover, OCSP requests are not encrypted. This means that any passive listener also learns which application a macOS user is opening and when.[2] Those with this attack capability include any upstream service provider of the user; Akamai, the content delivery network that hosts Apple’s OCSP service; or any hacker on the same network as you when you connect to, say, your local coffee shop’s WiFi. A detailed explanation can be found in this article.
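The passive-listener problem can be illustrated simply: because the request travels in plaintext, an eavesdropper can precompute hashes of developer certificates they care about and match them against what they see on the wire. The certificate bytes and hash choice below are stand-ins for illustration, not Apple's actual wire format.

```python
# Sketch: matching a sniffed, unencrypted certificate hash against
# a precomputed table of known developer certificates.
import hashlib

def cert_hash(cert_bytes: bytes) -> str:
    return hashlib.sha1(cert_bytes).hexdigest()

# An observer precomputes hashes of developer certs of interest.
known_developers = {
    cert_hash(b"Tor Project developer certificate"): "Tor Project",
    cert_hash(b"Adobe developer certificate"): "Adobe",
}

# The hash carried inside a sniffed, plaintext OCSP request.
sniffed_hash = cert_hash(b"Tor Project developer certificate")

print(known_developers.get(sniffed_hash, "unknown developer"))
# The eavesdropper learns which developer's app was just launched,
# without ever touching the user's machine.
```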

Part of the concern that accompanied this privacy leak was the exclusion of userspace applications like LittleSnitch from the ability to detect or block this traffic. Even if altering traffic to essential security services on macOS poses a risk, we encourage Apple to allow power users the ability to choose trusted applications to control where their traffic is sent.

Apple quickly announced a new encrypted protocol for checking developer certificates and that they would allow users to opt out of the security checks. However, these changes will not roll out until sometime next year. Developing a new protocol and implementing it in software is not an overnight process, so it would be unfair to hold Apple to an impossible standard.

But why has Apple not simply turned the OCSP requests off for now? To answer this question, we have to discuss what the OCSP developer certificate check actually does. It prevents unwanted or malicious software from being run on macOS machines. If Apple detects that a developer has shipped malware (either through theft of signing keys or malice), Apple can revoke that developer’s certificate. When macOS next opens that application, Apple’s OCSP server will respond to the request (through a system service called `trustd`) that the developer is no longer trusted. So the application doesn’t open, thus preventing the malware from being run.
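The revocation check described above boils down to a simple gate before every launch. This is a toy model of the flow, with invented names and data; Apple's actual `trustd` implementation is far more involved (caching, soft-fail behavior, and so on).

```python
# Toy model of the developer-certificate check performed at app launch.
REVOKED = {"stolen-key-developer"}  # certs the vendor has revoked

def ocsp_status(developer_cert: str) -> str:
    """Stand-in for querying the OCSP responder."""
    return "revoked" if developer_cert in REVOKED else "good"

def may_launch(developer_cert: str) -> bool:
    """Launch the app only if its developer cert is still trusted."""
    return ocsp_status(developer_cert) == "good"

print(may_launch("honest-developer"))      # True
print(may_launch("stolen-key-developer"))  # False: the malware is blocked
```

Turning the check off entirely would close the privacy leak, but would also disable this malware-blocking gate for every user, which is the trade-off the article describes.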

Fixing this privacy leak, while maintaining the safety of applications by checking for developer certificate revocations through OCSP, is not as simple as fixing an ordinary bug in code. This is a structural bug, so it requires structural fixes. In this case, Apple faces a balancing act between user privacy and safety. A criticism can be made that they haven’t given users the option to weigh the dilemma on their own, and simply made the decision for them. This is a valid critique. But the inevitable response is equally valid: that users shouldn’t be forced to understand a difficult topic and its underlying trade-offs simply to use their machines.

Apple made a difficult choice to preserve user safety, but at the peril of their more privacy-focused users. macOS users who understand the risks and prefer privacy can take steps to block the OCSP requests. We recommend that users who do this set a reminder for themselves to restore these OCSP requests once Apple adds the ability to encrypt them.

[1] Initial reports of the failure claimed Apple was receiving hashes of the application itself, which would have been even worse, if it were true.

[2] Companies such as Adobe develop many different applications, so an attacker would be able to establish that the application being opened is one of the set of all applications that Adobe has signed for macOS. Tor, on the other hand, almost exclusively develops a single application for end-users: the Tor Browser. So an attacker observing the Tor developer certificate will be able to determine that Tor Browser is being opened, even if the user takes steps to obscure their traffic within the app.

Bill Budington