Tell Trump’s Patent Office Director: Don’t Make Permanent Rule Changes Now


In the final days of the administration, Andrei Iancu, President Trump’s Director of the U.S. Patent and Trademark Office, is trying to push through permanent rule changes that would destroy the post-grant review system. Iancu is going all out to weaken “inter partes review” proceedings (or IPRs), which are the most effective mechanisms we have for getting the Patent Office to cancel patents it never should have granted in the first place. If these rules are adopted, the weakened IPR system will become a bonanza for patent trolls—and stay that way into the next administration.

We spoke out earlier this year about how the Patent Office was undermining the IPR process through bogus rules the Patent Trial and Appeal Board (PTAB) pushed through last year. Now the Director is seeking to make those rules even more powerful and permanent, and we need EFF supporters to help us stop these dangerous changes.

Update: The Patent Office has changed the deadline for public comment to Thursday, December 3, 2020.  

Take Action

Defend Strong Patent Reviews

Trump’s Patent Office Director, Andrei Iancu, has instituted new policies that enable more patent abuse and help patent trolls. In 2018, Iancu claimed that small businesses and individuals who spoke out against patent trolls were spreading “scary monster stories.”

At EFF, we hear regularly from small businesses and individuals who are fighting off extortionate patent demands. We know their stories are all too real.  

Now, Iancu is proposing rule changes that will sabotage the system that lets the Patent Office cancel bad patents. Congress created the IPR system in 2011, as part of the America Invents Act. It allows members of the public to go to the Patent Trial and Appeal Board and present evidence that a patent is invalid. 

In the past several years, IPR has become the most important way to get the Patent Office to correct its mistakes. That’s crucial because more than 300,000 patents are granted each year, especially in the fields of software and technology; yet more than half of patents that go to trial turn out to be invalid. The rate is even higher in IPR cases that go to a final decision: more than 60% of the time, PTAB judges find that all the patent’s claims are invalid.

The IPR process is faster and cheaper than fighting patents in a federal district court, which can cost millions of dollars and take years. That’s why EFF was able to use the IPR process to knock out the “podcasting patent,” whose owner falsely claimed to have invented the basic idea of  podcasting—and then moved aggressively to force podcast creators to pay licensing fees. 

If Iancu can push through this package of new rules, the PTAB will throw out many IPR petitions before judges even look at the challenger’s evidence. 

First, the PTAB will be able to deny an IPR challenge anytime there’s a related court case against the challenger. This change alone could tear apart the IPR system, because it will let patent owners game the system: patent trolls will be able to manipulate trial schedules and then use them to get an IPR denied.

Second, the PTAB won’t consider more than one petition per patent—with rare exceptions, even if the petitioners are different. The PTAB is supposed to consider any petition that satisfies the statute’s criteria. If the new rules pass, a patent that survives one IPR may never have to face another—even if the second IPR is based on new and stronger evidence.

Together with allied organizations, we spoke out against Iancu’s attempt to undermine the IPR process. But he’s pushing ahead anyway. 

We need your help to protect IPRs. Right now, the rules are in a public comment period that continues until December 3 (extended from the original deadline of November 19). We need EFF supporters to file a comment opposing the proposed rules.

The best comments will state in your own words how you’ve been affected by invalid patents, or why you’re upset that the Patent Office is considering unfair rules that are harmful to the economy and innovation.

We’re also including a sample statement that you can cut and paste. If you use the sample, please consider adding some details about your own experience or concerns with poor-quality patents or patent trolls.

Sample Comment:

I oppose the U.S. Patent and Trademark Office’s proposed regulations changing the nature of PTAB trials, Docket No. PTO-C-2020-0055.

[Write why you care about the public's ability to fight low-quality patents. For example, perhaps you work in technology and bad software patents have affected you, your own small business, or your employment.] 

First, if the regulations are adopted, people and companies won’t be able to challenge patents through the IPR process when they need to. The PTAB will be able to deny IPRs simply because of the timing of district court cases. This will allow patent holders to game the system and file strategic litigation to avoid IPRs. The PTAB should not give any consideration to the status of court proceedings when deciding whether to initiate an IPR.

Second, the regulations limit the number of petitions that can be filed against the same patent. That makes no sense. There will often be multiple challenges to the same patent, especially if it’s being asserted aggressively. Different challenges raise different evidence and sometimes address different claims. Congress’s intent in the America Invents Act was to reduce the amount of unnecessary patent litigation by allowing the PTAB to weed out invalid patents before a trial takes place. There should be no arbitrary limits on the number of petitions per patent.

The rights of technology developers and users are no less important than the rights of patent owners. When patents are evaluated in federal court, nearly half of them are found to be invalid. 

Overall, PTAB trials must be fair, affordable, and accessible. When petitions are likely to succeed on the merits, they should be granted. What happens in the courts, or to other petitions, shouldn’t matter.

These proposed regulations will destroy the U.S. system for post-grant patent challenges. Wrongly granted patents are a major burden on the economy and drain on innovation. Every week, they’re used to threaten small businesses with extortionate licensing demands—especially people who make and use technology. To promote innovation, the Patent Office needs to improve the quality of granted patents, and to do that, we need the robust IPR system Congress designed.

This is just a sample; if you want to make changes or write your own comment entirely, that’s great! The most important thing is that you send a comment. It doesn’t need to sound perfect, and you don’t need to be an expert on patents. 

At EFF, we speak up for technology users who are victimized by illegitimate patent threats. Today, we need your help. 

Take Action

Defend Strong Patent Reviews

Our “Take Action” button will link you directly to the government’s public comment form. You can read the details of the government’s proposed rulemaking here on the Federal Register’s website. Note that the comments filed with the government in this matter will become public records. 

Joe Mullin

Podcast Episode: The Secret Court Approving Secret Surveillance

Episode 001 of EFF’s How to Fix the Internet

Julian Sanchez joins EFF hosts Cindy Cohn and Danny O’Brien as they delve into the problems with the Foreign Intelligence Surveillance Court, also known as the FISC or the FISA Court. Sanchez explains how the FISA Court signs off on surveillance of huge swaths of our digital lives, and how the format and structure of the FISA Court is inherently flawed.

In this episode, you’ll learn about:

  • How the FISA Court impacts your digital privacy;
  • The makeup of the FISA Court and how judges are chosen;
  • How almost all of the key decisions about the legality of America's mass Internet spying projects have been made by the FISC;
  • How the current system promotes ideological hegemony within the FISA court;
  • How the FISC’s endless-secrecy-by-default system insulates it from the ecosystem of jurisprudence that could act as a guardrail against poor decisions as well as accountability for them;
  • How the FISC’s remit has ballooned from approving individual surveillance orders to signing off on broad programmatic types of surveillance;
  • Why we need a stronger amicus role in the FISC, and especially a bigger role for technical experts to advise the court;
  • Specific reforms that could be enacted to address these systemic issues and ensure a more fair review of surveillance systems.

Julian is a senior fellow at the Cato Institute and studies issues at the intersection of technology, privacy, and civil liberties, with a particular focus on national security and intelligence surveillance. Before joining Cato, Julian served as the Washington editor for the technology news site Ars Technica, where he covered surveillance, intellectual property, and telecom policy. He has also worked as a writer for The Economist’s blog Democracy in America and as an editor for Reason magazine, where he remains a contributing editor. Sanchez has written on privacy and technology for a wide array of national publications, ranging from the National Review to The Nation, and is a founding editor of the policy blog Just Security. He studied philosophy and political science at New York University. Find him on Twitter at @Normative.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify, or your podcast player of choice. You can also find this episode as an MP3 on the Internet Archive. If you have any feedback on this episode, please email podcast@eff.org.

Resources:

  • NSA & FBI
  • Court Cases
  • Section 215 & FISA
  • Books

Transcript of Episode 001: The Secret Court Approving Secret Surveillance

Danny O'Brien:
Welcome to How to Fix the Internet with the Electronic Frontier Foundation, a podcast that explores some of the biggest problems we face online right now, problems whose source and solution are often buried in the obscure twists of technological development, societal change, and subtle details of Internet law.

Cindy Cohn:
Hi everyone, I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation, and I'm also a lawyer.

Danny O'Brien:
And I'm Danny O'Brien, and they let me work at EFF too—even though I'm not a lawyer. Welcome to How to Fix the Internet, a podcast that explores some of the more pressing problems facing the net today and then solves them. You're welcome, Internet.

Cindy Cohn:
It's easy to see everything that's wrong with the Internet and the policies that govern it. It's a lot harder to start naming the solutions to those problems, and even harder sometimes to imagine what the world would look like if we got it right. But frankly, that's the most important thing. We can only build a better Internet if we can envision it.

Danny O'Brien:
So with an ambitious name like 'How to Fix the Internet', you might think we're going to tackle just about everything. But we're not, and we're doing that on purpose. Instead, we've chosen to go deep on just a few specific issues in this podcast.

Cindy Cohn:
And sometimes we know the right answer—we're EFF after all. But other times, we don't. And like all complex things, the right answer might be a mix of different ideas, or there may be many solutions that could work, or many roads to get us there. There are also some bad ideas sometimes, and we have to watch for the blowback from those. But what we hope to create here is a place where experts can both tell us what's wrong and give us hope in their view of what it's going to look like if we get it right.

Danny O'Brien:
I do feel that some parts of the digital world are a little bit more obviously broken than others. Mass surveillance seems like one of those really blatant flaws. At EFF we've spent years fighting pervasive US government surveillance online, and our biggest fights have been in what seems to us the most obvious place to fight it, which is in the public US courts. But there is one court where our lawyers will likely never get a chance to stand up and argue their case, even though it's got surveillance in its name.

Cindy Cohn:
Our topic today is the Foreign Intelligence Surveillance Court, which is also called the FISC or the FISA Court. The judges who sit on this court are hand picked by the Chief Justice of the United States Supreme Court, currently Chief Justice Roberts. The FISA Court meets in secret and has a limited public docket, and until recently it had almost no public records of its decisions. In fact, the very first case on the FISC docket was an EFF transparency case that ended up getting referred to the FISC. But this is where almost all of the key decisions about the legality of America's mass Internet spying projects have been made, and what that means is that pretty much everybody in the United States is affected by this secret court's decisions despite having no influence over it, no input into it, and no way to hold the court accountable if it gets things wrong.

Danny O'Brien:
Joining us now to discuss just what an anomaly, and what an American and global injustice, the secret FISA Court is, and how we could do better, is Julian Sanchez, the Cato Institute's specialist in surveillance legal policy. Before joining Cato, Julian served as the Washington editor for Ars Technica, where he covered surveillance, intellectual property, and telecom policy. He has also worked as a writer for The Economist's blog Democracy in America and as an editor for Reason magazine, where he remains a contributing editor. He's also on Twitter as @Normative, and that's one of my favorite follows there.

Danny O'Brien:
Julian, welcome to the podcast. We are so happy to have you here today.

Julian Sanchez:
Thanks for having me on.

Cindy Cohn:
Julian, you have been incredibly passionate about reining in mass surveillance for as long as almost anyone, perhaps even me. Where does that passion come from for you?

Julian Sanchez:
I don't know if I have an origin story, unless I was bitten by a radioactive J. Edgar Hoover or something. But as an adolescent I was in a way much more technical than I am now. I ran a dial-up BBS when that was still a thing, before everyone was on the Internet, and I remember watching people dial in. I think people sensed it as a private activity as they were writing messages to each other and tooling around looking for things to download. Sometimes I would just be sitting there watching them and thinking, gosh, the person who operates the platform really has visibility on a lot of things that we don't instinctively think of as observed. Probably just as a result of being online, for some values of online, from a pretty young age, I was interested in a lot of the puzzles of how you apply rules that we expect to govern our conduct in physical space to this novel regime.

Julian Sanchez:
I remember in college jumping ahead and reading Lawrence Lessig's Code and discussing the puzzle of the idea of a perfect search. That is to say, if you had a piece of software, a virus let's say, that could go out and look only for contraband, it would only ever report back to the server if it found known child pornography or known stolen documents. Would that constitute a search? Is that the kind of conduct that essentially, because it would never reveal anything but contraband, could be done universally without a warrant, or should we think differently about it than, for example, the Supreme Court thinks about dog sniffs? If it only ever reveals what is criminal, that is, the presence of narcotics or bombs, then it doesn't technically count as a search even though it is a way of peering into a protected space.

Julian Sanchez:
More recently, there was the Risen and Lichtblau story back in 2005, 'Bush Lets US Spy on Callers Without Courts,' which was the first public hint of what we later came to know was a mass program of warrantless surveillance called Stellar Wind. I was just dissatisfied with the quality of the coverage and ended up buying the one book you could get about FISA, 'National Security Investigations and Prosecutions' by David Kris and Douglas Wilson, and burning through it like Harry Potter. I just found it inherently fascinating. This was at a time, and I was still a journalist then, when most of the reporters writing about this did not understand FISA very well. They certainly had not read this rather thick, and to normal human beings, boring treatise, and so I found myself, because I now had this rather strange knowledge base, writing quite a lot about it, partly just because the quality of a lot of the coverage of the issue was not very well informed.

Cindy Cohn:
We had a similar experience here at EFF. At that time it was my colleague Lee Tien and I, and we had read Kris, and so we ended up becoming the only people around who knew about the secret court before everybody suddenly became aware of it. But let's back up a second. Why do we have a FISA Court? Where is it? I've talked a little about who is on it, but where does this idea come from?

Julian Sanchez:
This grows out of the Foreign Intelligence Surveillance Act of 1978, which was passed in response to disclosures of a dizzying array of abuses of surveillance authority, and of power more generally, by the FBI especially, but by the American intelligence community in general. For decades, oversimplifying a bit, wiretapping had effectively been at first just illegal, period, and then very tightly constrained, and the FBI had essentially decided those rules can't possibly really apply to us. And so FISA, for the first time, created an intelligence-specific framework for doing electronic surveillance. The idea of having a separate court for this, I think, grew out of a number of factors.

Julian Sanchez:
One is the sense that there was this need for extreme secrecy where you were dealing with potentially people with foreign state backing who were not necessarily going to be sticking around for criminal prosecution. And when you're talking about intelligence gathering, criminal prosecution isn't necessarily the point. This is an activity that is not really designed to yield criminal cases; you don't really want the methods ever disclosed. You're dealing with adversaries who have the capability to potentially plant people in ordinary courts, and these are proceedings where you're discussing intelligence interests, sources, and methods, so there was a sense that it would be better to have a separate, extra-secure court. And also that you might not want to have to explain all these both highly sensitive and potentially quite complicated intelligence practices and information to whatever random magistrate judge happened to be on the roster in the jurisdiction where you were looking.

Julian Sanchez:
And also that the nature of intelligence surveillance is quite different, insofar as, again, you're not necessarily looking at someone who has committed a crime. You think someone is working on behalf of a foreign power and trying to gather intelligence for them or engage in clandestine intelligence activities, but you don't necessarily have a specific crime you think has been committed. Your purpose in gathering intelligence is not to prosecute crimes. These are the cluster of reasons around the formation of a separate court for that purpose. It originally consisted of seven federal district judges; now it's 11, after the USA Patriot Act increased the number. They continue serving on their regular courts and then, in effect, take turns in rotation sitting for a week and hearing applications from the Justice Department and the FBI to conduct electronic surveillance.

Cindy Cohn:
The court started out as one thing, this idea of individual secret warrants for spies basically, but it's really changed in the past decade. Can you walk us through how those shifts happened and why?

Julian Sanchez:
Of course, to a great extent, older FISA Court opinions are not available. The first ever published opinion of the FISA Court was in 2002, and it was quite a few years before we got a second. Now quite a number of more recent ones are public, but we still have to speculate about the earlier history of the court. But veterans of the court, that is, retired FISC judges, have effectively confirmed that in its early years the FISA Court was primarily about assessing the adequacy of individual warrant applications. It was just bread-and-butter magistrate-judge work, usually almost scut work. Okay, have you made the showing that there is probable cause to believe that the target of the surveillance is an agent of a foreign power? You have, or you haven't. In 99.9% of cases, it was "you have," and they approved that individual warrant. As we get to the post-9/11 era in particular, you're dealing with questions of often trying to figure out who an unknown target is. You might have someone who's using a particular email address or other account, for whom you don't otherwise necessarily have an identity.

Julian Sanchez:
You're potentially trying to sift through a lot of data to figure out who your target is or which data pertains to the people you're interested in. There is a shift toward more programmatic sorts of surveillance, and so the court increasingly is not passing on the question of "have you established a probable cause showing with respect to bad guy X," but rather: does the law, does a statute written to deal with pre-Internet communications technology, permit you to do the surveillance you're contemplating? And in particular, might it allow you to gather information in ways that go beyond just targeting a particular facility, a particular phone line, that is, the home phone of a particular known target? And so it ended up building this kind of secret body of precedent around what kinds of programs for Internet-type network surveillance were permissible under a statute that was not written with that in mind.

Cindy Cohn:
They really did shift from individual warrants to approving whole programs, and whole programs that really went beyond "is this person a spy?" to "let's look at this whole network and see if maybe there is something that indicates that a spy might be there." It really flips the basic way that we think about investigations. From my perspective, obviously, I've been litigating this in the courts for a long time, so it kind of flipped the whole thing on its head.

Julian Sanchez:
And so, maybe I should give some more concrete examples. We know there was a bulk telephony metadata program under one FISA authority, which actually was sort of the second case of this kind the FISA Court had to consider. There was an earlier question presented by a program that used what was called the pen trap authority, pen register trap and trace authority. In the traditional phone context, this is about essentially real-time metadata surveillance. Meaning, let's say there's a particular phone number that we think is up to no good; maybe we don't have a full-blown probable cause wiretap order for that number yet, but we want to know who this target is calling and who's calling that target.

Julian Sanchez:
A pen register trap and trace order lets you get realtime data about what calls are happening to and from that number, who they are from, and how long the calls last. In the Internet era, the question is, what kind of realtime metadata does that let you get? And when the statute talks about a facility at which this information collection is directed, traditionally that meant a phone number is the facility. But in the Internet era, you had questions, because the standard for this kind of order is lower; you're not, in theory, getting the full content, the full email, the full phone conversation. You can get one of these pen trap orders under Section 214 of the USA Patriot Act with a lot less than probable cause.

Julian Sanchez:
The question is, we're not talking about regular phones anymore; we're talking about Internet accounts and IP addresses and servers. What can a facility be? Can we say, we want all the metadata and the realtime transactional information for a particular server and all the traffic coming to and from it? So we're not just talking about one individual phone line, or maybe even a corporate phone line used by a number of people, but facilities that may be handling millions of people's traffic, or at least tens of thousands of people's traffic. The court, in an opinion I don't think is public in full at this point, essentially said, at least with respect to international communications, we're going to be pretty permissive about what you can collect.

Danny O'Brien:
This is the other shift that I see, which is that not only is FISA not dealing with regular phones anymore but with these big servers carrying millions of people's traffic; the sort of target has changed too, partly because we're not really talking about agents of a foreign power, we're not talking about spy versus spy. It became much more diffuse than that. We're talking about random stochastic terrorists, where you don't necessarily know who they are. But there's also this switch from "we can do foreign surveillance because we're targeting foreign powers and their spies" to "we're just surveilling foreigners," like they don't have rights under this court. So the question is, how do we scoop out this data and separate the stuff that legally we are concerned about, which is US citizens' communications, while everything else is kind of fair game? And then we have a secret court that doesn't even have any kind of representation of US citizens' interests, but is also making these kinds of human rights and foreign policy decisions too.

Julian Sanchez:
The debate around the authorities that the FISA Court oversees has been very US-citizen-centric, so you can watch tapes from C-SPAN where a lot of defenders are saying, "look, as long as they are targeting foreigners, who cares if they don't have constitutional rights." Some of us think people are human and have human rights even if they had the poor taste to be born somewhere other than the United States, and so this is perhaps not something we should entirely shrug off. But there's also this interesting shift from the idea that you should be concerned if the communications of an American with Fourth Amendment rights are surveilled, to the idea that really what's significant in terms of encroaching on people's rights is who is targeted. And for practical reasons, of course, you understand why this would be the focus, because you cannot know in advance whose communications you will intercept when you target somebody. You know who you're going to target, but you have no idea who they might talk to. That's the point, in part, of doing the surveillance.

Julian Sanchez:
But if you look at the text of the Fourth Amendment, it doesn't say the "right of the people against being targeted shall not be violated." It says "the right of the people to be secure in their persons and houses" and papers, or the digital equivalent thereof. And in a sense, the fundamental Fourth Amendment concern at the time was general warrants, the idea of these sort of open-ended authorizations to search that did not target anyone. From the perspective of the people who signed off on the Fourth Amendment, it was not a mitigating consideration to say, don't worry if your communications are collected, you weren't the target. The thing they found most egregious, the thing they thought was the most offensive abuse, was surveillance that did not have a particular target, that made it open to anyone to be swept into the dragnet.

Danny O'Brien:
Right. And just to spell this out, general warrants, and this is a British invention so I apologize, were this idea that you could just get a warrant for everybody in a town, or everybody who might be associated with something, so they were an early mass surveillance warrant.

Julian Sanchez:
And it's intimately connected with political dissent and its suppression. Some of the most controversial early cases that the American framers looked to involved a publication called the 'North Briton'; issue number 45 was the one that really annoyed the King, and so there was an authorization given to the King's messengers to make diligent search for the unknown, anonymous writers and publishers of this seditious publication. The whole problem was it was published anonymously, so they didn't know in advance who was responsible, so they thought: we need the authority to be able to rifle through the possessions of all the folks we suspect of maybe not being as loyal as they ought to be, and to give them carte blanche to decide who the appropriate targets are. That is what the British courts ultimately said was destructive of liberty in a pretty toxic way. Chief Justice Pratt, later Lord Camden, wrote some pretty inspirational prose about why that kind of authority was fundamentally incompatible with a free society, and that was a great influence on the framers of the Fourth Amendment, who had the same objection to general warrants, or general search authorizations that empowered customs officials to essentially look for contraband without particularized judicial authorization.

Danny O'Brien:
And there's this subtle thing here, where you only get to make that kind of discrimination, that kind of distinction, particularly when you're separating what is terrorism and what is political action, if there is someone in the court testifying on behalf of the person that might be being targeted. And that's what a secret court like the FISA Court just didn't have for a very long time, and barely has now.

Julian Sanchez:
Regular courts don't have that either, of course. When you go to apply for a wiretap, even if it's in a criminal case, you don't call up the lawyer of the person you're wiretapping and say, would you come in and do an adversarial proceeding in court about whether we can wiretap you? You tend not to get very much useful information that way. But there is the back end. Which is to say, yes, it's an ex parte proceeding on the front end; you don't notify the target in advance that you're going to do a wiretap. But that process is conditioned by the knowledge that the point of a criminal wiretap, a so-called Title III wiretap, is to gather evidence for a criminal prosecution, and that when that prosecution occurs, you're going to have discovery obligations to defense counsel. They are going to have an incentive to kick the tires pretty hard and poke everything with a stick and make sure everything was executed properly and the warrant was obtained properly, and if it wasn't, get the case thrown out.

Julian Sanchez:
That knowledge, that you've got to expect that kind of wire brush when it comes time to go to court, means that really from the outset, if you talk to people who work on getting criminal wiretap orders, they are in consultation with their lawyers, and they are talking about how are we going to do this in a way that is going to stand up in court, because if this gets thrown out, you've just wasted your time and probably a fair amount of money in the process. The fact that that doesn't exist on the FISA side, that essentially 99% of FISA orders are not intended to ever result in a criminal prosecution, are never going to result in disclosure to the target, and are effectively permanently covert, means you really don't have to worry about that. You are presenting to the FISA Court your version of "why I think there's evidence that this person is a foreign agent," and you may have cherry-picked the facts, as seems to have happened in the case of former Trump campaign advisor Carter Page, where they decided to include the inculpatory information but leave out the information that might call into doubt the theory of the case, or make it look like perhaps there's another explanation for some of these things that look incriminating on their face.

Julian Sanchez:
You're probably never going to be called into account for that, because the FISA Court is relying on your representation, and they are probably never going to hear from the target. In Page's case, they put together a very misleading argument for why he was a foreign agent.

Cindy Cohn:
I feel like a part of the problem here is that judges really do only get one side of the story. This is one of the reasons that EFF helped get passed some changes to the law, as part of the USA Freedom Act, to create another entity that could at least weigh in and help the court hear from the other side, make it a little more adversarial. But I do think the judges get captured. And one of the things we've learned now, thanks to the US Supreme Court catching it, is that the Department of Justice was not even telling criminal defendants when FISA information was used. They are supposed to be telling criminal defendants when FISA information was used, and to date, nobody who's been prosecuted, even in the public courts, on the basis of secret FISA information has ever had the access needed to figure out whether what they were told was true.

Cindy Cohn:
The Carter Page situation is really an anomaly compared to so many others...

Julian Sanchez:
Literally unique. The only case of a FISA Court application being even partly public.

Cindy Cohn:
And that didn't happen because there was a legal system to do it; that happened because of political decisions, and so nobody else is going to get that, is the point I think. People might say, "Carter Page found out that there were lies underneath his." I think that it's good to get that input, but I think it's unreasonable to expect that that's the only time that's ever happened. It's just the only time we've ever found out about it.

Danny O'Brien:
As a non-lawyer and someone who tries to avoid looking at politics almost all the time these days, could you just explain what Carter Page was and why that was different?

Julian Sanchez:
Carter Page was a foreign policy advisor to the Trump campaign who had all sorts of incredibly sketchy ties to Russia. He was actually someone who was on the radar of the FBI's New York office before he had any association with the Trump campaign; they were essentially preparing to open an investigation of him before he was announced as a Trump advisor. When he joined the campaign, this was passed on to FBI headquarters. In a sense, they were generally trying to figure out to what extent the Trump campaign was aware of, and potentially complicit in, the electoral interference operation that Russia was running on Trump's behalf, or at least against the interests of Secretary Clinton. And because of the panoply of shady connections, Carter Page became the person they thought, this is the one we can most easily target or get a warrant for. We don't want to go after the candidate himself, and at this point Page had actually left the campaign, but he was the one who seemed most likely to actually be directly connecting Russian intelligence with the campaign. The most plausible link.

Julian Sanchez:
There was a really disturbing exchange between, I think it was, Marsha Blackburn and Inspector General Horowitz from DOJ, who put out the IG report on Crossfire Hurricane that focused pretty centrally on the surveillance of Carter Page and was very critical of the many errors and omissions in that process, in particular when it came to the renewals of the surveillance of Page. And Blackburn, I think, asked this with the aim that he would say this is incredibly unusual, and therefore the only explanation for it is some sort of agenda to get Trump or political bias against Trump. She asked, how common is it for there to be this many errors and this much sloppiness in the FISA process? Is this out of the ordinary? Horowitz had to, quite candidly, say, "I just don't know. I hope not, but we've just never done this kind of individualized deep dive on a FISA application before.

Julian Sanchez:
We've done audits, but this kind of digging into the case file, not just looking at whether the facts in the application matched what was in the case file, but whether there are important facts that were left out and painted a misleading picture, we just haven't done that before. So frankly, we don't know how unusual this is." And that ought to be disturbing.

Cindy Cohn:
We do know, though, from the programmatic looks, when the Inspectors General have looked, or when the FISA Court itself has caught the Department of Justice in lies, which it has a lot, that this is really an ongoing problem. It's one of the big frustrations for us in terms of trying to bring some accountability to the mass spying. The part where the FISA Court approves a lot of things that come before it doesn't really bother me as much as the fact that the FISA Court itself continually finds out that the Department of Justice has been lying to it, doing things very differently than it has represented, and having a lot of problems, and the court always just kind of continues to say "go and sin no more," rather than actually creating any accountability or changes. And I think that that message gets received.

Danny O'Brien:
And that's sort of the point where a court like this becomes a rubber stamp because the FBI or whoever is coming to them saying, "we just want to extend this investigation. It's just the same as it normally was." Do you think that the FISA judges get captured in this way, that they just end up spending so much time listening to the intelligence services and the FBI and not hearing the other side of the story, that they just end up being overly reliant on that point of view?

Julian Sanchez:
Absolutely. That's just necessarily the case. I've heard from retired FISA judges that they would hear from government lawyers things like, "you will have blood on your hands if you don't approve this surveillance." And again, because most of this stuff is never going to be public, on one side, if you are too precious about protecting civil liberties, you have people saying there could be an attack that would kill dozens or hundreds or thousands of people; and on the other side, you're never going to be accountable for authorizing too much surveillance, because this is not designed to end in a trial. You're never going to be really grilled about why you approved this dubious electronic surveillance.

Julian Sanchez:
I would add that there is a defense that intelligence folks, and former FISC judges themselves, sometimes make of the very high approval rate, which certainly, for most of the court's history, has been extremely high, 99% plus, though not that much higher, frankly, than for ordinary Title III applications. And one of the ways they would defend this and say "we're not a rubber stamp despite this 99% approval rate" is, they would say, look, you need to understand how this process really works in practice. It's not that they just come in blind with an application that we decide. There is this back and forth where they will have a "read" application, a first draft, and they will go, not to the judges directly, but to FISA Court staff, who may be in contact with the judges, and say, this is the application we were thinking of submitting, and they'll hear back.

Julian Sanchez:
Maybe you should narrow this a little bit, maybe we would approve it for a shorter period of time, or for these people but not those people, or we would approve this if you had better support for this claim. And so there is this sort of exchange that essentially results in applications only being submitted when the FBI and DOJ know they're in a state the FISC is going to approve. Maybe they don't submit it at all if the court says, "no, this is not something we would sign off on." And just to finish this point: you would think, okay, that would explain it, but the problem is, you've created a process that is guaranteed to result in a FISA docket history that consists only of approvals. So when you get a proposed application and the court says, well, this doesn't quite meet the standard, that application doesn't actually ultimately get submitted.

Julian Sanchez:
It only gets submitted when they know the court is going to say yes, when they've refined it in such a way that the court is willing to sign off on it. The problem is then you've created a body of precedent that consists exclusively of approvals, of particular sets of facts where the balance of considerations is such that the court is going to say yes. You thus have no record of where the boundaries are, of the conditions and fact patterns under which the court will say no, so that years down the line, a judge who is looking at applications cannot say, "okay, here is our record of yeses and nos. Here's our record of what's within bounds and here's our record of what's out of bounds." You only have a history of yes, and that is very problematic. You don't have a documentary record of what previous judges have said: no, under these facts, that's a bridge too far.

Cindy Cohn:
And so that's why some of the former judges have said, look, this isn't really a court anymore; it's more like some kind of administrative agency. This is what you do if you want the FCC to approve a license: you can have this back and forth, and then you finally submit something that works. There are lots of other kinds of bodies that work that way, but courts don't. And courts don't for some good reasons.

Danny O'Brien:
All right, you've said that this court doesn't really have much oversight, but I have heard talk that there's another institution around the FISC called the FISCR. Is that just like the superlative of the FISC, or how do those relate?

Julian Sanchez:
What we really need is a FISCR. The Foreign Intelligence Surveillance Court of Review is where appeals from the Foreign Intelligence Surveillance Court go. They've sat, that we know of, maybe half a dozen times, all in the 21st century. It's possible they sat previously and we don't know about it, but five or six times that the public is aware of. And the interesting thing structurally about the FISCR is that effectively the only time they are going to hear a case is on the rare occasions when the government didn't get what it wanted.

Cindy Cohn:
I have to agree. This isn't really a way to hold the FISC accountable when it makes errors, and certainly not when it makes errors that hurt you, the people who are the subjects of surveillance. We managed, however, to get some reforms over the years. EFF played a pretty big role in getting some changes to the FISA Court as part of the USA Freedom Act. What's your view on those changes and their impact, Julian?

Julian Sanchez:
I think they've been pretty significant. The USA Freedom Act created a panel of amici, or friends of the court, who, at least in cases involving novel questions of law or technology, can be invited by the court to provide their expertise, to provide perhaps a contrary view to the government's inevitable argument for why they should have more power to surveil more broadly. And we already have cases that we know about where amici have successfully opposed proposed surveillance, or identified problems with practices by the FBI. There is, I think, a release made about a year and change ago that was essentially initiated by one of the amici, which involved the discovery that FBI agents were searching this bulk foreign surveillance database, the so-called 702 database, in a variety of improper ways, essentially taking this supposedly foreign intelligence database and routinely looking for US person information without any real connection to any national security or foreign intelligence case.

Julian Sanchez:
We are probably catching more problems than we were before. It doesn't fundamentally change the structural problems with the court, but it does, I think, make it a little bit better. It has already paid off in ways that are public, and perhaps in others that we don't know about.

Cindy Cohn:
I think so too. Honestly, we felt like the first thing we have to do is get more information out about it, so that we can make our case that Congress ought to step in and change it, because those kinds of changes take a pretty strong lift on our side if we want to try to change things, especially because the other side gets to do secret briefings to the intelligence committees.

Cindy Cohn:
The theme of this podcast is how to fix these things. Julian, what would it look like if we got this right? We need to do national security investigations; I don't think anybody would say that we're never going to do those. What would it look like if we got the role of the FISA Court right?

Julian Sanchez:
I mentioned this before, and I'm not sure it's the right idea, but it's worth putting out the possibility, which is that we don't necessarily need a FISA Court. There are other countries that just have all surveillance governed by a uniform set of rules that regular judges are handling. And you could say, applications will go to whatever jurisdiction is appropriate, to the extent that you know one. You'll use the same procedures you use any time a court that is not a special secret court has to handle classified information, which can happen in a variety of circumstances, for example, when you need to prosecute someone for a crime that involves using classified information. But assuming the FISA Court is going to stick around, I think the most important thing that can be done is just to remove the presumption of permanent covertness.

Julian Sanchez:
The amici, I think, have been very useful, but they are fundamentally a kind of crutch. They are a way of trying to partially reintroduce the kind of back-end accountability that is the norm for searches and electronic surveillance in criminal investigations. One way you could do that more directly is just by ending the presumption of permanent covertness. The idea that electronic surveillance is ultimately going to be disclosed to the target eventually is something the Supreme Court has effectively said is an essential constitutional requirement: one of the things that makes a search reasonable in Fourth Amendment terms is that, if not at the time it's conducted, then at least after the fact, the target of that surveillance or that search needs to become aware of it and have an opportunity to challenge it, and an opportunity to seek remedies if they believe they've been targeted inappropriately.

Julian Sanchez:
The idea that you can just systematically make a judgment that that's not appropriate, that that's not necessary for this entire category of surveillance targets, even in cases where they do the surveillance and they say "we were wrong, this person was not a foreign agent, we didn't find what we expected," just seems totally misguided. You can't so frivolously dispense with an essential constitutional requirement. There may be cases where you don't want to reveal the surveillance after the fact, especially if we're talking about a foreign person, someone who does not actually have Fourth Amendment rights, or cases where there are some powerful considerations that mean you should maybe for quite a while not disclose that the surveillance happened. But this shouldn't be the presumption.

Julian Sanchez:
This is something they should have to argue for in the individual case. Okay, the surveillance is done; why should you not have to tell this US person? Maybe in very many cases there will be good reasons not to, but it shouldn't be taken for granted. It should be something where they assume we will in fact have to disclose, or certainly, if it turns out we were wrong, it's very likely the court is going to make us disclose. That would, one, introduce an actual check on the back end, of people kicking the tires and having the opportunity to challenge surveillance they believe is improper. But also, on the front end, it creates the understanding on the part of the people who are submitting these applications that you cannot assume this will be secret. You can no longer assume that you won't be held accountable if you've targeted someone, especially an American, either on weak evidence or on a selective arrangement of the evidence. I think that would go a long way toward aligning incentives in a much healthier way.

Cindy Cohn:
I totally agree. Certainly, from your mouth to the Ninth Circuit's ears, because we have that very question up in EFF's case concerning national security letters, which do empower the government to request information from service providers and then carry what is essentially turning out to be an eternal gag on those companies. I completely agree with you that having a little sunshine for the public be the disinfectant for some of the problems that we've seen can be very helpful.

Cindy Cohn:
I also think that I'm not quite sure why we need a secret court hand selected by the Chief Justice of the Supreme Court to do this. Our Article three judges do handle cases involving classified information. We have a very special law called the Classified Information Procedures Act that governs that, and people are not regularly leaking classified information out of the federal courts. So I feel like it might have been reasonable in 1978 to think that that could be a problem, but now, in 2020, we have a lot of experience with regular courts handling classified information, and we don't see a problem there. We might be able to help a little bit by broadening the scope of the judges involved beyond the hand-picked ones.

Danny O'Brien:
Isn't this also part and parcel of fixing all the problems around the FISA Court, reforming the classification process? Because I think that something you've identified, Julian, is this dark black-ops world of government where the default is to classify information, as opposed to the rest of government, which has the presumption that it should be exposed to public review. And we've got this creeping movement, particularly around surveillance, where the presumption is classification. And there's no external way of challenging that. The same people who want to conduct these programs are also the people who determine whether they are secret or not.

Julian Sanchez:
I think that's absolutely right, and it's one of the reasons I think the FISA Court has only the appearance of a regular court. You always hear, when people criticize the FISA Court, they say, "these are regular Article three judges." But in a lot of ways, it is sort of a Potemkin court, because it is a court with a lot of the trappings but divorced from the larger context that gives us some reason to have confidence in the output of the legal process. Which is to say, these are Article three judges, but normally Article three judges do not exist in a vacuum. They exist in a context of higher courts who will be reviewing their decisions and hearing arguments from whoever lost the case that you ruled on, and who may issue a bench slap, may overturn your ruling in a perhaps gentle or perhaps somewhat scathing way.

Julian Sanchez:
You have the knowledge that this is something that advocacy groups are going to look at and write about, that the legal community is going to write law review articles about, that you may find your peers and colleagues in the legal community, not making fun of you exactly, but putting the genteel law journal version of a "kick me" sign on your back if you write something that's not very well thought out. So you remove all of that context: you remove the review from above, the review, in a sense, by a larger community, and you remove a lot of the incentives for decisions to be effectively high quality.

Danny O'Brien:
Can I just quickly ask, what's an Article three judge? What does that mean?

Julian Sanchez:
Article three of the Constitution establishes the judicial branch so these are judges who are part of the judicial branch of the American government as laid out in Article three of the Constitution.

Danny O'Brien:
Right. As opposed to FISA, which is really part of the executive almost?

Cindy Cohn:
Article three judges, as Julian said, are judges who are appointed by the President and confirmed by the Senate in accordance with the way the Constitution creates the judiciary. There are lots of other people in our world who get called judge but aren't Article three judges: the magistrate judges, who handle a lot of matters for Article three judges; immigration judges; lots of people.

Danny O'Brien:
Judge Judy.

Cindy Cohn:
Judge Judy. Well, she's a state court judge. But TV judges. Lots of people get called judges, and so when people like Julian and I say Article three judges, we mean judges who were selected by the President and confirmed by the Senate in accordance with the processes that have developed out of Article three. Article three of the Constitution doesn't actually lay all of that out, but that's the process. It's to distinguish them from other kinds of judges, and the FISA Court is made up of judges who have been approved under Article three; it's just a subset of those, handpicked by the Chief Justice of the US Supreme Court to serve on it. And for a long, long time, the Chief Justice would generally only pick judges who lived on the eastern side of the country. There were very, very few judges from the Ninth Circuit, which is where we are out here in California. And the theory was, what if they have to get on their horse and drive to DC to look at secret things. We made fun of that, and so did a lot of other people, pointing out that there are ways you don't physically have to be in DC and can still review classified information, because the FBI does it all the time. We finally have one judge from the Ninth Circuit who is on the FISA Court.

Julian Sanchez:
Although by statute, I think there is a kind of minimum number of FISA Court judges who have to live within, I forget the distance, but it's 30 miles of DC or something like that. But it is a very unusual structure. That is to say, I think it's basically unique: this is a court with 11 judges, all of whom were chosen by one person, John Roberts. And you can say, "they are all people who have been at least approved by the Senate and confirmed to their regular posts," but the composition of the panel is important. They don't usually sit as a panel; they usually, individually, take turns hearing cases. But there is a lot of social science research showing that essentially your peer group matters. If you have a bench that is composed of, let's say, Democratic appointees and Republican appointees, then if the majority of judges are conservative, liberal judges on that bench will tend to vote more like conservatives, and vice versa.

Julian Sanchez:
A conservative, or at least someone who started as a conservative, with a bunch of Democratic appointees as their peers will come to vote more and more like a liberal, and indeed may vote more liberally than an initially liberal judge with a majority peer group of conservatives. So the fact that you have people chosen essentially by one person means the bench is probably not particularly ideologically diverse, or diverse in perspective. I know a lot more former prosecutors than former defense attorneys get picked for the FISC; that's probably true for the judiciary in general. It does mean you have not just all the structural reasons that the court is going to be disposed to be deferential to the government, but also a selection bias in the composition of the court, to the extent that John Roberts is favorably disposed toward granting the government this kind of authority and chooses people whose perspectives he finds congenial. You're going to have a body that probably does not have a lot of very staunch civil libertarians on it.

Cindy Cohn:
One of the things that we did, as part of helping push for this amicus role, was to include technical people among the kinds of people who can help the judges, because one of the things we saw, after Mr. Snowden revealed a lot of the spying and the government unilaterally made some of these decisions public, is that they were not nearly as well reasoned as we had hoped. And some of that may be because the judges don't have the kind of help they need to do this, because of the secrecy and the limitations on access to classified information.

Cindy Cohn:
We were able to get the amicus rule to include not just lawyers but also technical people. But I feel like at that point it's kind of too late. One of the things that I think would, frankly, help, and this isn't just the FISA Court, I think all courts would do a better job with technical issues if they had more resources to explain how the tech works for them. I think that especially in the kinds of situations around mass spying, which is where we started and where we spend a lot of EFF's energy anyway, these are complex systems, and if you're doing a legal analysis about how people are targeted and how targeted information is collected, you have to understand how the technology works.

Julian Sanchez:
There are some specific rulings related to the bulk metadata collection, both the telephone records collection under 215 and that prior Internet metadata ruling. Looking back on some of these that eventually became public, the court is effectively saying, well, there's a ruling from the late '70s, Smith v. Maryland, that says telephone records are not protected by the Fourth Amendment. You don't have a Fourth Amendment right against your telephone records being obtained by the government, because you've essentially turned over this information voluntarily, and this is information the company keeps as a matter of course in its own business records. The FISA Court effectively reads that as: communications metadata is not protected. Again, the opinions that have been released are fairly heavily redacted, but there doesn't appear to be anywhere, in okaying this kind of very broad collection that doesn't require particularized warrants based on probable cause, where anyone spoke up and said, well, Internet communication does not work like the old phone system.

Julian Sanchez:
All this traffic that is occurring over the network: when you send an email, Comcast does not keep a business record of what emails you sent. Maybe your employer or your email provider has a record like that, but Comcast, as a backbone provider, doesn't have that as a business record you can routinely obtain. You are collecting information that is, as far as the backbone provider is concerned, just content, as much as the content of the email itself or the content of a phone conversation would be content. So there is this way in which the technological difference between how the phone network works and how packet-switched networks like the Internet work is pretty clearly, directly material to whether this important precedent applies. And if this precedent doesn't apply, it makes a huge difference, because it means what you're doing is essentially collection of content that is protected by the Fourth Amendment, as opposed to collection of some kind of business record that, under this unfortunate precedent, is not.
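
To make the distinction concrete, here is a minimal sketch of the two situations Julian is contrasting. The record layouts are purely hypothetical, illustrative of the idea rather than any carrier's actual schema:

```python
# Illustrative only: hypothetical record layouts, not any carrier's real schema.
from dataclasses import dataclass

@dataclass
class CallDetailRecord:
    """A phone company's routine business record (the Smith v. Maryland setting):
    the metadata exists on its own, with no content attached."""
    caller: str          # originating number
    callee: str          # dialed number
    start_time: str
    duration_secs: int

@dataclass
class BackbonePacket:
    """What a backbone provider actually handles: headers and payload
    travel together in one stream."""
    src_ip: str
    dst_ip: str
    payload: bytes       # the email or chat text itself, i.e. content

def extract_metadata(packet: BackbonePacket) -> tuple[str, str]:
    # To get "metadata" off a packet-switched backbone, you first have to
    # capture the whole packet, content included. That is the technological
    # difference Julian argues the court never engaged with.
    return (packet.src_ip, packet.dst_ip)
```

There is no pre-existing business record to subpoena on the backbone; the metadata only exists inside the same stream that carries the content.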

Julian Sanchez:
And it's not that you can't imagine some kind of potential argument they would make about this, but what's disturbing is that it didn't even look like the court had considered this. The court had not even factored in that there's actually this technological difference that calls into question whether this is the appropriate precedent. And it's one thing to say they made a decision about that that I don't approve of, but it's another thing to say they have not even factored this in. They are not even questioning whether this technological difference makes an important legal difference, because they don't seem to be even cognizant that these two networks operate in very different ways.

Cindy Cohn:
I'm a huge fan of metaphors, but sometimes you read these decisions and you realize that the court actually didn't go beyond the metaphor level to figure out whether that's actually what's going on, and just because there are similarities between phone networks and the way emails work doesn't mean that they are actually the same. I wanted to just summarize some of the ideas we've had because, again, we're trying to fix things here. One fix we have talked through, getting rid of the secret court altogether and letting the regular courts handle these cases, is definitely worth thinking about.

Cindy Cohn:
Another is that all of the court's decisions and the material presented to the court would eventually be made public, and that the burden is on the government to say why they shouldn't be. There is certainly stuff that can be redacted if you need to protect people's personal privacy, but the government needs to demonstrate why these things should stay secret, and I would argue they need to do that periodically; it's not just one and done and then it stays secret forever.

Cindy Cohn:
I think we've talked a little bit about making sure that the judges are chosen differently. The choice by the Chief Justice alone causes real dangers and hazards in the ability of the court, over time, to really hold the government to its word and make the government do its work. I certainly think, personally, and I won't put words in your mouth, that the role of the amici is small but mighty and needs to get bigger, so that the court really does have something, especially in cases ... one of the things that we've lost is the adversarial process at the end that we have in the case of regular warrants. If we're not going to have that adversarial process at the end, when we decide whether the evidence is admissible, we need to have more of an adversarial process in the beginning, so that there is more of a shakeout of what they get to do at the start, since there isn't going to be one at the end.

Danny O'Brien:
We have this to-do list of what to fix, and I'm taking notes. We also wanted to try and imagine what this better world would look like if we did manage to fix the Internet. But I want to narrow this down a bit. Julian, you're a journalist who writes about a secret court and has to do the research to try to map out what's going on there. If we did fix this process, how would your job change? What could you imagine writing about and presenting to the public that maybe you can't, or struggle to explain, in the current situation?

Julian Sanchez:
It's already changed significantly. Again, for decades there were basically no FISA Court opinions that were public. Then there were a very tiny handful, and now there are dozens of public FISC opinions since the passage of USA Freedom. It's possible to talk concretely about what the FISA Court says on a range of complicated questions, as opposed to merely speculating about the different ways a court might interpret a statute that is, again, often not super clear, because it was written before the technologies it now applies to existed. But certainly to have a more adversarial back end would open up, I think, the possibility of evaluating how often they essentially get it right. We just have no sense currently of how often electronic surveillance approved by the FISC is actually generating intelligence useful enough to justify the intrusion.

Julian Sanchez:
We don't authorize wiretaps to catch jaywalkers, as a rule. There is a list of fairly serious crimes that are eligible for wiretaps. But in the FISA case, you have a number of definitions of foreign intelligence. FISA orders have to be geared toward collecting foreign intelligence information, and a lot of that rather complex, multi-part definition is the kind of thing you would expect, threats to the national security of the United States, but one of the rather broader ones is information that is relevant to the conduct of the foreign affairs of the United States. And so when you're looking back and saying, did we get anything worthwhile out of this, there's a whole lot of communications between people who are not terrorists or spies or criminals that, if they are business people or government officials, or talking to business people or government officials, might well in some sense be relevant to the conduct of foreign affairs of the United States. And you don't have what you have on the criminal side, under Title III of the Omnibus Crime Control Act of 1968; ordinary criminal wiretaps are sometimes called Title III orders.

Julian Sanchez:
In that case, at least, you can say: you did the wiretap; what percentage of the wiretap orders you got resulted in a prosecution; how many of those resulted in convictions? To the extent that you did a wiretap and then convicted someone of a fairly serious crime, you have at least a sense that it was not completely frivolous, that you didn't just invade people's privacy for no reason. We don't have anything like that on the FISA side, really. Surveillance ends, and then 99% of the time there is no prosecution. That's not the point of FISA or of foreign intelligence surveillance. But okay, they stopped wiretapping someone at some point. Did they get it right? Did they get it wrong? Was the information in the application a fair representation of the facts available? Were they diligent about trying to present a complete picture to the court, or did they only present what supported their desired results? That's all a perspective that we'd be much more likely to have if, effectively, people who were surveilled but ultimately weren't doing anything wrong had the ability to drag that into the light.

Cindy Cohn:
Julian's point is really well taken. One of the things we've seen, when we've lifted up the cover a little bit on some of these FISA Court investigations, is how little they get out of some of them. Certainly, in the context of Section 215, which is the mass telephone records collection, at the end of the day there was one prosecution, against a Somali guy who was sending money home. That was the only one where the FISA evidence was used. And then the Ninth Circuit just ruled in this case, which is called Moalin, a couple of weeks ago, that frankly the government was overstating how much the FISA Court information was being used, and essentially was misleading Congress and the American people about its usefulness even in the very one case left standing.

Julian Sanchez:
There is absolutely a pattern we see. When the warrantless wiretapping component of Stellar Wind was first disclosed, we were told it had saved thousands of lives, that it was absolutely essential in preventing terrorist attacks. And then years later the inspectors general of the various intelligence agencies put out a report that said: actually, we dug into this and we talked to the officials, and they really could not come up with a concrete case of an intelligence success that depended on this warrantless surveillance that was part of Stellar Wind. With the metadata program, after the Snowden disclosures we heard, "no, no, there are so many cases where terrorist plans have been disrupted as a result of this sort of surveillance." And then again a little bit later, not quite as long after the fact in that case, happily, we got two different independent panels, the Privacy and Civil Liberties Oversight Board and a handpicked presidential committee, looking at this and concluding fairly quickly that no, that wasn't true. In fact, they just couldn't identify any cases where unique intelligence of operational value was derived from this frankly enormous intrusion on the communications privacy of American citizens. In the rare cases where some useful information was passed on, it was effectively duplicative of information that the FBI already had under traditional, lawful, targeted orders for a particular person's records.

Cindy Cohn:
That takes me to the last one on our list of things that would be great if we fixed the FISA Court, which is some real accountability for the people who are affected by what happens there. And I appreciate the inspectors general; they have done some good work uncovering the problems. But that's just not the same as really empowering the people affected to have standing, whether it's in a secret court or a regular court, and to be able to say: this information has come out that I was spied on, and I want to have some recompense. There's a whole set of legal doctrines that are currently boulders in our way to getting that kind of relief in our NSA spying cases, and I think some more clarity about the FISA Court, and some more reforms of it, would really help get them out of the way.

Danny O'Brien:
So this is: "See you in court, in a court that I can see."

Cindy Cohn:
Exactly.

Julian Sanchez:
Exactly.

Danny O'Brien:
Julian, thank you so much for taking us through all of this. I look forward to your weekly column explaining exactly what happened every day in a new reformed FISA Court and look forward to seeing you on the Internet too.

Julian Sanchez:
I am always there.

Cindy Cohn:
Thank you so much, Julian. We really appreciate you joining us and your willingness to get as wonky as we do is greatly, greatly appreciated over here at EFF, not just on this podcast, but all the time.

Julian Sanchez:
Thank you so much for having me. I look forward to catching up with you guys when we can get on planes again.

Cindy Cohn:
Wow, that was really a fun interview. And boy, we went deep in that one.

Danny O'Brien:
I like it. I like it when you folks get nerdy on the laws.

Cindy Cohn:
The thing about the secret court is, even though you can get pretty wonky about it, everyone is impacted by what this court does. This court approved tapping into the Internet backbone. It approved the mass collection of phone records. And it approved the mass collection of Internet metadata. Two of those three programs have been stopped now, but they weren't stopped by the court; they were stopped by Congress or by the government itself deciding that it didn't want to go forward with them.

Danny O'Brien:
After those things were made public, even though this whole system was designed to keep them secret.

Cindy Cohn:
Right. It took them going public before we were even able to get to the place where we saw that the court had approved a bunch of things that I think most Americans didn't want. And clearly Congress stopped two of the three of them and we're working on the third.

Danny O'Brien:
I do feel like I'm honing a talking point here, and it's the contradiction in "Foreign Intelligence Surveillance Court." It's not really a court, because there aren't two parties arguing; it's just one, effectively. It's not really about foreign data, because its brief has expanded to these programs that are taking place on US soil and can scoop up US persons' information. And I'm not going to say it's not intelligent, but it doesn't have the technical insider advice and intelligence that allows it to make the really right decisions about changing technology. That just leaves surveillance in its title. That's the only thing that's true about this name.

Cindy Cohn:
It is the surveillance court. I think that's certainly true, and I agree with you about the intelligence, that basically this court really isn't equipped to be doing the kinds of evaluations that it needs to be able to do in order to protect our rights.

Danny O'Brien:
Not without help. I mean, I think getting an amicus role into this, getting assistance, getting what Julian described as this ecosystem, this infrastructure or superstructure of justice around it, is the important thing.

Cindy Cohn:
And that's the thing that became so clear in the conversation with Julian: just how fixable this is. The list is not very long, and it's pretty straightforward about what we might need to bring this into something that has accountability and fixes some of the problems. That's really great, since the whole thing we're trying to do with this podcast is figure out how you fix things. And I think it's pretty clear that if we really do want to fix the Internet, we also need to fix, as a piece of that, the FISA Court.

Danny O'Brien:
We'll both, after we finish recording here, go off and do that. And if you'd like to know more about that particular work that we do when we're not in the studio, you can go to EFF.org/podcast, where we have links to EFF blog posts and work, but also full transcripts, links to the relevant court cases, other background info on this podcast, bios on our amazing guests, and ways to subscribe to How to Fix the Internet so you won't miss our next exciting episode.

Danny O'Brien:
Thanks for listening in and we'll see you next time.

Danny O'Brien:
Thanks again for joining us. If you'd like to support the Electronic Frontier Foundation, here are three things you can do today. One, you can hit subscribe in your podcast player of choice and if you have time, please leave a review, it helps more people find us. Two, please share on social media and with your friends and family. Three, please visit EFF.org/podcast where you will find more episodes, learn about these issues and donate to become a member and lots more.

Danny O'Brien:
Members are the only reason we can do this work. Plus you can get cool stuff like an EFF hat or an EFF hoodie or even a camera cover for your laptop. Thanks once again for joining us and if you have any feedback on this episode, please email podcast@eff.org. We do read every email. This podcast was produced by the Electronic Frontier Foundation with help from Stuga Studios. Music by Nat Keefe of BeatMower.


This work is licensed under a Creative Commons Attribution 4.0 International License.

rainey Reitman

Podcast Episode: Why Does My Internet Suck?

2 weeks 1 day ago
Episode 002 of EFF’s How to Fix the Internet

Gigi Sohn joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss broadband access in the United States – or the lack thereof. Gigi explains the choices American policymakers and tech companies made that have caused millions to lack access to reliable broadband, and what steps we need to take to fix the problem now. 

In this episode you’ll learn:

  • How the FCC defines who has broadband Internet and why that definition makes no sense in 2020;
  • How many other countries adopted policies that either incentivized competition among Internet providers or invested in government infrastructure for Internet services, while the United States did neither, leading to much of the country having only one or two Internet service providers, high costs, and poor quality Internet service;
  • Why companies like AT&T and Verizon aren’t investing in fiber;
  • How the FCC uses a law about telephone regulation to assert authority over regulating broadband access, and how the 1996 Telecommunications Act granted the FCC permission to forbear – or not apply – certain parts of that law;
  • How 19 states in the U.S. have bans or limitations on municipal broadband, and why repealing those bans is key to increasing broadband access;
  • How Internet access is connected to issues of equity, upward mobility, and job accessibility, as well as related issues of racial justice, citizen journalism and police accountability;
  • Specific suggestions and reforms, including emergency subsidies and a major investment in infrastructure, that could help turn this situation around.

Gigi is a Distinguished Fellow at the Georgetown Law Institute for Technology Law & Policy and a Benton Senior Fellow and Public Advocate.  She is one of the nation’s leading public advocates for open, affordable and democratic communications networks. From 2013-2016, Gigi was Counselor to the former Chairman of the Federal Communications Commission, Tom Wheeler. She advised the Chairman on a wide range of Internet, telecommunications and media issues, representing him and the FCC in a variety of public forums around the country as well as serving as the primary liaison between the Chairman’s office and outside stakeholders. From 2001-2013, Gigi served as the Co-Founder and CEO of Public Knowledge, a leading telecommunications, media and technology policy advocacy organization. She was previously a Project Specialist in the Ford Foundation’s Media, Arts and Culture unit and Executive Director of the Media Access Project, a public interest law firm. You can find Gigi on her own podcast, Tech on the Rocks, or you can find her on Twitter at @GigiBSohn.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well a full transcript of the audio.

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify or your podcast player of choice. You can also find this episode as an MP3 on the Internet Archive. If you have any feedback on this episode, please email podcast@eff.org.

Resources

  • Current State of Broadband
  • Fiber
  • ISP Anti-Competitive Practices & Broadband Policy
  • Net Neutrality
  • Other

Transcript of Episode 002: Why Does My Internet Suck?

Danny O'Brien:
Welcome to How to Fix the Internet with the Electronic Frontier Foundation, a podcast that explores some of the biggest problems we face online right now, problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of Internet law.

Cindy Cohn:
Hi, everyone. I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation, and, for purposes of this podcast, I'm also a lawyer.

Danny O'Brien:
And I'm Danny O'Brien, and I work at the EFF, too, although they have yet to notice I'm not actually a lawyer. Welcome to How to Fix the Internet, a podcast that explores some of the more pressing problems facing the Internet today, and solves them, right then and there.

Cindy Cohn:
Well, or at least we're hoping to point the way to a better future with the help of some experts who can guide us and, sometimes, challenge our thinking.

Danny O'Brien:
This episode, we're tackling a problem that has been a blatant issue for years here in the United States, and yet no one seems able to fix. Namely, why does my broadband connectivity suck? Cindy, I live in San Francisco, supposedly the beating heart of the digital revolution, but I'm stuck with a slow and expensive connection. My video calls look like I'm filming them with a potato. What went wrong?

Cindy Cohn:
Well, maybe take the potato away, Danny. But, you know, it's a recurrent complaint that the home of the Internet, the United States, has some of the worst bandwidth and the highest costs in the developed world. And that's a problem that our guest today has been tackling for much of her career.

Cindy Cohn:
Gigi Sohn is one of the nation's leading advocates for open, affordable, and democratic communications networks. She is currently a distinguished fellow at the Georgetown Law Institute for Technology Law and Policy. Previously, she was counselor to the chairman of the Federal Communications Commission, and she co-founded and led the nonprofit Public Knowledge for 12 years. And I'm proud to say that she's currently a member of EFF's board of directors.

Danny O'Brien:
Welcome, Gigi. When we talk about broadband policy, what we're really talking about is fast Internet, home and business Internet that's speedy enough to do what we need to do these days online. Yet, I was looking, and the FCC, the regulator in charge of such things in the U.S., defines broadband as 25 megabits per second down and 3 megabits up. That seems a little low to me.

Gigi Sohn:
Yes, it is very slow. But before I start on my rant and rave, I just want to say how delighted I am to be with you guys today. Very socially distant, 3000 miles away, but also how proud I am to serve on EFF's board, so thank you, Cindy, for asking me to do that, and I love being part of this organization.

Gigi Sohn:
So, yes, 25 megabits per second down, three up. That is the definition that was set in 2014, when I worked at the FCC. And now we are in 2020, in the middle of a pandemic, and it is quite clear that if you, like me, have three people working from home, at least two of us on Zoom calls at the same time and another doing her homework, then 25 megabits per second down and, particularly, three up, and nobody ever focuses on the upload speed, is just wholly inadequate.

Gigi Sohn:
So, let me tell you a story. Up until about six weeks ago, I had 75 megabits per second symmetrical at the low, low price of $80 a month. I called my broadband ISP, Verizon, and I said, "There's three of us in the house and we're all working at the same time. I need 200 megabits per second symmetrical for an extra $30 a month." And the tech told me the truth and said, "Yeah, 75 symmetrical, that's not enough for three of you."

Gigi Sohn:
So, that'll tell you a bit about how outdated the FCC's definition of broadband is, when a company representative is telling you that 75 megabits per second symmetrical isn't enough for just three people.
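
A quick back-of-the-envelope calculation makes the point. The per-call bitrates below are rough assumptions for HD video conferencing, not official figures, but even so the upload side of the FCC's 25/3 definition is swamped immediately:

```python
# Rough, illustrative bandwidth budget for a household of three on
# simultaneous video calls. Per-call rates are ballpark assumptions.
PER_CALL_DOWN_MBPS = 2.5   # assumed downstream per HD video call
PER_CALL_UP_MBPS = 3.0     # assumed upstream per HD video call

calls = 3
need_down = calls * PER_CALL_DOWN_MBPS   # 7.5 Mbps down
need_up = calls * PER_CALL_UP_MBPS       # 9.0 Mbps up

# Against the FCC's 25/3 "broadband" definition:
print(need_down <= 25)  # True  -- downstream squeaks by
print(need_up <= 3)     # False -- upstream needs ~3x the defined minimum
```

And that budget ignores everything else a household does at the same time, like cloud backups, screen sharing, and software updates, all of which draw on the same scarce upstream.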

Danny O'Brien:
And I mean, what's crazy to me, and we're going to be talking in this show primarily about the United States experience, but I use what bandwidth I have to talk to people in the rest of the world, and it seems most countries, or a lot of countries, I should say, have far better connectivity at a far lower price. So, it seems crazy that the United States, which is certainly one of the origins of the Internet, has struggled to provide that Internet to its own citizens.

Gigi Sohn:
Well, I think there's a very simple explanation for that. Other countries have either, like South Korea, made a major investment in broadband. They consider it infrastructure. They consider it, if not a public utility, like a public utility. Or, in places like England, the policy permits great competition. And we have neither.

Gigi Sohn:
The investment that this government has made in our infrastructure, in our broadband infrastructure, has been nominal. Now, there are some proposals out there, which I'm happy to talk about, to up that number considerably. But perhaps even more importantly, under the policy that we had, which promoted competition in the narrowband world, in the dial-up world of the late 90s and the early 00s, the average American had access to an average of 13 different ISPs. Today, you're lucky if you've got two.

Gigi Sohn:
It does amaze me how little competition there is in San Francisco. So, there's a recent study out from a group called the Institute for Local Self-Reliance, and it showed that nearly 50 million Americans have a choice of only one broadband provider, and that's using the FCC's really lousy data, which grossly overstates who has access to broadband. It showed that Comcast and Charter, the two largest cable companies, have a monopoly over 47 million Americans, and that another 33 million on top of that have only digital subscriber line, or DSL, which is not even 25/3 most of the time, as their competitive choice.

Gigi Sohn:
So, because we got rid of policies that promoted competition, we now have a series of regional monopolies, and they can charge what they want. And they can serve who they want.

Cindy Cohn:
So, how did we get here, Gigi? How did we end up with this lack of choice in the United States?

Gigi Sohn:
I think it's two reasons. Again, we let the private sector take over what is essentially public infrastructure. The government said, this was Democrats and Republicans, this is not partisan, "We should let the free market, so to speak, flourish. We should let the market flourish."

Gigi Sohn:
And for a while there, again in the late 90s and early 00s, it did. But then the FCC deregulated broadband and eliminated the requirement that dominant telecom providers in a community had to open up their networks to competitors. And that was the beginning of the end. Before that, we had a choice of 13 dial-up ISPs per American. But as soon as the FCC said, "No, no, no, broadband Internet access is something different than dial-up. It's different than phone service. We're going to deregulate it and we're not going to subject it to that requirement that the dominant provider open up their networks," that's when the entire competitive ISP industry shrunk to nothing.

Danny O'Brien:
So, I remember the transition from dial-up. Dial-up was slow, but we had competition, so you had all these mom-and-pop ISPs, and you could pick which one you wanted to use just by calling a different number. And then there was DSL, and DSL was provided by the phone companies. Correct me if I'm getting this wrong [crosstalk 00:08:18]

Gigi Sohn:
Correct.

Danny O'Brien:
But down the copper wire. And that was sort of competing with cable, which had already laid its wires and could provide something a little faster.

Gigi Sohn:
Not exactly. So, DSL came first, and the Federal Communications Commission, which regulated DSL, considered it just like telephone service. It did come over the same copper wire, and they regulated it like telephone service and, again, required the AT&Ts and the Verizons of the world to open up their networks to competitors. This was a result of the 1996 Telecommunications Act, which is much derided, but I believe it was actually quite an excellent piece of legislation, though it really has almost no force and effect anymore.

Gigi Sohn:
Then cable modem service came along afterwards, and the cable industry went to the FCC and asked it to declare how it should be regulated. Should it be regulated like DSL, or should it be regulated like something else? Or unregulated, or deregulated? The FCC decided, this was in 2002, that cable modem service should be deregulated, not subject to the same requirements as DSL.

Gigi Sohn:
That case went all the way up to the Supreme Court, which said, "Well, we don't think the FCC's reading of the Communications Act of 1934, which is its organic statute, the statute that it is required to follow, is the best. But because it's the expert agency, they get deference." So, the FCC won, and then the FCC said, "Well, if we're not going to regulate cable modem service like a telephone service, we're certainly not going to regulate DSL that way, and we're certainly not going to regulate mobile broadband, or mobile wireless that way."

Gigi Sohn:
So, that's when, in 2002, well, 2005 really, after this Brand X decision came out of the Supreme Court, everything came tumbling down, and this so-called free market in broadband was allowed to reign. And what you got, again under both Democrats and Republicans, was intense consolidation, regional monopolies. And guess what happens with concentration and monopoly? High prices. We have some of the highest broadband prices in the world. We average about $79 a month for broadband, and again, that's for the crummy broadband.

Danny O'Brien:
Yeah, I do remember that it was specifically around about this time, around 2005, when connectivity began to really suck here on the West Coast. Before that, there were competitors in copper-wire DSL; COVAD and Sonic were two of the challengers here on the West Coast. But after that decision by the FCC, they really seemed to struggle to compete with AT&T, the local phone incumbent whose wires they were using.

Danny O'Brien:
Still, that series of decisions you described did leave cable and the phone companies sort of dueling with each other. Why wasn't that enough to bring competition to the next stage of broadband?

Gigi Sohn:
They're not dueling, because the phone companies have been punished when they've invested in fiber. Right? So, when Verizon Fios came to market, everybody was really excited, and Wall Street just pummeled Verizon's stock price. So, for all intents and purposes, Verizon Fios is not expanding. It's in really very limited areas. I don't know if you can get it out there in San Francisco, but in a lot of places, you cannot.

Gigi Sohn:
Similarly, AT&T, I think not wanting to follow Verizon's lead, hasn't invested in fiber either, and those two companies are far more interested in building out their mobile wireless capacity than they are in building their wireline fiber capacity. So, that's why you don't see much of Verizon Fios or of AT&T's U-verse, which is the name of their fiber offering and which, again, is very limited. And by the way, AT&T still offers DSL in a lot of places, particularly in inner cities.

Gigi Sohn:
That's why you don't see a lot of competition between the two of them. And really, that was the thinking behind the Telecommunications Act of 1996, was that you were going to have this kind of fervent competition between the cable companies and the telephone companies, and you would have fervent competition between cable companies themselves.

Gigi Sohn:
But what these companies did, good for their bottom line, was basically split up the country into different regions and become monopolies. And as I said, AT&T and Verizon, I think if they could sell off their fiber, they'd do it in a heartbeat and just focus on mobile.

Cindy Cohn:
So, Gigi, how do we break up this situation where we're stuck with a duopoly? And how does this conversation fit in with the ongoing, very public fights around network neutrality?

Gigi Sohn:
Yeah, so the first thing that the FCC needs to do, if we have a new FCC, is restore its authority to promote competition in the broadband market. And look, I'm glad about how many people know about net neutrality. My 15-year-old daughter and all her classmates know about net neutrality. My 86-year-old mother knows about net neutrality, and my relatives know about it.

Gigi Sohn:
But net neutrality, in my mind, is less about ISPs blocking and throttling and discriminating against traffic. Obviously that's something we really, really want to prevent. But it's more about is there somebody, is there a government agency that is overseeing an industry that is highly concentrated, that controls an incredibly essential resource, and that, without anybody to oversee them, is free to charge whatever they want and free to do whatever they want.

Cindy Cohn:
One thing that really shifted things for me was the 2014 DC Circuit decision that rejected the prior legal basis that the FCC was relying on to do network neutrality. As part of that, the DC Circuit told the agency that it couldn't even pass rules to target abuses by the ISPs. So, as a result of that decision, the FCC couldn't stop ISPs from blocking, it couldn't stop them from discriminating among applications, favoring its own or making a pay-to-play scheme, and it couldn't stop special access fees. This meant that we really weren't going to get a market correction here, and we had to do something. And ultimately, what we did was the Open Internet Order.

Gigi Sohn:
Yeah. Look, here's the problem. The part of the Communications Act, what is known colloquially as Title II, or Chapter II, in plain English, right now, is all the FCC has to assert its regulatory authority over broadband. Now, should Congress pass a new chapter, a new title, that really is just super focused on broadband? Yeah, I think that would be a great idea. But we don't have that right now.

Gigi Sohn:
And that's why, when I was at the FCC in 2015, we reversed that 2002 decision that I talked about some time ago, and said, "No, no. We're going to regulate broadband like a telephone service," although not entirely like a telephone service. And this is where it gets a little complicated. Because obviously, a law that was written in 1934, every jot and tittle shouldn't necessarily apply to broadband.

Gigi Sohn:
But the good news is, in that same 1996 Telecommunications Act, the FCC was given permission to forbear, or not apply, parts of Title II that it didn't believe to be in the public interest. So, what we did was say, "Look, the only game in town for us to protect consumers and promote competition," and this is really important, and I'll talk about that in a minute, "is Title II." But I think we didn't apply 75% or 80% of the Title II provisions, because they didn't make sense to apply to broadband.

Cindy Cohn:
I know. I remember when that fight was going on and our activism team was like, "Title II plus Forbearance." Doesn't really lend itself to a slogan or something we could put on T-shirts or anything. But it really was a way that I think, and you were inside the FCC at this time, a way to really ensure that we were able to think about regulating broadband in a way that was consistent with how broadband is, that we weren't straitjacketed into things. I mean, the whole thing would be better if Congress actually just did its job and thought about how to regulate broadband.

Danny O'Brien:
I feel like a lot of the theme of our conversations about fixing the Internet is that the most obvious solution is somehow blocked in some way, because, given that it's so obvious, why don't we do it? And looking at the fights that have gone on about broadband, regulation and encouraging competition, the obvious thing to do is not to have a law written in 1996 based on a law written in 1934, but to write a new one.

Danny O'Brien:
And it just so happens, in the United States, that Congress is so dysfunctional right now that we can't do that. So, what other, sneakier, skunkworksy kinds of routes can we take to fix this?

Gigi Sohn:
Well, look, the fact of the matter is if we're going to close the digital divide in this country, it's not just about fast broadband, Danny. It's about over 140 million Americans that don't have broadband, either because they don't have any infrastructure or because they can't afford it. It's important to note that the affordability problem is far larger, like 2.5X larger, than the infrastructure problem.

Gigi Sohn:
So, at a time like today, like now, during this pandemic, where the only way you can work and your kids can learn, and you can communicate with others in a safe way is through the Internet, we've got to deal with the problem at hand, and that's the affordability problem. And that is not going to get solved by the private market.

Gigi Sohn:
What's interesting is, right now, you're seeing both the wireline and the wireless companies going to Congress and saying, "Can you provide a $50 a month credit for broadband for low-income Americans?" And they're finally admitting two things. Number one is that government must have a role, and they hate that, right? Because it's all about the "free market" for them. And number two is that they cannot close the digital divide themselves. They've been boasting about how they're providing broadband free during the pandemic, that they're not cutting people off and not charging late fees.

Gigi Sohn:
But the latest numbers I've seen are that, in the first two quarters of 2020, only 2.4 million people took up broadband that didn't have it before. And that doesn't necessarily mean they're low income. That still leaves... I testified in front of Congress that 141 million Americans don't have broadband, because of either affordability or infrastructure. Microsoft estimates 162 million, almost 50% of Americans. Okay?

Gigi Sohn:
So, we're talking about a huge gap, and if all they've signed up since the beginning of the pandemic is 2.4 million, industry is not moving the needle. So, that takes us to who's going to fill that gap. It's got to be government, and it would certainly help if the 19 states that have prohibited their communities from building their own broadband networks repealed those laws.
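
For scale, a one-line calculation with the figures Gigi just cited shows how little of that gap the pandemic signups closed:

```python
# How far did 2.4M pandemic-era signups go against the broadband gap?
gap_low, gap_high = 141_000_000, 162_000_000  # testimony vs. Microsoft estimate
new_signups = 2_400_000                       # first two quarters of 2020

print(f"{new_signups / gap_low:.1%}")   # ~1.7% of the low estimate
print(f"{new_signups / gap_high:.1%}")  # ~1.5% of the high estimate
```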

Danny O'Brien:
Wait, wait. Back up a bit, because I want to get this down. When I said there must be someone else if the federal government isn't doing this, I was coughing under my breath and pointing out that the states could do it, or the cities; we've had rumblings in San Francisco for many years that San Francisco might build out its own broadband. But you're saying that the states actually prohibit cities from creating their own competition.

Gigi Sohn:
Yeah, so 19 states either totally ban local communities from building their own broadband networks or limit them in some way, put hurdles in front of them. So, for example, in Colorado, if you're a local community and you want to build a broadband network in that community, you have to have a ballot initiative. Now, as it turns out, something like 70 Colorado communities have done that, but think about if you're a low-income community. It's expensive to have a ballot initiative, and who are you fighting? You're fighting the resources of a Comcast or a Charter or an AT&T or a Verizon, who are trying to block you.

Gigi Sohn:
So, there are either enormous hurdles or flat-out bans. Now, when I was at the FCC, we tried to preempt those state laws and we were struck down; our decision was struck down by the 6th Circuit. So, it's either going to take Congress passing a law, and in fact there is one bill, the Accessible, Affordable Internet for All Act, that was actually passed by the House of Representatives and would preempt those state laws, or it's going to take the states themselves.

Gigi Sohn:
I've urged communities. I say to them, "Get every mayor that you know, get every chamber of commerce, get every university, and go to your state legislators and say, 'You are killing us and you are killing the state economy. You need to repeal this law.'"

Cindy Cohn:
Yeah, it's a disaster. Now, we do have some good news. One of the things that happened with the last DC Circuit ruling around network neutrality is that the circuit freed up the states to be able to do some of this work.

Gigi Sohn:
The Communications Act of 1934 does explicitly note that it is both the duty of the states and the federal government to provide connectivity for all. Obviously, they weren't thinking about broadband. They were thinking about telephony, but again, this is the telephone of the 21st century. There's always been a dual role.

Gigi Sohn:
Now, what happened, again, this was around the late 90s and early 00s, was that the cable and telephone companies went to state legislators and they said, "You know, the feds got this Internet regulation thing. You don't need to do it. You can deregulate yourself." And that's what they did, and indeed, Governor Brown signed a largely deregulatory bill in California. So, the states got out of the business of protecting consumers, protecting competition in their own states. And when you have a state as large as California, the notion that the state government would have nothing to do with this vital resource is kind of a crazy idea.

Danny O'Brien:
I got to spend some time a few years ago doing that thing where you have a focus group and you get to hear people actually talking about your issues. We were behind one of those two-way mirrors. And the funny thing was, of course, that our topic of interest is surveillance.

Danny O'Brien:
So, there are all these people talking about surveillance, and then occasionally looking over at the two-way mirror and wondering who exactly was listening. But the thing that came out of it, for me, was that people were freaked out about surveillance. Out of everyone they could have been mad at, though, people were particularly mad at the cable companies and the phone companies.

Danny O'Brien:
What was interesting to me is that this was before the Facebooks and the Googles began to attract the venom that they have now. People really don't like their cable companies. And this matters politically, too, I think particularly after the pandemic. Every single person who has a child needs broadband right now, because otherwise they can't comply with the education requirements of the day.

Danny O'Brien:
So, I think there's a real political moment here, and I think, tell me if I'm wrong, but I've seen politicians actually pick this up as an easy issue that isn't being addressed by, really, either side of the political divide effectively. And I think that it can work at every level. It can work at the city level, it can work at the state level, and the federal level. What should we be telling those politicians who, maybe, realize that this is a vote winner?

Gigi Sohn:
So, again, let's start at the state level. If you have a law that severely limits or prohibits local communities from deciding whether or not to build their own broadband networks, repeal it. Repeal it today, repeal it tomorrow. That is, to me, the number one target, in my mind, that is limiting competition, is limiting the closing of the digital divide. It is terribly anti-competitive and anti-consumer. So, that's number one.

Gigi Sohn:
At the local level, I would say consider building your own broadband network. There are so many cities and towns where, if you live just outside the city limits, you have to buy satellite, or you have to buy three different services, DSL plus satellite, and it costs like $300 to $400 a month. Those are places the private sector doesn't want to serve, because there's not an economic return that's big enough for them. That's where community networks will serve.

Gigi Sohn:
And at the federal level, look, the Feds have to do a couple of things. Number one, they have to immediately, first on an emergency basis and then permanently, pass what I call a monthly broadband benefit of at least $50 a month. Because these local community broadband builds are not going to happen overnight, you've got to make a dent in the affordability gap. And the way you do that, you could call it a voucher or a credit, I don't care. Now we've got industry on board.

Gigi Sohn:
The only thing that's holding this up right now is that Republicans don't want to pass a COVID-19 relief bill that's anything but a skinny bill that deals with some of the employment problems. I think this is definitely a COVID-19 problem, but the Republican Party doesn't agree. So, they need to do that, number one. First on a temporary basis, second on a permanent basis.

Gigi Sohn:
Second, to the extent that the states don't do it themselves, the federal government has to preempt those prohibitive state laws on municipal broadband. And third, they need to make a big bet on infrastructure, at least $80 to $100 billion, for those places where there is no broadband. And just to say, everybody likes to focus on rural America, rural America, rural America. There are lots of places in urban and suburban America that don't have infrastructure either.

Gigi Sohn:
But what's important is that the government has to do a better job of making sure it gets a return on that investment. We have spent tens of billions of dollars over the last decade on building infrastructure, and what's happened? Look at California: you get a company like Frontier that goes to the government trough and doesn't build what it promised. And now it's going into bankruptcy.

Gigi Sohn:
So, what's critical is for both federal and state governments working together as opposed to being adversaries, which they have been for the last three years, to make sure that, if my taxpayer dollars go into Frontier's pocket or CenturyLink's pockets, or anybody else's pockets, that we get the networks that we were promised.

Cindy Cohn:
Gigi, let's go to the question that we kind of started with. What does the world look like if we get this right? How does our world get better if we get this right?

Gigi Sohn:
If we get this right, every American who wants to be connected will be connected, and that's pretty much every American. One other thing that drives me absolutely nuts is people who say, "Well, there's lots of causes for the digital divide. Relevance is one of them." People don't think it's relevant.

Gigi Sohn:
Well, all you need to do is go see the lines to use the computers at the library to know that is false, and that relevance means a lot of different things to different people. It's another way of saying, "I can't afford it." It's another way of saying, "I don't have the digital literacy to be able to use a computer." So, in the world where we get this right, every American is connected at robust speeds, a minimum, in my opinion, of 100 megabits per second symmetrical, and the government money is going to build future-proof infrastructure, not stuff that we're going to have to upgrade again in another 10 years. And that means fiber.

Gigi Sohn:
Everything that allows for full participation in our society and our economy is now dependent on a robust broadband Internet access connection. So, that's what the world looks like, and I think we can get there, but we are so far from it right now, and it's shocking. The first national broadband plan was written in 2010 by my friend Blair Levin, who, at the time, coordinated this process at the FCC. And we have not even come close to fulfilling 90% of what he proposed in that report, and that is really sad.

Cindy Cohn:
There's so much that we're going to get if we fix this. It's kids, it's work, it's flexibility for everyone to be able to set their lives up in a way that matches them better. In this time of the pandemic, we're seeing how important it is to some people to be able to support their families. Robust broadband everywhere gives people so many more choices.

Cindy Cohn:
And I think there's an equity point under this, as well. Right now, it's pretty expensive to live in some of the places where people have to live to make a living. If we end up with robust broadband everywhere, we're going to free up people to do good work and do it from wherever they happen to be. I just don't know how many good works and excellent memes and good organizing and groundbreaking ideas we're missing because the only people who really get to participate are people who can live in places where there's really strong broadband. There's just so much we can gain from this.

Gigi Sohn:
Think about the moment we're in right now, where people are protesting in the streets every day for racial and social justice. The digital divide disproportionately impacts people of color, regardless of income. And that's because of systemic racism. That's because of unjust credit practices, unjust and discriminatory housing practices. You name it.

Gigi Sohn:
And years ago, in the 60s, Lyndon Johnson created something called the Kerner Commission. He basically had a guy named Otto Kerner, I don't remember what Kerner did, look at the causes of social unrest and racial inequality in this country. One of the causes was the lack of access to what was the only medium at the time, broadcasting: the way that broadcasters covered the protests, the Civil Rights protests, and how they covered communities of color. And needless to say, it was not positive.

Gigi Sohn:
So, access to the means of communication is a way of pulling one's self up and being equal in society, having an equal voice in society. So, it's much more than whether somebody in a garage can invent something. It's whether all Americans can have equal rights and equal access to the main means of communication in this country and, frankly, in this world.

Cindy Cohn:
I think that's such an important point, Gigi. We have to understand the role of technology in lifting people up and giving them access to information, and uniting people from different backgrounds. Lots of people have talked about that for years, but what we spend less time talking about, and what I think is equally important, is how technology is being used every day to document abuses of people in power, including police abuses against people of color.

Cindy Cohn:
And once those abuses are documented, how easily they can be widely and immediately shared, accessed and discussed. This ability to see what is actually going on in the streets in nearly real time has helped to shift the conversation about equity in our country. We have so far to go, but we're not going to get there without people across the country, and honestly across the globe, being able to participate by sharing what they see and accessing what other people see on their phones and computers, reading the articles, commenting on social media, organizing and reaching out to their representatives.

Cindy Cohn:
Internet access is just vital to all of these things. It is the infrastructure of democracy in our time, and also of social change. We have to understand that vital role and begin to think about broadband in that perspective.

Danny O'Brien:
I remember in the 90s, arguing with someone about broadband, and what was fast and what wasn't. I said, "Well, what about the upload speed? We've got to have a fast upload speed." And I remember this, he worked for British Telecom, and he sort of said, "What are they going to upload? Video? What are they going to create? We have the BBC."

Danny O'Brien:
Of course, that's what starts revolutions, is the ability to upload what you see around you and show that to the rest of the world, and you need fast Internet to do that.

Gigi Sohn:
Yeah, absolutely.

Cindy Cohn:
Well, thank you so much, Gigi. This has been a lot of fun, and I think we can build that better world, and I'm so glad you're a part of helping make it happen.

Danny O'Brien:
That was super interesting and I think one of the positive elements that I got out of it was this vision of people getting the chance to build or contribute to their own Internet connectivity. Though it seems to me that part of the reason why people get frustrated is because they don't feel they have any power, and the idea that you might have a municipality or a community or a local business providing you Internet connectivity is very inspiring because it'll mean that you literally have a connection to the people providing you the connection.

Danny O'Brien:
And also good for technologists, too, because I sometimes get frustrated, but it's not like I can go to Comcast headquarters. Whereas, if it was just down the road or my local city, I might be able to make a difference.

Cindy Cohn:
Yeah, I think that's right. The theme of a lot of this is how do we bring back user control, and what was exciting to me is that Gigi's really talking about giving users control of the very means by which they get to the Internet, which is the very first step. And I think the other thing that was really important from this is that we had a reasonable market in the late 1990s. We had a lot of choices for ISPs, and maybe a lot of people who came online later than that may not realize that.

Cindy Cohn:
This was something that we had kind of gotten done pretty well, and then we broke it. This is something that got broken. It got broken, in part, because of the FCC deciding that it didn't want to regulate anymore, a decision confirmed by a Supreme Court case called Brand X in 2005. Then we had a regulator that wanted to regulate again, which is when Gigi worked there. And now, under Ajit Pai, we have an FCC that doesn't want to regulate again.

Cindy Cohn:
But the good news in all of that is that we do know what a good answer looks like. It's not all or nothing in terms of regulation, as if, once you're regulating, you're all the way to a public utility. The Open Internet Order that we had in the last years of the Obama administration had a balance: basically, requiring some regulation in order to spur competition, but also something called forbearance, with the regulators saying, "We want to regulate in this way, but we don't need to do everything for broadband in the same way we did for telephone."

Danny O'Brien:
Right. I feel like there's just no way you can not regulate the telecom industry, because it's already tied up in so much red tape. And not just in the U.S., to be honest. This was a very American-specific conversation that we had here, but I end up working with a lot of people all around the world, and I know that I said that lots of countries have better connectivity than the U.S. on average, but a lot of countries have much worse connectivity as well.

Danny O'Brien:
And when I sit and talk to them, folks working there, they have exactly that same frustration. It always seems to be the same combination. It's always how do we break through a lack of competition, or the fact that the telcos have come to this agreement with governments that isn't working.

Cindy Cohn:
Yeah, it's interesting because sometimes this gets framed as regulation or not regulation, and first, as I mentioned, you can have smart regulation that really helps, but also, a lot of what Gigi was talking about was actually the law getting in the way, regulation. And she was talking about the things that we need to do to fix it. The first thing on her list was we need to get the 19 states that have said that people can't have municipal broadband or can't build their own competitors to the giants. We need to get those laws repealed. That's regulation as well, but it's regulation that's disempowering users, rather than empowering them.

Danny O'Brien:
What did you think about the idea of giving everybody $50 to get decent Internet?

Cindy Cohn:
Well, I think it's worth thinking of in the short term. And she said that. This was a short term subsidy. She basically said we're not going to be able to build out the infrastructure we need, especially for, and I thought it was important that she pointed out that we need infrastructure built, not just in rural places, which is where we think of immediately, but lots of urban places. We need to build that infrastructure.

Cindy Cohn:
So, I think the thought was that we needed to give people a subsidy so that they could get broadband now, because people need broadband now, especially during this pandemic time. That would be a bridge towards a time in which competition is actually helping us have more options and lower prices.

Cindy Cohn:
I'm open to that. I think we're in a time in which we need to think a little more broadly about how the government can support people. And certainly, the concerns that she raised about some of the ISPs, Frontier, for instance, taking a whole lot of government money, saying that they were going to build out infrastructure, and then not building it out and going bankrupt. That's just a horrible situation. And at least if you give money to the end users to buy connectivity for themselves, you avoid that kind of problem, which frankly is a lot more money lost.

Danny O'Brien:
Right. So, the idea is that, at least if you're giving the money to the users, they're going to expect and hopefully get something from those companies, rather than just giving the money directly to the companies. And yeah, I agree with you. It seems like the biggest fix here, the thing that stood out for me, was we need to get those 19 states that actually prohibit community and municipal broadband involvement. We need to get those laws off the books.

Cindy Cohn:
Yeah, and I guess the good news/bad news about that is it seems very clear that everybody hates their broadband providers. They hated them before the pandemic, and the pandemic has just made it worse. So, as an activist organization, that's our opportunity. There's a lot of public support for making sure everybody's kid can get an education while staying safe. And that sense, I think, from a lot of people, that they've been ripped off by their broadband providers for a very long time. We need to harness that energy towards a movement to basically fix this, to give us the broadband that we deserve.

Danny O'Brien:
Well, on that slightly mixed note of taking people's hatred of broadband providers and turning it into political action, we should wrap up. Thanks very much, Cindy, and thanks to our guest, Gigi Sohn.

Danny O'Brien:
Thanks again for joining us. If you'd like to support the Electronic Frontier Foundation, here are three things you can do today. One, you can hit subscribe in your podcast player of choice, and if you have time, please leave a review. It helps more people find us.

Danny O'Brien:
Two, please share on social media and with your friends and family. Three, please visit eff.org/podcasts, where you will find more episodes, learn about these issues, donate to become a member, and lots more. Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat or an EFF hoodie, or even a camera cover for your laptop. Thanks once again for joining us, and if you have any feedback on this episode, please email podcast@eff.org. We do read every email.

Danny O'Brien:
This podcast was produced by the Electronic Frontier Foundation, with help from Stuga Studios. Music by Nat Keefe of BeatMower.


This work is licensed under a Creative Commons Attribution 4.0 International License.

rainey Reitman

Asleep at the Wheel: Why Didn't Carmakers Prepare for Massachusetts' Right to Repair Law?

2 weeks 1 day ago

The people of Massachusetts demanded their right to repair this month, passing a ballot initiative to allow independent repair shops to access critical information about their cars by an overwhelming 74.9% majority. Now, automakers—whose scare tactics and false privacy and security claims did not fool Massachusetts voters—are expected to use another known tactic from their playbook and ask the legislature to delay implementing that law for years to come by saying the timeline is too tight.

That’s simply unacceptable. EFF stands behind the right to repair: If you bought it, you own it. You have the right to fix it yourself or take it to the repair shop of your choosing. Manufacturers often want to keep their customers tied to them long after a sale is done, and clearly are not above using whatever tactics they can to keep it that way. The people of Massachusetts didn’t fall for it. Neither should the legislature.

A Little History

Massachusetts has a long history of standing up for the right to repair. In 2012, the state voted by initiative to protect the right of drivers to take their cars to independent mechanics, rather than forcing everyone to go to the service depots run by manufacturers. Yet, since that law went into effect, manufacturers have worked to erode that right to repair, redesigning their products in ways that took repair choices away from drivers yet again.

This year, voters hit back by approving a new ballot initiative that requires vehicles with a telematics platform—software that collects and transmits diagnostic information about your car—to be equipped with an open data platform.

This initiative was popular throughout the election season and passed with overwhelming support from the people of Massachusetts, with nearly three-quarters of voters saying loud and clear that they demand a choice over who repairs their vehicles. If automakers were taken by surprise, it’s simply because they weren’t paying attention.

It’s been nearly three years since the Auto Care Association unveiled its secure vehicle interface standard, which combines three well-established ISO standards to create a secure, interoperable means of gathering vehicle telemetry and sharing it with the mechanic of the driver’s choosing.

The original Right to Repair measure was voted in by Bay Staters in 2012 with a 74% majority. The manufacturers immediately set about redesigning their vehicles to subvert the popular will. The State of Massachusetts has been boiling with rage at this maneuver ever since. It was clear from the start that the new initiative would pass as well.

And yet, the manufacturers are crying that they’ve been asleep at the wheel all this time, and will need years to retool to comply with this incredibly popular law.

Big Car should have been supporting interoperable, standardized access to telemetry from the start. The companies have no one to blame but themselves if they find themselves unprepared for this moment. We should treat their tactical claims of surprise with skepticism.

But even if you accept that the car makers have neglected to prepare for this moment, we should still be skeptical of their claim that it will take years to retool their vehicles to support the standard. This is not new hardware—it’s a new, well-specified data-format for existing sensors in vehicles. If Tesla can provide over-the-air software updates for its vehicles' suspension and FordPass can adjust engine parameters in real time, then there’s no good reason that car makers can’t output a standardized datastream from their sensor-packages.

We’re willing to believe that bringing new cars into compliance with the law will cost the manufacturers, and clearly any manufacturer that chose not to prepare for this absolutely foreseeable situation will have to shell out for overtime and other additional expenses. To the extent that those additional costs are a penalty, it’s a penalty the car makers have imposed on themselves—and it’s a just penalty.

Stop Pumping the Brakes

Question 1 takes another good step toward affirming the right to repair in Massachusetts, and EFF looks forward to building on this law in the coming years. Automakers and legislatures should not delay that progress, but rather work with the repair community to craft policy that encourages responsible data collection, improves access, and strengthens security.

The repair community has worked in good faith with car manufacturers over the years to find solutions that give consumers choices without compromising in ways that hurt their safety or privacy. We urge the legislature to do their jobs and listen to their voters, rather than rewarding automakers who subverted an incredibly popular law, extracted undeserved profits from the people of  Massachusetts, and then mulishly refused to prepare for the absolute pasting they were obviously going to get when Bay Staters went back to the ballot box in 2020. 

Hayley Tsukayama

Join Us for 2020's Virtual Aaron Swartz Day Hackathon

2 weeks 2 days ago

EFF is excited to participate this weekend in a virtual version of the annual Aaron Swartz Day and International Hackathon—a day dedicated to celebrating the continuing legacy of activist, programmer, and entrepreneur Aaron Swartz. 


Join EFF Senior Researcher Dave Maass and privacy advocate Madison Vialpando as they lead a virtual session on the Atlas of Surveillance project. Participants will gather news articles, press releases, and public records about law enforcement agencies using surveillance technologies such as social media monitoring, automated license plate readers, and body-worn cameras. EFF Special Advisor Cory Doctorow, Director of Strategy Danny O'Brien, and Senior Activist Elliot Harmon are also scheduled to speak about Aaron's legacy and how his work lives on today.

Aaron Swartz was a brilliant champion of digital rights, dedicated to ensuring the Internet remained a thriving ecosystem for open knowledge. EFF was proud to call him a close friend and collaborator. His life was cut short in 2013, after he was charged under the notoriously draconian Computer Fraud and Abuse Act for systematically downloading academic journal articles from the online database JSTOR.

Federal prosecutors have stretched this law beyond its original purpose of stopping malicious computer break-ins, reserving the right to push for heavy penalties for any behavior they don't like that happens to involve a computer. This was the case for Aaron, who was charged with eleven counts under the CFAA. Facing decades in prison, Aaron died by suicide at the age of 26. He would have turned 34 this year, on November 8.

In addition to EFF projects, the hackathon will focus on projects including SecureDrop, Open Library, and the Aaron Swartz Day Police Surveillance Project. The full lineup of speakers includes Aaron Swartz Day co-founder Lisa Rein, SecureDrop lead Mickael E., researcher Mia Celine, Lucy Parsons Lab founder Freddy Martinez, and Brewster Kahle—co-founder of Aaron Swartz Day and the Internet Archive.

Aaron Swartz Day will start at 10 a.m. PT, and is free to everyone. Those interested in participating can register on Eventbrite to have the streaming links emailed to them on the morning of Saturday, November 14. You can also follow along on Twitter at @AaronSwartzDay, or tune into the Aaron Swartz Day Facebook and YouTube channels.

Can't make it this weekend? You can still carry on Aaron's work. The organizers of Aaron Swartz Day are going to start hosting sessions to work on projects every Saturday, at 2 p.m. PT. Visit www.aaronswartzday.org for more information.

Hayley Tsukayama

Introducing “How to Fix the Internet,” a New Podcast from EFF

2 weeks 2 days ago

Today EFF is launching How to Fix the Internet, a new podcast mini-series to examine potential solutions to six ills facing the modern digital landscape. Over the course of six episodes, we’ll consider how current tech policy isn’t working well for users and invite experts to join us in imagining a better future. Hosted by EFF’s Executive Director Cindy Cohn and our Director of Strategy Danny O’Brien, How to Fix the Internet digs into the gritty technical details and the case law surrounding these digital rights topics, while charting a course toward how we can better defend the rights of users.

It’s easy to see all the things wrong with the modern Internet, and how the reality of most peoples’ experience online doesn’t align with the dreams of its early creators. How did we go astray and what should we do now?  And what would our world look like if we got it right? This podcast mini-series will tackle those questions with regard to six specific topics of concern: the FISA Court, U.S. broadband access, the third-party doctrine, barriers to interoperable technology, law enforcement use of face recognition technology, and digital first sale. In each episode, we are joined by a guest to examine how the current system is failing, consider different possibilities for solutions, and imagine a better future. After all, we can’t build a better world unless we can imagine it.

We are launching the podcast with two episodes: The Secret Court Approving Secret Surveillance, featuring the Cato Institute’s specialist in surveillance legal policy Julian Sanchez; and Why Does My Internet Suck?, featuring Gigi Sohn, one of the nation’s leading advocates for open, affordable, and democratic communications networks. Future episodes will be released on Tuesdays.

We’ve also created a hub page for How to Fix the Internet. This page includes links to all of our episodes, ways to subscribe, and detailed show notes. In the show notes, we’ve included all the  books mentioned in each podcast, as well as substantial legal resources—including key opinions in the cases we talk about, briefs filed by EFF, bios of our guests, and a full transcript of every episode. 

You can subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, and Spotify, and through any of the other podcast places. You can find our archive of MP3s from the podcast at the Internet Archive. If you have feedback on How to Fix the Internet, please email podcasts@eff.org.

rainey Reitman

Election Security: When to Worry, When to Not, and the Takeaway from Antrim County, Michigan

2 weeks 3 days ago

Everyone wants an election that is secure and reliable. With technology in the mix, making sure that the technology supports this is critical. EFF has long warned against blindly adopting technologies that can be easily manipulated or fail without systems in place to test, secure, and catch problems, including through risk-limiting audits. At the same time, not every problem is worth pulling the fire alarm about—we have to look at the bigger story and context. And we have to stand down when our worst fears turn out to be unfounded.

A story out of Antrim County, Michigan last week provides a good opportunity to apply this. What seems to have happened is that a needed software update was not applied to a system that helps collect and report digital vote information from the county—the county uses paper ballots that are scanned. As a result, it appeared that 6,000 votes shifted from Republicans to Democrats in the unofficial reports.

That is very worrisome. However, when the update was applied, the votes shifted back, because the actual tabulation figures were correct. Of course, there were paper ballots, too, which would have been cross-checked under Michigan’s processes had this not been caught so early. Our longtime election security friend and partner Professor Alex Halderman of the University of Michigan has a more technical rundown on his Twitter feed.

This story should be one that takes what could have been a big worry and instead gives us cause for relief. Instead of just direct-recording electronic voting machines (DREs) and election systems that don’t have fail-safes for errors, Michigan had good error-checking, and the error was caught quickly. Even if it hadn’t been, it is very likely that it would have been caught later, as the results shifted from unofficial to official. And it wasn’t even a computer or software error; it was a human one. But, of course, systems should take steps to protect against errors by humans running them too.

Bottom line: No fire alarm needed. Whew! We should see this story as a win for election security. It must not be promoted further as evidence of fraud. It is, in fact, evidence of the safeguards against fraud working.

What Can We Learn From This Incident?

First, and most importantly, we can learn that it is critical to have systems in place to support election technology and the election officials who run it. Failing to apply a software update is a predictable kind of mistake. The election officials were able to see what had happened and correct it because the system didn’t assume that everything would go smoothly. This is unfinished work. We and our friends continue to push for more transparency in election systems, more independent testing and red-team style attacks, and most importantly, for Risk Limiting Audits of election results.

Second, that voting on paper ballots continues to be extremely important and the most secure strategy. The situation in Michigan was much less concerning because there were hand-marked paper ballots available. With that backstop, most serious problems could be resolved without having to re-run the election. We still have many states and localities that do not have sufficient paper ballots, and we need to keep pushing.

Third, it is important to have the entire voting technical system under the control of election officials so that they can investigate any potential problems, which is one of the reasons why Internet voting remains a bad, bad idea.

Fourth, that we should continue to be vigilant. Election officials have come a long way from when we started raising concerns about electronic voting machines and systems. But the public should keep watching and, when warranted, not be afraid to raise or flag things that seem strange.

There may be more claims of computer glitches and other forms of manipulation in the days and weeks ahead. Knowing when to worry and when NOT to worry will continue to be important. When there is no cause for worry, the story should stop, which is what should happen with this Michigan story now.

But most importantly, the work of securing our elections must continue. This Michigan story isn’t worrisome after all, but that doesn’t mean that our elections are as secure as they need to be. And that’s the biggest challenge—continuing to support and fund the work to secure our elections, even when the bright glare of a hotly contested election has faded.

Cindy Cohn

10 Years of HTTPS Everywhere

2 weeks 4 days ago

It’s been 10 years since the beta release of EFF’s HTTPS Everywhere web browser extension. It encrypts your communications with websites, making your browsing more secure. In that decade, HTTPS has journeyed from an urgent recommendation to a mainstay of everyday web traffic. In 2018, we discussed the importance of HTTPS Everywhere and our ongoing effort to encrypt the web. We have come far and still have more work to do. This post gives a snapshot of the landscape of HTTPS Everywhere today.

HTTPS Everywhere and Friends

Since the launch of HTTPS Everywhere, other projects have also taken on the task of helping users browse securely. These more recent projects include DuckDuckGo’s Smarter Encryption and Smart HTTPS. The biggest difference is that HTTPS Everywhere still operates a community-curated list of rules for particular sites. Many users who add to our list have intimate knowledge of the sites they are contributing rules for. Examples of such reports include subdomains of a site with misconfigurations, insecure cookies, or CDN buckets that need to be accounted for.
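
To make that concrete, here is a simplified sketch, in TypeScript, of the from/to rewrite at the heart of a ruleset. It is illustrative only: real rulesets are community-curated XML files with targets, exclusions, and secure-cookie directives, and the host and patterns below are hypothetical.

```typescript
// Toy version of a ruleset: a per-site rule that maps insecure URLs
// to secure ones with a regex rewrite.

interface Rule {
  targetHosts: string[]; // hosts the rule applies to
  from: RegExp;          // pattern matched against the URL
  to: string;            // replacement string
}

const exampleRule: Rule = {
  targetHosts: ["example.com", "www.example.com"], // hypothetical site
  from: /^http:/,
  to: "https:",
};

function rewrite(url: string, rule: Rule): string {
  // Only rewrite URLs whose host the rule explicitly targets.
  const host = new URL(url).hostname;
  if (!rule.targetHosts.includes(host)) return url;
  return url.replace(rule.from, rule.to);
}

// Subresources (images, scripts, CDN assets) get the same treatment,
// provided a ruleset targets their host.
console.log(rewrite("http://example.com/logo.png", exampleRule));
// -> "https://example.com/logo.png"
```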

Many users wanted dynamic upgrades to HTTPS, so we developed the Encrypt All Sites Eligible (E.A.S.E) mode in HTTPS Everywhere.

EASE automatically attempts to upgrade connections from insecure HTTP to secure HTTPS for all sites, and prevents unencrypted connections from being made. This parallels the features of the more recent projects listed above. EASE mode also helps prevent downgrade attacks, where malicious actors attempt to redirect your browser to an insecure HTTP connection to a site. Other projects handle this slightly differently, but we want to emphasize that our rulesets apply to subresources on the page as well: if images and scripts on a page link to another domain, such as a Content Delivery Network (CDN), our rules can apply to those too. We are not only adding rulesets, but amending them as websites change. HTTPS Everywhere’s maintainers and contributors have done a fantastic job over the years maintaining this aspect of the project.

HTTPS Everywhere and DNS over HTTPS (DoH)

A common question is whether HTTPS Everywhere is still helpful if “DNS over HTTPS” (DoH) is enabled. Absolutely. The Domain Name System (DNS) looks up a site’s IP address when you type the site’s name into your browser. A DNS request occurs before the connection to the site’s server is made; DoH operates at this layer. After the DNS request has been made, the connection to the site’s server comes next. That is where HTTPS Everywhere comes in: it secures your traffic to the requested site. In short (and see the sketch after this summary):

  • DNS request: a request for the site’s IP address
  • HTTP request: a request to communicate with the site’s server and fetch website content
  • DoH and HTTPS: an encrypted request for the site’s IP address, and encrypted communication with the site’s server, respectively
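
Here is a minimal sketch of those two layers, assuming Node 18+ (for the global fetch) and using Cloudflare’s documented application/dns-json API as one example of a public DoH resolver. The details are illustrative, not how HTTPS Everywhere or any particular browser implements DoH.

```typescript
// Layer 1: the DNS lookup, encrypted by DoH.
async function resolveOverDoH(name: string): Promise<string[]> {
  const url =
    `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`;
  const res = await fetch(url, { headers: { accept: "application/dns-json" } });
  const body = await res.json();
  // Each answer record carries a resolved IP address in its `data` field.
  return (body.Answer ?? []).map((a: { data: string }) => a.data);
}

// Layer 2: the connection to the site itself, encrypted by HTTPS.
// Two separate layers, two separate protections.
resolveOverDoH("www.eff.org")
  .then((ips) => {
    console.log("Resolved over DoH:", ips);
    return fetch("https://www.eff.org/");
  })
  .then((page) => console.log("HTTPS fetch status:", page.status));
```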

Progress in the Browsers

Many browsers have made important strides in adopting HTTPS at a more aggressive rate. For example:

  • Some browsers further block mixed content, that is, page resources that aren’t encrypted (HTTP).
  • Some browsers now deploy and support Transport Layer Security (TLS) 1.3, the latest version of the protocol that is supported in HTTPS. 
  • Some browsers visually indicate to users whether a site is secure or insecure.
  • Chrome blocks third-party cookies over HTTP, and Firefox allows blocking third-party cookies entirely through its “Enhanced Tracking Protection.”
  • DNS over HTTPS (DoH) is available in both Chrome and Firefox.
  • Firefox Nightly quietly deployed an HTTPS-Only Mode (enter about:preferences#privacy in the browser’s address bar).

We hope to see these developments continue, especially the option to use HTTPS by default, in both Firefox and Chrome.

In the coming decade, we hope browsers will further help to encrypt the web. It’s time for browsers to close these remaining gaps and give users the choice to upgrade to HTTPS. We hope our HTTPS Everywhere project will eventually not be needed in its current state, because the browsers themselves will close these gaps. This will take a strong commitment by all major browsers to provide comprehensive HTTPS options for their users.

HTTPS Everywhere Innovation

In addition to encrypting your web traffic, HTTPS Everywhere provides extended features that have paved the way for some exciting developments in Internet privacy.

Human Readable Onions

Our update channels provide a secure way for other parties to load their own rulesets. For example, SecureDrop partnered with Tor to use HTTPS Everywhere update channels to provide human-readable onions in Tor Browser! As SecureDrop explains:

“SecureDrop uses onion services—accessible only via the Tor network—to protect sources sending tips to news organizations. When you visit an onion service (address ends with “.onion”), all traffic to and from the service is encrypted and anonymized.”

We are excited to be able to provide a platform for easily shared AND secure tips to newsrooms. A very big hat tip to SecureDrop and Tor Browser.

Rust + Web Assembly

HTTPS Everywhere’s ruleset rewrites are very useful, but they can be memory-heavy compared to most extensions. To alleviate this, we have a ruleset redirect engine written in Rust that compiles to WebAssembly. If WebAssembly isn’t supported, JavaScript is the fallback for rewrites. We picked Rust because it is a memory-safe language that is lightweight and manageable. It also meant we did not need to rewrite existing parts of the code base in order to take advantage of more modern web-application development.

Learn more about Rust + Web Assembly: https://rustwasm.github.io/docs/book/introduction.html

HTTPS Everywhere 2030

This project and its extended features were created to make privacy and security not only accessible but easily attainable for everyone. Anonymity and privacy on the web shouldn’t be limited to people with highly technical knowledge. Hopefully, when we write an update a decade from now, HTTPS Everywhere will be retired, because its encryption safeguards will have been fully integrated as a common feature of “the net.”

Thank you for using HTTPS Everywhere. If you haven’t installed it, do so today!

Alexis Hancock

Clearview’s Faceprinting is Not Sheltered from Biometric Privacy Litigation by the First Amendment

3 weeks 1 day ago

Clearview AI extracts faceprints from billions of people, without their consent, and uses these faceprints to offer a service to law enforcement agencies seeking to identify suspects in photos. Following an exposé by the New York Times this past January, Clearview faces more than ten lawsuits, including one brought by the ACLU, alleging the company’s faceprinting violates the Illinois Biometric Information Privacy Act (BIPA). That watershed law requires opt-in consent before a company collects a person’s biometrics. Clearview moved to dismiss, arguing that the First Amendment bars this BIPA claim.

EFF just filed an amicus brief in this case, arguing that applying BIPA to Clearview’s faceprinting does not offend the First Amendment. Following a short summary, this post walks through our arguments in detail. 

Above all, EFF agrees with the ACLU that Clearview should be held accountable for invading the biometric privacy of the millions of individuals whose faceprints it extracted without consent. EFF has a longstanding commitment to protecting both speech and privacy at the digital frontier, and the case brings these values into tension. But our brief explains that well-settled constitutional principles resolve this tension.

Faceprinting raises some First Amendment interests, because it is the collection and creation of information for purposes of later expression. However, as practiced by Clearview, this faceprinting does not enjoy the highest level of First Amendment protection, because it does not concern speech on a public matter, and the company’s interests are solely economic. Under the correct First Amendment test, Clearview may not ignore BIPA, because there is a close fit between BIPA’s goals (protecting privacy, speech, and information security) and its means (requiring opt-in consent).

Clearview’s Faceprinting Enjoys Some Protection

The First Amendment protects not just free expression, but also the necessary predicates that enable expression, including the collection and creation of information. For example, the U.S. Supreme Court has ruled that the First Amendment applies to reading books in libraries, gathering news inside courtrooms, creating video games, and newspapers’ purchasing of ink by the barrel.

Thus, courts across the country have held that the First Amendment protects our right to use our smartphones to record on-duty police officers. In the words of one federal appellate court: “The right to publish or broadcast an audio or audiovisual recording would be insecure, or largely ineffective, if the antecedent act of making the recording is wholly unprotected.” EFF has filed many amicus briefs in support of this right to record, and published suggestions about how to safely exercise this right during Black-led protests against police violence and racism.

Faceprinting is both the collection and creation of information and therefore involves First Amendment-protected interests. It collects information about the shape and measurements of a person’s face. And it creates information about that face in the form of a numerical representation. 
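
To see what “a numerical representation” means in practice, here is a purely conceptual sketch in TypeScript: a faceprint reduced to a short vector, and identification reduced to a vector comparison. Real systems use learned embeddings with hundreds of dimensions, and nothing below reflects Clearview’s actual code or data; the names and values are invented for illustration.

```typescript
// A faceprint as a numeric vector derived from face geometry.
type Faceprint = number[];

// Cosine similarity: closer to 1.0 means more likely the same face.
function similarity(a: Faceprint, b: Faceprint): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: Faceprint) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Hypothetical database of extracted faceprints.
const database = [
  { name: "person-a", print: [0.12, 0.85, 0.51] },
  { name: "person-b", print: [0.91, 0.02, 0.4] },
];

// Identification: compare a probe photo's faceprint against every entry.
function bestMatch(probe: Faceprint) {
  return database.reduce((best, candidate) =>
    similarity(candidate.print, probe) > similarity(best.print, probe)
      ? candidate
      : best
  );
}

console.log(bestMatch([0.1, 0.8, 0.55]).name); // -> "person-a"
```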

First Amendment protection of faceprinting is not diminished by the use of computer code to collect information about faces, or of mathematics to represent faces. Courts have consistently held that “code is speech,” because, like a musical score, it “is an expressive means for the exchange of information and ideas.” EFF has advocated for this principle from its founding through the present, in support of cryptographers, independent computer security researchers, inventors, and manufacturers of privacy-protective consumer tech.

Clearview’s Faceprinting Does Not Enjoy The Strongest Protection

First Amendment analysis only begins with determining whether the government’s regulation applies to speech or its necessary predicates. If so, the next step is to select the proper test. Here, courts should not apply “strict scrutiny,” one of the most searching levels of judicial inquiry, to BIPA’s limits on Clearview’s faceprinting. Rather, courts should apply “intermediate scrutiny,” for two intertwined reasons.

First, Clearview’s faceprinting does not concern a “public issue.” The Supreme Court has repeatedly held that the First Amendment is less protective of speech on “purely private” matters, compared to speech on public matters. It has done so, for example, where speech allegedly violated a wiretapping statute or the common law torts of defamation or emotional distress. The Court has explained that, consistent with the First Amendment’s core protection of robust public discourse, the universe of speech that involves matters of public concern is necessarily broad, but it is not unlimited.

Lower courts follow this distinction when speech allegedly violates the common law tort of publication of private facts, and when collection of information allegedly violates the common law tort of intrusion on seclusion. These courts have held that such privacy torts do not violate the First Amendment as long as they do not restrict discussion of matters of public concern.

Second, Clearview’s interests in faceprinting are solely economic. The Supreme Court has long held that “commercial speech,” meaning “expression related solely to the economic interests of the speaker and its audience,” receives “lesser protection” compared to “other constitutionally guaranteed expression.” Thus, when faced with First Amendment challenges to laws that protect consumer data privacy from commercial data processing, lower courts apply intermediate judicial review under the commercial speech doctrine. These decisions frequently focused not just on the commercial motivation, but also on the lack of a matter of public concern.

To be sure, faceprinting can be the predicate to expression that is relevant to matters of public concern. For example, a journalist or police reform advocate might use faceprinting to publicly name the unidentified police officer depicted in a video using excessive force against a protester. But this is not the application of faceprinting practiced by Clearview.

Instead, Clearview extracts faceprints from billions of face photos, absent any reason to think any particular person in those photos will engage in a matter of public concern. Indeed, the overwhelming majority of these people have not and will not. Clearview’s sole purpose is to sell the service of identifying people in probe photos, devoid of journalistic, artistic, scientific, or other purpose. It makes this service available to a select set of paying customers who are contractually forbidden from redistributing the faceprints.

In short, courts here should apply intermediate First Amendment review. To pass this test, BIPA must advance a “substantial interest,” and there must be a “close fit” between this interest and how BIPA limits speech.

Illinois Has Substantial Interests

BIPA advances three substantial government interests.

First, Illinois has a substantial interest in protecting biometric privacy. We have a fundamental human right to privacy over our personal information. But everywhere we go, we display a unique and indelible marker that can be seen from a distance: our own faces. So corporations can use face surveillance technology (coupled with the ubiquity of digital cameras) to track where we go, who we are with, and what we are doing.

Second, Illinois has a substantial interest in protecting the many forms of expression that depend on privacy. These include the rights to confidentially engage in expressive activity, to speak anonymously, to converse privately, to confidentially receive unpopular ideas, and to confidentially gather newsworthy information from undisclosed sources. Police use faceprinting to identify protesters, including with Clearview’s help. Government officials can likewise use faceprinting to identify who attended a protest planning meeting, who visited an investigative reporter, who entered a theater showing a controversial movie, and who left an unsigned pamphlet on a doorstep. So Clearview is not the only party whose First Amendment interests are implicated by this case.

Third, Illinois has a substantial interest in protecting information security. Data thieves regularly steal vast troves of personal data. Criminals and foreign governments can use stolen faceprints to break into secured accounts that can be opened by the owner’s face. Indeed, a team of security researchers did this with 3D models based on Facebook photos.

There Is A Close Fit Between BIPA and Illinois’ Interests 

BIPA requires private entities like Clearview to obtain a person’s opt-in consent before collecting their faceprint. There is a close fit between this rule and Illinois’ substantial interests. Information privacy requires, in the words of the Supreme Court, “the individual’s control of information concerning [their] person.” The problem is our lost control over our faceprints. The solution is to restore our control, by means of an opt-in consent requirement.

Opt-in consent is far more effective at restoring this control, compared to other approaches like opt-out consent. Many people won’t even know a business collected their faceprint. Of those who do, many won’t know they have the right to opt-out or how to do so. Even an informed person might be deterred because the process is time-consuming, confusing, and frustrating, as studies have shown. Indeed, many companies use “dark patterns” to purposefully design the user’s experience in a manner that manipulates so-called “agreement” to data processing.

Thus, numerous federal appellate and trial courts have upheld consumer data privacy laws like the one at issue here because of their close fit to substantial government interests.

Next Steps

Moving forward, EFF will continue to advocate for strong biometric privacy laws, and robust judicial interpretations of those laws. We will also continue to support bans on government use of face surveillance, including (as here) acquisition of information from corporations that wield this dangerous technology. More broadly, Clearview’s faceprinting is another reminder of the need for comprehensive federal consumer data privacy legislation. Finally, EFF will continue to oppose poorly taken First Amendment challenges to such laws, as we’ve done here.

You can read here our amicus brief in ACLU v. Clearview AI.

Adam Schwartz

RIAA Abuses DMCA to Take Down Popular Tool for Downloading Online Videos

3 weeks 2 days ago

"youtube-dl" is a popular free software tool for downloading videos from YouTube and other user-uploaded video platforms. GitHub recently took down youtube-dl’s code repository at the behest of the Recording Industry Association of America, potentially stopping many thousands of users, and other programs and services, that rely on it.

On its face, this might seem like an ordinary copyright takedown of the type that happens every day. Under the Digital Millennium Copyright Act (DMCA), a copyright holder can ask a platform to take down an allegedly infringing post and the platform must comply. (The platform must also allow the alleged infringer to file a counter-notice, requiring the copyright holder to file a lawsuit if she wants the allegedly infringing work kept offline.) But there’s a huge difference here with some frightening ramifications: youtube-dl doesn’t infringe on any RIAA copyrights.

RIAA’s argument relies on a different section of the DMCA, Section 1201. DMCA 1201 says that it’s illegal to bypass a digital lock in order to access or modify a copyrighted work. Copyright holders have argued that it’s a violation of DMCA 1201 to bypass DRM even if you’re doing it for completely lawful purposes; for example, if you’re downloading a video on YouTube for the purpose of using it in a way that’s protected by fair use. (And thanks to the way that copyright law has been globalized via trade agreements, similar laws exist in many other jurisdictions too.) RIAA argues that since youtube-dl could be used to download music owned by RIAA-member labels, no one should be able to use the tool, even for completely lawful purposes.

This is an egregious abuse of the notice-and-takedown system, which is intended to resolve disputes over allegedly infringing material online. Again, youtube-dl doesn’t use RIAA-member labels’ music in any way. The makers of youtube-dl simply shared information with the public about how to perform a certain task—one with many completely lawful applications.

We've put together an explainer video on this takedown, and its implications for free speech online:

(Embedded video: https://www.youtube.com/embed/ck7utXYcZng. Privacy info: this embed will serve content from youtube.com.)

Please share this video with others who use YouTube and other video uploading services. And if you use youtube-dl for lawful purposes, we want to hear from you. Email us at info@eff.org and include “youtube-dl” in the subject line.

Elliot Harmon

Ink-Stained Wretches: The Battle for the Soul of Digital Freedom Taking Place Inside Your Printer

3 weeks 2 days ago

Since its founding in the 1930s, Hewlett-Packard has been synonymous with innovation, and many's the engineer who had cause to praise its workhorse oscillators, minicomputers, servers, and PCs. But since the turn of this century, the company's changed its name to HP and its focus to sleazy ways to part unhappy printer owners from their money. Printer companies have long excelled at this dishonorable practice, but HP is truly an innovator, the industry-leading Darth Vader of sleaze, always ready to strong-arm you into a "deal" and then alter it later to tilt things even further to its advantage.

The company's just beat its own record, converting its "Free ink for life" plan into a "Pay us $0.99 every month for the rest of your life or your printer stops working" plan.

Plenty of businesses offer some of their products on the cheap in the hopes of stimulating sales of their higher-margin items: you've probably heard of the "razors and blades" model (falsely) attributed to Gillette, but the same goes for cheap Vegas hotel rooms and buffets that you can only reach by running a gauntlet of casino "games," and cheap cell phones that come locked into a punishing, eternally recurring monthly plan.

Printers are grifter magnets, and the whole industry has been fighting a cold war with its customers since the first clever entrepreneur got the idea of refilling a cartridge and settling for mere astronomical profits, thus undercutting the manufacturers' truly galactic margins. This prompted an arms race in which the printer manufacturers devote ever more ingenuity to locking third-party refills, chips, and cartridges out of printers, despite the fact that no customer has ever asked for this.

Lexmark: First-Mover Advantage

But for all the dishonorable achievements of the printer industry's anti-user engineers, we mustn't forget the innovations their legal departments have pioneered in the field of ink- and toner-based bullying. First-mover advantage here goes to Lexmark, the printing division IBM spun off in the 1990s, whose lawyers ginned up an (unsuccessful) bid to use copyright law to prevent a competitor, Static Control, from modifying used Lexmark toner cartridges so they'd work after they were refilled.

A little more than a decade after its failure to get the courts to snuff out Static Control, Lexmark was actually sold off to Static Control's parent company. Sadly, Lexmark's aggressive legal culture came along with its other assets, and within a year of the acquisition, Lexmark's lawyers were advancing a radical theory of patent law to fight companies that refilled its toner cartridges.

HP: A Challenger Appears

Lexmark's fights were over laser-printer cartridges, filled with fine carbon powder that retailed at prices that rivaled diamonds and other exotic forms of that element. But laser printers are a relatively niche part of the printer market: the real volume action is in inkjet printers: dirt-cheap, semi-disposable, and sporting cartridges (half-) full of ink priced to rival vintage Veuve-Clicquot.

For the inkjet industry, ink was liquid gold, and they innovated endlessly in finding ways to wring every drop of profit from it. Companies manufactured special cartridges that were only half-full for inclusion with new printers, so you'd have to quickly replace them. They designed calibration tests that used vast quantities of ink, and, despite all this calibration, never could quite seem to get a printer to register that there was still lots of ink left in the cartridge that it was inexplicably calling "empty" and refusing to draw from.

But all this ingenuity was at the mercy of printer owners, who simply did not respect the printer companies' shareholders enough to voluntarily empty their bank accounts to refill their printers. Every time the printer companies found a way to charge more for less ink, their faithless customers stubbornly sought out competitors who'd refill or remanufacture their cartridges, or offer compatible cartridges.

Security Is Job One

Shutting out these rivals became job one. When your customers reject your products, you can always win their business back by depriving them of the choice to patronize a competitor. Printer cartridges soon bristled with "security chips" that use cryptographic protocols to identify and lock out refilled, third-party, and remanufactured cartridges. These chips were usually swiftly reverse-engineered or sourced out of discarded cartridges, but then the printer companies used dubious patent claims to have them confiscated by customs authorities as they entered the USA. (We’ve endorsed legislation that would end this practice.)

Here again, we see the beautiful synergy of anti-user engineering and anti-competition lawyering. It's really heartwarming to see these two traditional rival camps in large companies cease hostilities and join forces.

Alas, the effort that went into securing HP from its customers left precious few resources to protect HP customers from the rest of the world. In 2011, the security researcher Ang Cui presented his research on HP printer vulnerabilities, "Print Me If You Dare."

(Embedded video: https://www.youtube.com/embed/njVv7J2azY8. Privacy info: this embed will serve content from youtube.com.)

Cui found that simply by hiding code inside a malicious document, he could silently update the operating system of HP printers when the document was printed. His proof-of-concept code was able to seek out and harvest Social Security and credit-card numbers; probe the local area network; and penetrate the network's firewall and allow him to freely roam it using the compromised printer as a gateway. He didn't even have to trick people into printing his gimmicked documents to take over their printers: thanks to bad defaults, he was able to find millions of HP printers exposed on the public Internet, any one of which he could have hijacked with unremovable malware merely by sending it a print-job.

The security risks posed by defects in HP's engineering are serious. Criminals who hack embedded systems like printers and routers and CCTV cameras aren't content with attacking the devices' owners—they also use these devices as botnets for devastating denial of service and ransomware attacks.

For HP, though, the "security update" mechanism built into its printers was a means for securing HP against its customers, not securing those customers against joining botnets or having the credit card numbers they printed stolen and sent off to criminals.

In March 2016, HP inkjet owners received a "security update available" message on their printers' screens. When they tapped the button to install this update, their printers exhibited the normal security update behavior: a progress bar, a reboot, and then nothing. But this "security update" was actually a ticking bomb: a countdown timer that waited for five months before it went off in September 2016, activating a hidden feature that could detect and reject all third-party ink cartridges.

HP had designed this malicious update so that infected printers would be asymptomatic for months, until after parents had bought their back-to-school supplies. The delay ensured that warnings about the "security update" came too late for HP printer owners, who had by then installed the update themselves.

HP printer owners were outraged and told the company so. The company tried to weather the storm, first by telling customers that they'd never been promised their printers would work with third-party ink, then by insisting that the lockouts were to ensure printer owners didn't get "tricked" with "counterfeit" cartridges, and finally by promising that future fake security updates would be clearly labeled.

HP never did disclose which printer models it attacked with its update, and a year later, it did it again, once again waiting until after the back-to-school season to stage a sneak attack, stranding cash-strapped parents with a year's worth of useless ink cartridges for their kids' school assignments.

You Don't Own Anything

Other printer companies have imitated HP's tactics but HP never lost its edge, finding new ways to transfer money from printer owners to its tax-free offshore accounts.

HP's latest gambit challenges the basis of private property itself: a bold scheme! With the HP Instant Ink program, printer owners no longer own their ink cartridges or the ink in them. Instead, HP's customers have to pay a recurring monthly fee based on the number of pages they anticipate printing from month to month; HP mails subscribers cartridges with enough ink to cover their anticipated needs. If you exceed your estimated page-count, HP bills you for every page (if you choose not to pay, your printer refuses to print, even if there's ink in the cartridges).

If you don't print all your pages, you can "roll over" a few of those pages to the next month, but you can't bank a year's worth of pages to, say, print out your novel or tax paperwork. Once you hit your maximum number of "banked" pages, HP annihilates any other pages you've paid for (but continues to bill you every month).

Now, you may be thinking, "All right, but at least HP's customers know what they're getting into when they take out one of these subscriptions," but you've underestimated HP's ingenuity.

HP takes the position that its offers can be retracted at any time. For example, HP's “Free Ink for Life” subscription plan offered printer owners 15 pages per month as a means of tempting users to try out its ink subscription plan and of picking up some extra revenue in those months when these customers exceeded their 15-page limit.

But Free Ink for Life customers got a nasty shock at the end of last month: HP had unilaterally canceled their "free ink for life" plan and replaced it with "a $0.99/month for all eternity or your printer stops working" plan.

Ink in the Time of Pandemic

During the pandemic, home printers have become far more important to our lives. Our kids' teachers want them to print out assignments, fill them in, and upload pictures of the completed work to Google Classroom. Government forms and contracts have to be printed, signed, and photographed. With schools and offices mostly closed, these documents are being printed from our homes.

The lockdown has also thrown millions out of work and subjected millions more to financial hardship. It's hard to imagine a worse time for HP to shove its hands deeper into its customers' pockets.

Industry Leaders

The printer industry leads the world when it comes to using technology to confiscate value from the public, and HP leads the printer industry.

But these are infectious grifts. For would-be robber-barons, "smart" gadgets are a moral hazard, an irresistible temptation to use those smarts to reconfigure the very nature of private property, such that only companies can truly own things, and the rest of us are mere licensors, whose use of the devices we purchase is bound by the ever-shifting terms and conditions set in distant boardrooms.

From Apple to John Deere to GM to Tesla to Medtronic, the legal fiction that you don't own anything is used to force you to arrange your affairs to benefit corporate shareholders at your own expense.

And when it comes to the "razors and blades" business model, embedded systems offer techno-dystopian possibilities that no shaving company ever dreamed of: the ability to use law and technology to prevent competitors from offering their own consumables. From coffee pods to juice packets, from kitty litter to light-bulbs, the printer-ink cartridge business model has inspired many imitators.

HP has come a long way since the 1930s, reinventing itself several times, pioneering personal computers and servers. But the company's latest reinvention as a wallet-siphoning ink grifter is a sad turn indeed, and the only thing worse than HP’s decline is the many imitators it has inspired.

Cory Doctorow

How to Identify Visible (and Invisible) Surveillance at Protests

3 weeks 2 days ago

UPDATE Nov. 5, 2020.  Want a crash course in how to identify surveillance technologies at protests? Watch EFF’s new video presentation on How to Observe Police Surveillance at Protests. The 25-minute video, taught by Senior Investigative Researcher Dave Maass, explains how you can identify various police surveillance technologies, like body-worn cameras, drones, and automated license plate readers, which may be used to surveil demonstrations. In the video, you will learn:

  • Where to look for these devices
  • How these technologies look
  • How these technologies function
  • How they are used by police
  • What kind of data they collect
  • Where to learn more about them

The video focuses on surveillance technologies common in the United States, but many of these technologies have been deployed by police around the world.

(Embedded video: https://www.youtube-nocookie.com/embed/oGscYgR7bXc. Privacy info: this embed will serve content from youtube-nocookie.com.)

Original post published June 4, 2020. 

The full weight of U.S. policing has descended upon protesters across the country as people take to the streets to denounce the police killings of Breonna Taylor, George Floyd, and countless others who have been subjected to police violence. Along with riot shields, tear gas, and other crowd control measures also comes the digital arm of modern policing: prolific surveillance technology on the street and online.

For decades, EFF has been tracking police departments’ massive accumulation of surveillance technology and equipment. You can find detailed descriptions and analysis of common police surveillance tech at our Street-Level Surveillance guide. As we continue to expand our Atlas of Surveillance project, you can also see what surveillance tech law enforcement agencies in your area may be using. 

If you’re attending a protest, don’t forget to take a look at our Surveillance Self-Defense guide to learn how to keep your information and digital devices secure when attending a protest. 

Here is a review of surveillance technology that police may be deploying against ongoing protests against racism and police brutality.


Surveillance Tech that May be Visible
Body-Worn Cameras

Officers wearing new body cams for the first time. Source: Houston Police Department

Unlike many other forms of police technology, body-worn cameras may serve both a law enforcement function and a public accountability function. Body cameras worn by police can deter and document police misconduct and use of force, but footage can also be used to surveil both people that police interact with and third parties who might not even realize they are being filmed. If combined with face recognition or other technologies, thousands of police officers wearing body-worn cameras could record the words, actions, and locations of much of the population at a given time, raising serious First and Fourth Amendment concerns. For this reason, California placed a moratorium on the use of face recognition technology on mobile police devices, including body-worn cameras.

Axon Flex camera system. Source: TASER Training Academy presentation for Tucson Police Department

Body-worn cameras come in many forms. Often they are square boxes on the front of an officer's chest. Sometimes they are mounted on the shoulder. In some cases, the camera may be partially concealed under a vest, with only the lens visible. Companies are also marketing tactical glasses that include a camera and face recognition; we have not seen this deployed in the United States yet.

A body-worn camera lens is visible between the buttons on a Laredo Police officer's vest. Source: Laredo Police Department Facebook

Drones

The Sahuarita Police Department displays its drones on a table. Source: Town of Sahuarita YouTube

Drones are unmanned aerial vehicles that can be equipped with high definition, live-feed video cameras, thermal infrared video cameras, heat sensors, automated license plate readers, and radar—all of which allow for sophisticated and persistent surveillance. Drones can record video or still images in daylight or use infrared technology to capture such video and images at night. They can also be equipped with other capabilities, such as cell-phone interception technology, as well as back-end software tools like license plate readers, face recognition, and GPS trackers. There have been proposals for law enforcement to attach lethal and less-lethal weapons to drones.

Drones vary in size, from tiny quadrotors (also known as Small Unmanned Aerial Vehicles or sUAVs) to large fixed-wing aircraft, such as the Predator drone. They are harder to spot than airplane or helicopter surveillance, because they are smaller and quieter, and they can sometimes stay in the sky for a longer duration.

Activists and journalists may also deploy drones in a protest setting, exercising their First Amendment rights to gather information about police response to protestors. So if you do see a drone at a protest, you should not automatically conclude that it belongs to the police.

Automated License Plate Readers

Photo by Mike Katz-Lacabe (CC BY)

Automated license plate readers (ALPRs) are high-speed, computer-controlled camera systems that can be mounted on street poles, streetlights, highway overpasses, mobile trailers, or attached to police squad cars. ALPRs automatically capture all license plate numbers that come into view, along with the location, date, and time. The data, which includes photographs of the vehicle and sometimes its driver and passengers, is then uploaded to a central server.

Photo by Mike Katz-Lacabe (CC BY)

At a protest, police can deploy ALPRs to identify people driving toward, away from, or parking near a march, demonstration, or other public gathering. For example, CBP deployed an ALPR trailer at a gun show attended by Second Amendment supporters. Used in conjunction with other ALPRs around a city, the technology lets police track protestors' movements as they travel from the demonstration to their homes.

Mobile Surveillance Trailers/Towers

A 'Mobile Utility Surveillance Tower' at San Diego Comic-Con and a mobile surveillance pole in New Orleans' French Quarter

Hundreds of police departments around the country have mobile towers that can be parked and raised a number of stories above a protest. These are often equipped with cameras, spotlights, speakers, and sometimes have small enclosed spaces for an officer. They also often have ALPR capabilities. 

Common towers include the Terrahawk M.U.S.T., which looks like a guard tower mounted on a van, and the Wanco surveillance tower, a truck trailer with a large extendable pole. 

FLIR Cameras

Forward-looking infrared (FLIR) cameras are thermal cameras that read a person's body heat, allowing them to be surveilled at night. These cameras can be handheld or mounted on a car, rifle, or helmet, and are often used in conjunction with aerial surveillance such as planes, helicopters, or drones. 

Surveillance Tech That May Not Be Visible

Face Recognition (or Other Video Analytics)

Face recognition in the field from a San Diego County presentation

Face recognition is a method of identifying or verifying the identity of an individual using their face. Face recognition systems can be used to identify people in photos, video, or in real-time. Law enforcement may also use mobile devices to identify people during police stops.
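
Under the hood, most face recognition matching reduces to comparing numeric “faceprints.” The Python sketch below assumes an upstream model has already converted face images into embedding vectors; the names, vectors, and threshold are invented for illustration rather than drawn from any real system.

    import math

    # Hypothetical embeddings: an upstream model turns each face image
    # into a fixed-length vector, and similar faces yield similar vectors.
    enrolled = {"person_a": [0.11, 0.62, 0.30], "person_b": [0.90, 0.05, 0.44]}
    probe = [0.12, 0.60, 0.33]  # vector computed from a new camera frame

    def cosine_similarity(u, v):
        """Similarity of two vectors; 1.0 means identical direction."""
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    MATCH_THRESHOLD = 0.95  # illustrative; real systems tune this trade-off

    for name, vector in enrolled.items():
        score = cosine_similarity(probe, vector)
        if score >= MATCH_THRESHOLD:
            print(f"possible match: {name} (score {score:.3f})")

The threshold is where the trouble lives: set it low and the system produces false matches; set it high and it misses true ones. Either failure mode has serious consequences when the probe image is a protester’s face.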

At a protest, any camera you encounter may have face recognition or other video analytics enabled. This includes police body cameras and cameras mounted on buildings, streetlights, or surveillance towers. 

Also, some police departments have biometric devices, such as specialized smartphones and tablets, that can identify individuals in custody. Likewise, face recognition can occur during the booking process at jails and holding facilities. 

Social Media Monitoring 

Social media monitoring is prevalent, especially surrounding protests. Police often scour hashtags, public events, digital interactions and connections, and digital organizing groups. This can be done either by actual people or by an algorithm trained to collect social media posts containing certain hashtags, words, phrases, or geolocation tags. 
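
For a rough sense of how the algorithmic version works, here is a minimal Python sketch that flags posts containing watched terms. The watchlist and sample posts are invented for illustration; real systems pull posts from platform APIs and layer on geolocation and network filters.

    # Hypothetical watchlist of terms an agency might monitor.
    WATCHED_TERMS = {"#protest", "#march", "rally", "meet at"}

    posts = [
        {"user": "organizer_1", "text": "Rally tomorrow, meet at city hall #protest"},
        {"user": "neighbor_2", "text": "Anyone know a good taco place downtown?"},
    ]

    def flag_posts(posts, watched_terms):
        """Return posts whose text contains any watched term (case-insensitive)."""
        return [p for p in posts if any(t in p["text"].lower() for t in watched_terms)]

    for post in flag_posts(posts, WATCHED_TERMS):
        print(post["user"], "->", post["text"])

The simplicity is the point: a few lines of matching logic, run at scale, can sweep in everyone who mentions a demonstration, whether or not they ever attend.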

EFF and other organizations have long called on social media platforms like Facebook to prohibit police from using covert social media accounts under fake names.  Pseudonyms such as “Bob Smith” have long allowed police to infiltrate private Facebook groups and events under false pretenses. 

Cell-Site Simulators

Cell-site simulators, also known as IMSI catchers, Stingrays, or dirtboxes, are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower.

Police may use cell-site simulators to identify all of the IMSIs (International Mobile Subscriber IDs) at a protest or other physical place. Once they identify the phones’ IMSIs, they can then try to identify the protesters who own these phones. In the non-protest context, police also use cell-site simulators to identify the location of a particular phone (and its owner), often with greater accuracy than they could do with phone company cell site location information. 

Real-time Crime Centers 

Fresno Police Department's Real-time Crime Center. Source: Fresno PD Annual Report 2015

Real-time crime centers (RTCCs) are command centers where officers and analysts use a variety of surveillance technologies and data sources to monitor communities. RTCCs often provide a central location for analyzing ALPR feeds, social media, and camera networks, and offer analysts the ability to run predictive algorithms. 

Matthew Guariglia

In an Uncertain World, EFF Will Always Support the Users

3 weeks 2 days ago

EFF turned thirty this year. In our three decades of work, we’ve seen huge shifts in the way technology and the Internet help, harm, and otherwise influence the lives of nearly everyone on the planet—and that includes its enormous influence on electoral politics. Our thirty-year view has allowed us the insight that regardless of who is in power, technology can be wielded in the service of justice and democracy, or it can be used as a cudgel to disempower and disenfranchise. 

Right now, the results of the U.S. Congressional and Presidential elections are still being finalized, and votes are still being tallied.

But whatever the final result of this election, we’ll continue to have work to do. During the Clinton era, the government unveiled a vision of the Internet dominated by copyright holders, fought to outlaw strong encryption (but then surrendered), and passed draconian Internet censorship bills. The George W. Bush administration liberalized encryption regulations, but pushed through the Patriot Act, and dangerously expanded NSA spying and government secrecy. Obama voted against NSA reform as a Senator, and as President, further expanded the U.S.’s surveillance apparatus, but ultimately also stopped several secret programs. The current Trump administration has launched an assault on crucial protections for online speech and innovation, as well as encryption, and continues to defend mass surveillance in court.

We fought all of these efforts, holding firm to our core value: technology must work for the user. No matter which party is in power in the White House or Congress, or who sits on the Supreme Court, EFF will work to ensure that when you use technology, and when you go online, your rights go with you. We won your right to encrypt, successfully advocated for an uncensored Internet, and began the long battle to ensure your devices weren't locked down and spied on by copyright lobby decree. We exposed the domestic mass surveillance program under Bush, and fought back in the courts and in Congress under Bush, Obama, and Trump.

Join EFF Now

Start a Monthly or Annual Recurring Contribution

The focus of our work in the next few years will vary, depending on the shape of these election results. But the opportunities and temptations for politicians to undermine or support your rights will remain. We can expect more attempts to undermine end-to-end encryption, more bipartisan attacks on Section 230, more government surveillance, including when we organize, protest, and travel, and more demands from corporations and state actors to bend the Internet into a form friendlier to their interests and less centered on the civil liberties of its users. Elected officials will also have the chance to make the digital world better: by encouraging fiber roll-outs, busting the Big Tech monopolies, and supporting real privacy protections and decentralized, open systems, to name just a few. All of these battles will play out against a backdrop of increased polarization, climate change, and unresolved issues of social injustice. But those wider debates will depend on a secure, free Internet for their successful resolution.

These elections don’t just matter at a national level. The fifty states and many localities are playing an increasingly important role in determining our online future. Proposition 24, a wide-ranging privacy initiative, has passed in California. Before the vote, EFF helped inform voters about its mix of good and bad provisions. Now that it has passed, we’ll do the necessary work of crafting effective privacy laws with real teeth and no unintended consequences. That’s what we do—even when no one is watching. 

And of course, it's important to remember that the fight for privacy, free expression, and innovation goes beyond the U.S. The Internet connects each and every one of us, and laws like the EU's GDPR and Copyright Directive matter across the globe. That's why, while we know the U.S. election has global consequences, we've already expanded our work in Europe, Latin America, and beyond. Everyone wants to fix the Internet, but not every idea is a good one: we're here to stop the worst and promote the best. 

Policies that arise out of the pandemic continue to be a priority for us. Many of our digital rights are impacted by COVID-19, from the potential expansion of government surveillance to the clear need to expand fiber access for all, ensure the tools we use to stay connected are secure, and push back against apps that would invade the privacy of workers and students. This has been a focus for EFF the last eight months, and will continue to be. 

The right to vote is precious and hard-won—like all of the rights we defend here at EFF. And its exercise gives individuals the chance to change the world. Whatever the result, we’re ready to keep fighting alongside our members to steer those changes toward a better, freer future.

Keep EFF Going Strong

Start a Monthly or Annual Recurring Contribution

Cindy Cohn

Turkey Doubles Down on Violations of Digital Privacy and Free Expression

3 weeks 3 days ago

Turkey's recent history is rife with human rights-stifling legislation and practices. The Internet Law, its amendments, and a recent decision by Turkey's regulator (BTK) further cement that trend. The Internet Law and its amendments require large platforms to appoint a local representative, localize their data, and speed up the removal of content on demand from the government. Turkey has also adopted a data protection law; however, it has failed to protect fundamental rights in practice. For instance, after the 2016 coup attempt, Turkey implemented emergency surveillance decrees that granted the Turkish government unrestricted access to communications data without a court order, a carte blanche for government spying. Platforms should stand with their users and uphold international human rights law and standards that protect privacy and free expression. We fear that platforms may instead knuckle under to pressure from the Turkish government.

All of these legal changes are happening amidst Turkey's rule-of-law and democracy deficit and its lack of an independent judiciary. Turkey has dismissed or forced the removal of more than 30 Turkish judges and prosecutors, which the European Commission says has led to self-censorship in the judiciary, further undermining its independence and impartiality. The government has also jailed political opponents; the European Court of Human Rights found, beyond any reasonable doubt, that the extensions of detention of a political opposition leader pursued the predominant ulterior purpose of stifling pluralism and limiting freedom of political debate, a core component of a democratic society.

Appoint a Turkish Representative or Face Hefty Fines

Foreign companies with a large social media presence in Turkey were required to appoint a local representative by November 2nd, 30 days after BTK sent them a first warning. The law applies to companies with “daily access” of more than one million, though it is unclear how “daily access” is measured. The representative must be either a company duly incorporated under Turkish law or a Turkish citizen. Appointing a legal representative is a complex decision that can expose companies to domestic legal action, including potential arrest and criminal charges, and it requires difficult line-drawing choices that are hard to get right when legally mandated.

Prior to this requirement, the Turkish government sent content takedown or access-blocking demands to platforms' headquarters in the EU or the U.S. Some local representatives may now respond to such government demands, and can potentially be subject to retaliation for non-compliance with a disproportionate order. 

Facebook has communicated to the Turkish government that it won't comply with the law. Twitter, Google, and TikTok have made no official statement about their intentions, though the deadline expired on November 2nd. To date, only the Russian social media company VKontakte has appointed a local representative in Turkey. The Turkish regulator BTK announced on November 4th that it had imposed a first-time fine of TRY 10 million (more than 1 million USD) on social network providers that have not appointed local representatives, including Facebook, Instagram, Twitter, Periscope, YouTube, and TikTok. 

The law imposes draconian penalties on companies that do not appoint a representative, including second-time fines of TRY 30 million (more than 3.5 million USD). If a provider still does not comply, the BTK will prohibit Turkish taxpayers from placing ads on the provider's platform or from making payments to it. Even worse, if the provider still refuses to appoint a representative, the BTK can apply to the Peace Criminal Court to throttle (slow down) the provider's bandwidth by 50%. If the provider still hasn't appointed a representative, the BTK can apply for a further bandwidth reduction; this time the judge can decide to throttle the provider's bandwidth anywhere between 50% and 90%. Throttling at these levels makes sites practically inaccessible within Turkey, fortifying the country's censorship machine and silencing speech: a wildly disproportionate measure that effectively cuts users off from online content. 

Forced Data Localization

The amendments to the Internet Law and the BTK decision also force tech giants to take “all necessary measures” to keep the data of people based in Turkey within the country. These data localization obligations raise significant concerns about user privacy, free speech, and information security. Forcing platforms to localize data can run afoul of users' expectation of, and freedom to sign up for, a service hosted outside Turkey, a factor they may well have taken into account when choosing that service.

Once the data is stored in the country, the company has less ability to control the exploitation of vulnerabilities and unauthorized access. If companies keep users’ data within Turkey, that nation will have an easier time accessing the data. This forces companies to comply with government demands in a country with a poor human rights record. Here, too, some local representatives may over-comply with government demands out of fear of reprisal. 

Overall, this measure seeks to strengthen the Turkish government's ability to control content published in social media and make it easier for it to obtain users’ data, which will expose their associations and locations. 

Wide-ranging Privacy Rules, with No Balancing For Free Expression

The new amendments create a number of powerful new tools for those who wish to remove personal information from the Internet: to de-link search engine entries, order hosts to delete information, and block foreign hosts that refuse to comply. While every country is now struggling with the privacy consequences of an open Internet, Turkey's provisions are overbroad and fail to balance the equally important right of Internet users to send and share information.

De-indexing Contested Search Results for Violation of Personal Rights

The new amendments to the Internet Law and the BTK decisions would require providers (via their local representatives), upon a court order, to de-index search results that link users' names to the Internet address of a publication that violates their personal rights, a provision similar to the European "right to be forgotten" recognized in 2014. The EU standard, however, asks whether an individual's privacy rights outweigh the public's interest in having continued access to the data. No matter how carefully a de-indexation provision is drafted, conflicting principles of due process and free expression inevitably render it a complex and contested task. The problem is exacerbated by Turkey's lack of judicial independence. Turkey is also infamous for its Internet censorship machine and for obscuring facts from Turkey's online historical record, including denunciations of government corruption. (See several examples of news censored on "personal rights" grounds.)

Content Removal and Blocking for Violation of Personal Rights

The Internet Law and BTK decision already compel content providers (via their local representatives) to take down online content (posts, photos, and other comments) that a user claims violates their personal rights. If the content providers cannot be reached, the hosting provider must comply with such requests instead. This provision encourages social media platforms like Facebook and Twitter, in their haste to avoid hefty fines of TRY 5 million (more than 500,000 USD), to remove perfectly legal expression. It also deputizes platforms to police speech at the behest of the government. 

Users can also directly ask the Peace Criminal Court to order the Union of Access Providers to remove content or block access within 24 hours. Under the new amendments, the Union of Access Providers (an association that unites all access providers in the country) will notify hosting, access, and content providers (via their local representatives) to comply with the court order within four hours, and in Turkish. 

This quick turnaround encourages providers to remove legal speech to avoid steep fines. Turkey's lack of an independent judiciary and its refusal to respect due process standards create fertile ground for meritless court orders. The result can be the removal of speech and the silencing of voices that deserve to be heard, including those denouncing government corruption or other misconduct. 

Companies should push back against orders that are inconsistent with the permissible limitation test under international human rights law. In addition to legal pressure and hefty fines, the UN Special Rapporteur on Free Expression expressed concern that Internet Service Providers have also “faced extralegal intimidation in certain jurisdictions, such as threats to the safety of their employees and infrastructure in the event of non-compliance.” 

Access Blocking for Violation of the Right to Private Life

Under the Internet Law and BTK decision, any person based in Turkey can ask the BTK to block access to an online publication that the person claims has violated their right to private life. The BTK can order the Union of Access Providers to comply within four hours; otherwise, social network providers face hefty fines of TRY 5 million (more than 500,000 USD). As before, this speedy deadline encourages platforms, in their haste to avoid steep fines, to block legal expression.

Further, the individual must also submit their request to the Peace Criminal Court within 24 hours. The judge must decide within a 48-hour deadline; otherwise the access blocking ends automatically. In cases of emergency, the BTK can carry out the access blocking directly upon the order of the BTK's Chairman and then submit it to the Peace Criminal Court, where a judge reviews the request retroactively within 48 hours. 

Content Removal and Blocking under International Human Rights Standards

Article 19 of the International Covenant on Civil and Political Rights allows states to limit freedom of expression only in select circumstances, provided the limitation passes a three-step test: it must be prescribed by law, pursue a legitimate aim, and be necessary and proportionate. Limitations must also be interpreted and applied narrowly. Permissible limitations, as explained by the UN Human Rights Committee, are generally content-specific; generic bans are incompatible with this test. 

Further, prohibiting a site or a dissemination system from publishing material that may be critical of the government, or of the political and social system espoused by the government, is also inconsistent with the three-step test. Nor can the test be invoked to justify “the muzzling of any advocacy of multi-party democracy, democratic tenets and human rights. Nor, under any circumstance, can an attack on a person, because of the exercise of their freedom of expression, including such forms of attack as arbitrary arrest, torture, threats to life and killing.” The UN Special Rapporteur on Freedom of Expression has gone further, recommending that states restrict content only upon a court order issued by an independent and impartial judicial authority, with due process and full compliance with the principles of legality, necessity, proportionality, and legitimacy.

When it comes to blocking, the Council of Europe has recommended that public authorities should not, through general blocking measures, deny access by the public to information on the Internet, regardless of frontiers. The four special mandates on freedom of expression explained that:

“Mandatory blocking of entire websites, IP addresses, ports, network protocols or types of uses (such as social networking) is an extreme measure – analogous to banning a newspaper or broadcaster – which can only be justified in accordance with international standards, for example where necessary to protect children against sexual abuse.” 

Blocking of websites is always an inherently disproportionate measure under international human rights law. It leads to over-blocking, false positives, and false negatives; causes serious interference with the Internet's infrastructure; reduces Internet traffic speed; and does not solve the underlying problem.

Press Freedom Crisis in Turkey and Podcasts as Alternative Channels

Amidst the state capture of the media, the Internet plays a pivotal role. As we've said, journalists, academics, and writers who criticize the government risk criminal prosecution and harassment. Turkish citizens increasingly experience social and economic problems, too; the Turkish lira has hit record lows against the U.S. dollar. In this atmosphere, Turkish citizens find it hard to obtain newsworthy information in a neutral, objective manner, or to voice their concerns. 

Podcasts have become a safe haven for the communication of ideas in Turkey. However, a recent regulation threatens this last bastion of freedom. In August 2019, the government required platforms that offer radio, television, or on-demand publication services over the Internet to obtain a license to continue operating in Turkey. Spotify was recently compelled to obtain the license to avoid access blocking within Turkey. Netflix, upon obtaining the license, faced systematic censorship on its platform. Academics predict the licensing requirement will pave the way for censorship on Spotify as well.

Tech Platforms Must Respect Human Rights

Social media companies must respect international human rights law, even when it conflicts with local laws. The UN Special Rapporteur on free expression has called upon companies to recognize human rights law, not domestic law, as the authoritative global standard for freedom of expression on their platforms. We agree. Human rights law “gives companies the tools to articulate and develop policies and processes that respect democratic norms and counter authoritarian demands.” Likewise, the UN Guiding Principles on Business and Human Rights provide that companies must respect human rights and avoid contributing to human rights violations. Companies must also “seek to prevent or mitigate adverse human rights impacts that are directly linked to their operations ..., even if they have not contributed to those impacts.” 

According to the Global Network Initiative Human Rights Assessment, Google has responded to government demands under the GNI Principles, as follows:

“First, it carefully examines the domestic law cited to assess its specific requirements and application to the particular data access or removal requested. If the law is ambiguous, Google may interpret it in a narrow manner to avoid or restrict the government’s request. Next, its practice is to apply domestic law only to content and data within the scope of the issuing jurisdiction.”

GNI also explains that Google reaches out to the relevant government entity to seek clarification on how the content is violating local laws when the removal request is unclear. This may include, for example, where the content is precisely located (specific URLs) and which portion of the content allegedly infringes the law. The GNI report also explains how Google assesses risks to users in individual jurisdictions when determining where data should be physically collected and retained: 

“The company may vary the nature of data collected or processed in specific jurisdictions based on these risks. The company also uses encryption, and limits on internal access, to mitigate risks to data that is collected and stored.”

Google's transparency report for Turkey explains that Google assesses such government demands under its community rules. Google has reported that it refused to take down content including speech by Kurdish minorities and Gezi Park protestors, as well as corruption claims involving public officials and politicians. On the other hand, Google complied with requests when the content concerned national security interests or violated its community rules. It remains to be seen whether, or to what extent, Google will comply with the social media law.

Turkey Data Protection Adequacy Standards After the Schrems Rulings

While Turkey, after a 35-year legislative process, adopted a data protection law that mirrors the previous EU Data Protection Directive, Turkey has not yet been found by the European Union to provide an adequate level of data protection. An adequacy decision would allow the transfer of personal data from the EU to Turkey (and vice versa) without further safeguards. The Turkish Data Protection Authority recently published a public announcement stating that it is preparing for negotiations with the European Commission for an adequacy decision. 

However, the European Commission has already recommended that Turkey ensure its data protection authority can act independently and that the activities of law enforcement agencies fall within the scope of the law. In its judgment on international transfers, Schrems II, the Court of Justice of the European Union set out which elements the European Commission must take into account when assessing whether a third country's legal framework provides an adequate level of protection:

“the rule of law, respect for human rights and fundamental freedoms, relevant legislation, both general and sectoral, including concerning public security, defence, national security and criminal law and the access of public authorities to personal data, as well as the implementation of such legislation…, case-law, as well as effective and enforceable data subject rights and effective administrative and judicial redress for the data subjects whose personal data are being transferred;”

The Court of Justice of the European Union's judgment in Schrems v. Data Protection Commissioner (Schrems I) also made clear that legal frameworks granting public authorities access to data on a generalized basis compromise "the essence of the fundamental right to private life," as guaranteed by Article 7 of the EU Charter of Fundamental Rights. In other words, any law that compromises the essence of the right to private life can never be proportionate or necessary. 

Government Surveillance

Turkey also adopted more than thirty decrees during its two-year state of emergency, which was extended seven times. Following the 2016 coup attempt, the Executive Branch adopted these decrees without parliamentary approval or oversight. The decrees resulted in permanent legislative and structural changes and the mass dismissal of public servants, falling short of EU human rights standards. One decree grants many unspecified institutions unfettered access to communications data without a court order. The surveillance decree was designed to be used against coup plotters and so-called “terrorist organizations.” Such unfettered power violates the rule of law and the principles of legality, necessity, and proportionality under international human rights law. The decree also compels companies to comply with BTK requests; failure to do so leads to hefty fines and the possibility that the BTK will take over an ISP's premises.  

EFF has not done a full assessment of Turkish surveillance law and practice. However, we've learned from Citizen Lab that Turkey's largest ISP, Türk Telekom (of which the Turkish government owns 30%), used deep packet inspection to redirect hundreds of users in Turkey to nation-state spyware when those users attempted to download certain apps. Citizen Lab also found that DPI was being used to block political, journalistic, and human rights content. Another leaked document revealed that Türk Telekom uses deep packet inspection (DPI) tools to spy on users and extract not only “usernames and passwords from unencrypted traffic, but also their IP addresses, what sites they'd visited and when.” These findings are just the tip of the iceberg of the real state of privacy and data protection in Turkey.

Conclusion

Turkey's poor human rights record should be a wake-up call for platforms to stand with their users and uphold international human rights law. Companies should not remove content when doing so is inconsistent with the permissible limitation test. Blocking measures, in our opinion, are always inconsistent with the necessity and proportionality principles, and companies should legally challenge such blocking orders. They should also push back strategically against pressure from the Turkish government.

Katitza Rodriguez

Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

3 weeks 3 days ago

Updated as of 11/5/2020: This blog post has been updated with a statement from Amazon regarding the pilot program described in the Jackson Free Press. You can find Amazon's response at the bottom of the page.

This is not a drill. Red alert: The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the security cameras, including Amazon Ring cameras, of participating residents. 

Since Ring first made a splash in the private security camera market, we’ve been warning of its potential to undermine the civil liberties of its users and their communities. We’ve been especially concerned with Ring’s 1,000+ partnerships with local police departments, which facilitate bulk footage requests directly from users without oversight or having to acquire a warrant. 

While people buy Ring cameras and put them on their front doors to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This serves two purposes for police. First, it allows police departments to avoid the cost of buying surveillance equipment, putting that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house. 

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police. 

Only a few months ago, Jackson stood up for its residents, becoming the first city in the southern United States to ban police use of face recognition technology. Clearly, this is a city that understands invasive surveillance technology when it sees it, and knows when police have overstepped their ability to invade privacy. 

If police want to build a surveillance camera network, they should only do so in ways that are transparent and accountable and that ensure active resident participation in the process. In the many cities that have enacted Community Control Over Police Surveillance (CCOPS) ordinances, residents, through their legislators, have more say in whether or not police may build a program like this. The choices you and your neighbors make as consumers should not be hijacked by police to roll out surveillance technologies. The decision-making process must be left to communities. 

Here is the response we received from Amazon regarding this post: "[Amazon and Ring] are not involved in any way with any of the companies or the city in connection with the pilot program. The companies, the police and the city that were discussed in the article do not have access to Ring’s systems or the Neighbors App. Ring customers have control and ownership of their devices and videos, and can choose to allow access as they wish."

Matthew Guariglia

No Police Body Cams Without Strict Safeguards

3 weeks 4 days ago

EFF opposes police Body Worn Cameras (BWCs), unless they come with strict safeguards to ensure they actually promote officer accountability without surveilling the public. Police already have too many surveillance technologies, and deploy them all too frequently against people of color and protesters. We have taken this approach since 2015, when we opposed a federal grant to the LAPD for purchase of BWCs, because the LAPD failed to adopt necessary safeguards about camera activation, public access to footage, officer misuse of footage, and face recognition. Also, communities must be empowered to decide for themselves whether police may deploy BWCs on their streets.

Prompted by Black-led protests against police violence and racism, lawmakers across the country are exploring new ways to promote police accountability. Such laws are long overdue. A leading example is the federal Justice in Policing Act (H.R. 7120 and S. 3912). Unfortunately, this bill (among others) would expand BWCs absent necessary safeguards. We respectfully recommend amendments.

Necessary BWC safeguards

Police BWCs are a threat to privacy, protest, and racial justice. If worn by hundreds of thousands of police officers, BWCs would massively expand the power of government to record video and audio of what we are doing as we go about our lives in public places, and in many private places, too. The footage might be kept forever, routinely subjected to face surveillance, and used in combination with other surveillance technologies like stationary pole cameras. Police use of BWCs at protests could discourage people from making their voices heard. Given the many ongoing inequities in our criminal justice system, BWCs will be aimed most often at people of color, immigrants, and other vulnerable groups. All of this might discourage people from seeking out officers for assistance. In short, BWCs might undermine community trust in law enforcement.

So EFF opposes BWCs, absent the following safeguards, among others.

Mandated activation of BWCs. Officers must be required to activate their cameras at the start of all investigative encounters with civilians, and leave them on until the encounter ends. Otherwise, officers could subvert any accountability benefits of BWCs by simply turning them off when misconduct is imminent, or by never turning them on. In narrow circumstances where civilians have heightened privacy interests (for example, crime victims, or residents during warrantless home searches), officers should give civilians the option to deactivate BWCs.

No political spying with BWCs. Police must not use BWCs to gather information about how people are exercising their First Amendment rights to speak, associate, or practice their religion. Government surveillance chills and deters such protected activity.

Retention of BWC footage. All BWC footage should be held for a few months, to allow injured civilians sufficient time to come forward and seek evidence. Then footage should be promptly destroyed, to reduce the risks of data breach, employee misuse, and long-term surveillance of the public. However, if footage depicts an officer’s use of force or an episode subject to a civilian’s complaint, then the footage must be retained for a lengthier period. Stored footage must be secured from access or alteration by data thieves and agency employees.
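
A retention rule like this can be made mechanical. Below is a minimal Python sketch of the schedule described above; the specific day counts are invented placeholders, since the safeguard itself specifies only "a few months" and "a lengthier period."

    from datetime import date, timedelta

    # Illustrative periods only; the safeguard above does not fix exact numbers.
    DEFAULT_RETENTION = timedelta(days=90)     # "a few months"
    EXTENDED_RETENTION = timedelta(days=1095)  # "a lengthier period"

    def should_destroy(recorded_on: date, depicts_force: bool,
                       under_complaint: bool, today: date) -> bool:
        """True once footage has aged past its applicable retention period."""
        if depicts_force or under_complaint:
            retention = EXTENDED_RETENTION
        else:
            retention = DEFAULT_RETENTION
        return today - recorded_on > retention

    # Routine footage from June is past the default window by November:
    print(should_destroy(date(2020, 6, 1), False, False, date(2020, 11, 5)))  # True
    # The same footage, if it depicts use of force, must be kept:
    print(should_destroy(date(2020, 6, 1), True, False, date(2020, 11, 5)))   # False

Encoding the rule this way also makes violations auditable: footage destroyed early, or kept past its window, shows up as a policy breach rather than a judgment call.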

No face surveillance with BWCs. Government must not use face surveillance, period. This includes equipping BWCs with facial recognition technology, or applying such technology to footage from BWCs. Last year EFF supported a California law (A.B. 1215) that placed a three-year moratorium on use of face surveillance with BWCs. Likewise, EFF in 2019 and 2020 joined scores of privacy and civil rights groups in opposing any federal use of face surveillance, and also any federal funding of state and local face surveillance.

Officer review of footage. If footage depicts use of force or an episode subject to a civilian complaint, then an officer must not be allowed to review the footage, or any department reports based on the footage, until after they make an initial statement about the event. Given the malleability of human memory, a video can alter or even overwrite a recollection. And some officers might use footage to better "testilie."

Public access to footage. If footage depicts a particular person, then that person must have access to it. If footage depicts police use of force, then all members of the general public must have access to it. If a person seeks footage that does not depict them or use of force, then whether they may have access must depend on a weighing by a court of (a) the benefits of disclosure to police accountability, and (b) the costs of disclosure to the privacy of a depicted member of the public. If the footage does not depict police misconduct, then disclosure will rarely have a police accountability benefit. In many cases, blurring of civilian faces might diminish privacy concerns. In no case should footage be withheld on the grounds it is a police investigatory record.

Enforcement of these rules. If footage is recorded or retained in violation of these rules, then it must not be admissible in court. If, in violation of these rules, footage is not recorded or not retained, then a civil rights plaintiff or criminal defendant must receive an evidentiary presumption that the missing footage would have helped them. Members of a community should have a private right of action to enforce BWC rules when a police department or its officers violate them. And departments must discipline officers who break these rules.

Community control over BWCs. Local police and sheriffs must not acquire or use BWCs, or any other surveillance technology, absent permission from their city council or county board, after ample opportunity for residents to make their voices heard. This is commonly called community control over police surveillance (CCOPS). Likewise, federal and state law enforcement must not deploy BWCs absent notice to the public and an opportunity for opponents to object.

Many groups have published model BWC rules, including the ACLU, the Leadership Conference on Civil and Human Rights, the Constitution Project, and the Police Executive Research Forum. The safeguards discussed above are among the rules in some of these models.

Amending the Justice in Policing Act

We appreciate that the Justice in Policing Act’s section on federal BWCs (Sec. 372) contains safeguards discussed above. We respectfully request three amendments to the bill’s provisions on BWCs.

Federal grants for state and local BWCs. The bill provides federal grants to state and local police to purchase BWCs. See Sec. 382. But the bill’s rules on these BWCs are far weaker than the bill’s rules on federal BWCs, and lack safeguards discussed above. State and local BWCs are no less threatening to privacy, speech, and racial justice than federal BWCs. For too long, BWCs have flooded into our communities, often with federal funding, in the absence of adequate safeguards. Thus, please amend the bill to apply all of its rules for federal BWCs to any grants for state and local BWCs.

Also, please amend the bill to prohibit state and local agencies from obtaining federal grants for BWCs unless they first use a CCOPS process to obtain permission from their public and elected officials. If the residents of a community do not want their police to deploy BWCs, then the federal government must not fund BWCs in that community.

For federally funded BWCs used by state and local police, these federal rules should be a floor, not a ceiling. Thus, the bill must expressly not preempt state and local rules that ensure even more police accountability and civilian privacy than does the federal bill.

Face surveillance with BWCs. The bill allows the application of face recognition technology to footage from BWCs, provided there is judicial authorization. See Sec. 372(q)(2) & 382(c)(1)(E). But EFF opposes any government use of face surveillance, even with this limit. We especially oppose face surveillance in connection with police BWCs. Thus, please amend the bill to prohibit any equipping of police BWCs with facial recognition technology, and any application of such technology to footage from BWCs, as to both federal BWCs and federal funds for state and local BWCs.

Public access to BWC footage. For purposes of public access to federal BWC footage under the federal Freedom of Information Act (FOIA), the bill divides footage into three categories. If footage depicts an officer’s use of force, then anyone may obtain it. If footage depicts both use of force and resulting grievous injury, then such release must be expedited and occur within five days. We agree with these two rules.

The bill further provides that if footage does not depict use of force, then it may only be released with the written permission of the civilian depicted. See Secs. 372(a)(4), (j), & (l). This is a reasonable attempt to balance the accountability benefits and privacy harms of public disclosure of BWC footage. Still, we respectfully suggest a somewhat different approach. The civilian depicted should not have an absolute prerogative to veto public disclosure. Rather, the civilian’s opposition should be one factor in the larger balancing by a court of the privacy and accountability interests. Courts routinely conduct such balancing upon assertion of the FOIA exemptions for personal privacy.

For example, if footage shows an officer cussing at the mayor, the public should have access, even if the mayor believes release would be politically damaging to them. Likewise, if footage shows officers ransacking a car absent any suspicion, the public should have access, even if the police department conditioned a cash settlement with the driver on a non-disclosure agreement. In such cases, the accountability benefits of disclosure would outweigh the privacy harms. On the other hand, if footage does not depict officer misconduct, disclosure will rarely be justified.

Conclusion

The time is long past for new measures to end violence and racism in policing. But BWCs are not a panacea. Indeed, without necessary safeguards, BWCs will make the problem worse, by expanding police surveillance of our communities without improving police accountability. We urge policymakers: do not put more BWCs on our streets, unless they are subject to strict safeguards.

Adam Schwartz

The Cost of the “New Way to Message on Instagram”

3 weeks 4 days ago

If you are on Instagram, you have probably been bombarded by Instagram Stories and notifications about new features like emojis, chat themes, selfie stickers, and “cross-platform messaging” that will allow you to exchange direct messages with, and search for, friends who are on Facebook. But the insistent messages to “Update Messaging” minimize the extent of this change, which will blur the lines between the two apps in ways that might unpleasantly surprise users.

Images of notifications promoting new features when updating Messaging in Instagram

Even worse, if you choose to accept the update, you won’t be able to go back. Reading the fine print at the bottom raises questions about whether this is a mere update or an entirely new messaging system.

“When you update, the old version won’t be available.”

Image of a notification promoting new features, with fine print at the bottom: “When you update, the old version won’t be available.”

The “old version” Facebook refers to is simply the Instagram Direct messaging app. And the “new version” is a new messaging system that links Instagram and Facebook.

To put it simply, you now have “Instagram Messenger.” After the “update,” you have, in essence, Facebook Messenger inside of Instagram.  

Competition

Facebook announced their intention to merge Facebook Messenger, Instagram Direct, and WhatsApp in March 2019. The announcement promised new privacy and security features across the board, including end-to-end encryption, ephemeral messages, and reduced data retention. End-to-end encryption beyond WhatsApp hasn’t arrived yet, but cross-platform messaging between Messenger and Instagram is clearly one step in this grand plan. And its execution seems to be in step with Facebook’s custom of minimizing the extent—and invasiveness—of a business move in order to get users to click “ok.”

When Facebook acquired WhatsApp in 2014, antitrust enforcers already had concerns about what this meant for consumers. WhatsApp users had made the choice to use a service outside of Facebook, and many were concerned that this was not only an attempt to acquire the users’ data and crush competitors, but also to eventually merge WhatsApp into Facebook’s ecosystem. Facebook made promises to keep the two services operating separately, and Facebook broke those promises just two years later, with similarly unclear notice to users.

Facebook's acquisition of Instagram in 2012 came with the same story, the same concerns, and the same failure by enforcers to properly evaluate the acquisition. Now, this latest update represents another step toward merging Instagram into Facebook and making it harder to distinguish one from the other. It's no coincidence that the "updated" version of the Instagram messaging app uses not the Instagram logo or a new one, but the Facebook Messenger logo.

In the initial 2019 announcement on merging the three messaging apps, Facebook called it a step toward “interoperability,” saying, “We want to give people a choice so they can reach their friends across these networks from whichever app they prefer.” But communicating across services owned and operated by a single company isn't interoperability. If anything, Facebook isn't trying to make its messengers interoperable; it's trying to make them indistinguishable, both to users and to regulators with competition and data-sharing concerns. And, if this Instagram update is any indication, users are not getting the opportunity to make a clear, informed choice. 

Choice

Tech companies like Facebook have mastered the art of distorting choice and consent. Here, you have the choice to change to the new messaging system, but you don't have the choice to go back to the previous, non-linked version of Instagram. More importantly, hiding this detail in the fine print makes the choice unclear to users who are surrounded by notifications, pings, and banners pressuring them into the change. And remember that Facebook's name policy means this change will eventually link your messages to your real name.

This is not an update, but a new messaging system from which you can’t switch back. There is nothing especially innovative about having colors in chats, or new emojis. Facebook could have added these to the Instagram Direct messaging app without bringing Facebook Messenger cross-platform functionality along with it. These are simply features presented to artificially devalue one of the systems, distract from the real changes, and manipulate users into changing to a different system. This is not an “update,” but a forced change to blur the lines between Facebook-owned apps.

Andrés Arrieta

The Github youtube-dl Takedown Isn't Just a Problem of American Law

3 weeks 4 days ago

The video downloading utility youtube-dl, like other large open source projects, accepts contributions from all around the globe. It is used practically wherever there's an Internet connection. It's especially shocking, therefore, when what looks like a domestic legal spat–involving a take-down demand written by lawyers representing the Recording Industry Association of America (RIAA),  a U.S. industry group, to Github, a U.S. code hosting service, citing the Digital Millennium Copyright Act (DMCA), a U.S. law–can rip a hole in that global development process and disrupt access for youtube-dl users around the world.

Those outside the United States, long accustomed to arbitrary take-downs with "DMCA" in their subject line, might reasonably assume that the removal of youtube-dl from Github is yet another example of the American rightsholders' grip on U.S. copyright law. Tragically for Internet users everywhere, the RIAA was not citing DMCA Section 512, the usual takedown route, but DMCA Section 1201, the ban on breaking digital locks. And the failures of that part of American law that can allow a rightsholder to intimidate an American company into an act of global censorship are coded into more than just the U.S. legal system.

The RIAA's letter against youtube-dl cites DMCA 1201's criminalization of distributing technology that can bypass DRM: what's called the “circumvention of technical protection measures.” It also mentions German law, which contains similar language. Here's the core of the relevant U.S. statute, in 1201(b): 

1201 (b) Additional Violations.—

  (1) No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology, product, service, device, component, or part thereof, that—

    (A) is primarily designed or produced for the purpose of circumventing protection afforded by a technological measure that effectively protects a right of a copyright owner under this title in a work or a portion thereof;

    (B) has only limited commercially significant purpose or use other than to circumvent protection afforded by a technological measure that effectively protects a right of a copyright owner under this title in a work or a portion thereof; or

    (C) is marketed by that person or another acting in concert with that person with that person’s knowledge for use in circumventing protection afforded by a technological measure that effectively protects a right of a copyright owner under this title in a work or a portion thereof.

(While the law also has some important and hard-fought exceptions, they mostly apply only to using a circumvention tool, not to creating or distributing one.)

DMCA 1201 is incredibly broad, apparently allowing rightsholders to legally harass any "trafficker" in code that lets users re-take control of their devices from DRM locks.

EFF has been warning against the consequences of this approach even before the DMCA was passed in 1998. That's because DMCA 1201 is not the first time the U.S. considered adopting such language. DMCA 1201 is the enactment of the provisions of an earlier global treaty: the World Intellectual Property Organization (WIPO)'s Copyright Treaty of 1996. That treaty's existence is itself largely due to American rightsholders' abortive attempt to pass a similar anti-circumvention proposal devised in the Clinton administration's notoriously pro-industry 1995 White Paper on Intellectual Property and the National Information Infrastructure.

Stymied at the time by campaigns by a coalition of early Internet users, librarians, technologists, and civil libertarians in the United States, supporters of U.S. rightsholders laundered their proposal through the WIPO, an international treaty organization controlled by enthusiastic intellectual property maximalists with little understanding of the fledgling Net. The Clinton White Paper proposals failed, but the WIPO Copyright Treaty passed, and was later enacted by the U.S. Senate, smuggling back the provisions that had been rejected years before.

Since 1996, over 100 countries have signed onto the WIPO Copyright Treaty. The Treaty itself uses notably less harsh language in what it requires from its signatories than the DMCA. It says, more simply:

Contracting Parties shall provide adequate legal protection and effective legal remedies against the circumvention of effective technological measures that are used by authors in connection with the exercise of their rights under this Treaty or the Berne Convention and that restrict acts, in respect of their works, which are not authorized by the authors concerned or permitted by law.

But rightsholders ratcheted up the punishments and scope of the Treaty when it was incorporated in U.S. law. 

Most countries adopted the far stronger DMCA 1201 language in their own implementations. That was partly because the U.S. was one of the earliest adopters, and it's much easier to simply copy-and-paste another nation's implementation than craft your own. But it's also because it has been the continuing policy of the United States Trade Representative to pressure other countries to mirror the DMCA 1201 language, either through diplomatic lobbying, or by requiring it as a condition of signing trade agreements with the U.S.

DMCA 1201 has been loaded with terrible implications for innovation and free expression since the day it was passed. For many years, EFF documented these issues in our "Unintended Consequences" series; we continue to organize and lobby for temporary exemptions to its provisions for cellphone unlocking, restoring vintage videogames, and similar fair uses, and to file and defend lawsuits in the United States to try to mitigate its damage. We look forward to the day when it is no longer part of U.S. law.

But due to the WIPO Copyright Treaty, the DMCA’s anti-circumvention provisions infest much of the world's jurisdictions too, including the European Union via the Information Society Directive 2001/29/EC, which stipulates:

Member States shall provide adequate legal protection against the manufacture, import, distribution, sale, rental, advertisement for sale or rental, or possession for commercial purposes of devices, products or components or the provision of services which:

(a) are promoted, advertised or marketed for the purpose of circumvention of, or

(b) have only a limited commercially significant purpose or use other than to circumvent, or

(c) are primarily designed, produced, adapted or performed for the purpose of enabling or facilitating the circumvention of, any effective technological measures.

The EU directive already mirrors the worst of U.S. law in that it apparently prohibits the possession and distribution of anti-circumvention components (the kind of language that led to the ridiculous spectacle, in the 2000s, of legal threats against anyone who posted the DeCSS algorithm online). Transpositions into domestic European law, and their domestic interpretations, have had the opportunity to make it even worse. 

Fortunately, this time, hosts and developers in Germany were confident enough in their rights under German law to reject the RIAA's takedown demands. But if rightsholders' organizations continue to misuse the provisions of the Copyright Treaty to go after tools like youtube-dl in yet more countries, they will have to be fought in every country, under the terms of each country's version of the WIPO anti-circumvention provisions.

EFF has a long-term plan to beat the anti-circumvention laws, wherever they are, which we call Apollo 1201. But we need help from a global movement to finally revoke this ongoing attack on the world's creators, innovators, and consumers. You can do your part by examining and understanding your own country's anti-circumvention provisions, and by preparing and organizing for the moment when your local RIAA comes knocking on your door.

Danny O'Brien

Now and Always, Platforms Should Learn From Their Global User Base

3 weeks 5 days ago

The upcoming U.S. elections have drawn broad attention to many of the questions with which civil society has struggled for years: what should companies do about misinformation and hate speech? And what, specifically, should be done when that speech comes from the world's most powerful leaders?

Silicon Valley companies and U.S. policymakers all too often view these questions through a myopic lens, focusing on troubles at home as if they were new—when there are countless lessons to be learned from their actions in the rest of the world. The U.S. is not the first nation to grapple with election-related misinformation, nor are U.S. elections the only occasions on which the major platforms have had to deal with it.

When false news leads to false takedowns

As we noted recently, even the most well-meaning efforts to control misinformation often have the effect of silencing key voices, as happened earlier this year when Facebook partnered with a firm to counter election misinformation emanating from Tunisia. At the time, we joined a coalition in asking Facebook to be transparent about its decision-making and to explain how it had mistakenly identified some of Tunisia's key thought leaders as bots—but we're still waiting for answers.

More recently, Nigerians using Instagram to participate in the country's #ENDSARS movement found their speech removed without warning—again, a victim of overzealous moderation of misinformation. Facebook, which owns Instagram, partners with fact-checkers to counter misinformation—a good idea in theory, but in practice, strong independent oversight seems increasingly necessary to ensure that mistakes like this don't become routine.

Dangerous speech

Many observers in the U.S. fear violence in the event of a contested election. Social media platforms have responded by making myriad policy changes, sometimes with little clarity or consistency and, for some critics, with too little meaningful effect. For example, Twitter announced last year that it would no longer serve political ads, and in May, after years of criticism, the company began labeling tweets from President Trump that contained misinformation.

Meanwhile, social media users elsewhere are subjected to even more dangerous misinformation from their countries' leaders, with little or slow response from the platforms that host it. And that inaction has proven dangerous: in the Philippines, where politicians regularly engage in disinformation on social media platforms (and where Facebook is the most popular virtual space for political discourse), a phenomenon called "red tagging" has emerged, in which alleged communists are falsely named on a list put out by the country's Department of Justice and circulated on social media. Although the Philippine DOJ rolled back many of the accusations, they continued to circulate—with one recent incident ending in violence against one of the accused.

We support the right of platforms to curate content as they see fit, and it is understandable for them to want to remove violent incitement—which can cause real-world harm rapidly if allowed to proliferate, particularly when it comes from public figures and politicians. But if they are going to take those steps, they should do so consistently. While the media has spent months debating whether Trump’s tweets should be removed, very little attention has been paid to those places in which violent incitement from politicians is resulting in actual violence—including in the Philippines, India, Sri Lanka, and Myanmar.

Of course, when the violence is coming from the state itself, the state cannot be trusted to mitigate its harms—which is where platforms can play a crucial role in protecting human rights. Former Special Rapporteur on Freedom of Expression David Kaye proposed guidelines to the UN General Assembly in 2019 for companies dealing with dangerous speech: any decisions should fit the framework of necessity and proportionality, and should be made within the context of existing human rights frameworks.

There are numerous projects researching the impact of violent incitement on social media. The Dangerous Speech Project focuses on speech that has the ability to incite real-world violence, while the Early Warning Project conducts risk assessments to provide early warnings of where online rhetoric may lead to offline violence. There is also ample research to suggest that traditional media in the U.S. is the biggest vector for misinformation.

A key lesson here is that the current strategy of most Silicon Valley platforms—that is, treating politicians as a class apart—is both unfair and unhelpful. As we’ve said before, politicians’ speech can have more severe consequences than that of the average person—which is why companies should apply their rules consistently to all users.

Companies must listen to their users...everywhere

But perhaps the biggest lesson of all is that companies need to listen to their users all over the world and work with local partners to look for solutions that suit the local context. Big Tech cannot simply assume that ideas that work (or don't work) in the United States will work everywhere. It is imperative that tech companies stop viewing the world through the lens of American culture, and start seeing it in all its complexity.

Finally, companies should adhere to the Santa Clara Principles on Transparency and Accountability in Content Moderation and provide users with transparency, notice, and appeals in every instance, including misinformation and violent content. With more moderation inevitably comes more mistakes; the Santa Clara Principles are a crucial step toward addressing and mitigating those mistakes fairly.

Jillian C. York

When Academic Freedom Depends on the Internet, Tech Infrastructure Companies Must Find the Courage to Remain Neutral

3 weeks 5 days ago

And universities must stand up for the rights of their faculty and students.

During the past eight months of the pandemic, we have collectively spent more time online than ever before. Many of us are working and/or learning from home, and staying in touch with friends and family through social media and other proprietary services.

Universities are no exception. Given the risks of in-person meetings, universities are relying on online services to fulfill traditional educational functions—not just "remote" classes, but also the critical work of providing forums for controversial speech. But while companies like Zoom are happy to take university dollars, they have refused to support one of the foundational principles of the university mission: academic freedom.

In the past month, Zoom has refused to support several events at three universities, ostensibly because one of the speakers, Leila Khaled, participated in two plane hijackings 50 years ago and is today associated with a Palestinian group—the Popular Front for the Liberation of Palestine, or PFLP—that the U.S. government has labeled a terrorist organization.

It all began on September 23, 2020, when Zoom blocked a San Francisco State University online classroom event featuring Khaled and prominent activists from Black and South African liberation movements and Jewish Voice for Peace, part of a two-part series focusing on gender and sexual justice in Arab, Muslim, and Palestinian communities. Facebook and YouTube followed suit.

An uproar followed, but Zoom insisted it had simply enforced its terms of service, which include a promise to uphold "anti-terror laws." When an organization supporting an academic and cultural boycott of Israel called for October 23 online protests of the SFSU cancellation, Zoom pulled service from events at New York University and the University of Hawaii that were also to have featured Khaled.

This follows an incident in June when Zoom canceled accounts and shut down conference calls between activists in the U.S. and China regarding the annual June 4 Tiananmen Square Massacre commemoration. In that case, Zoom cited Chinese law as requiring the censorial actions.

Improper takedowns are nothing new to anyone familiar with private censorship online. But three things are particularly disturbing here.

First, Zoom is at the infrastructure layer of the Internet speech stack—like an ISP—but is choosing to take on the moderation role more commonly, and more appropriately, reserved for technologies at the user end, like social media. In a moment when people around the world are depending on Zoom to learn, work, and organize, that should be terrifying. Universities have built their entire curricula around Zoom classes and have little leverage when Zoom says, essentially, "cancel the event or we'll cancel our contract with you." Imagine this same event occurring in the offline world: a university theater in a building rented from a private owner, where few other adequate spaces are available. If the owner demanded cancellation on pain of losing the lease altogether, we'd be appalled, and rightly so. In the midst of a pandemic, Zoom did the equivalent.

Infrastructure-level takedowns are always worrisome. Conferencing services are just one of many types of intermediaries upon which online speech depends. Others include domain name registrars, certificate authorities (such as Let's Encrypt), content delivery networks (CDNs), email services, and ISPs. EFF has a handy chart of some of those key links between speakers and their audience here.

These infrastructure companies are ill-suited to consider and balance the consequences their decisions may have. Many have only the most tangential relationship to their users; faced with a complaint, a takedown will be much easier and cheaper than a nuanced analysis of a given user's speech. Infrastructure takedowns also represent a dramatic departure from the expectations of most users. If users have to worry about satisfying not only their host's terms and conditions but also those of every service in the chain from speaker to audience—even though the actual speaker may not even be aware of all of those services, or of where each draws the line between hateful and non-hateful speech—many users will simply avoid sharing controversial opinions altogether.

More broadly, infrastructure-level takedowns move us further toward a thoroughly locked-down, highly monitored web, from which a speaker can be effectively ejected at any time, without any path to address concerns prior to takedown.

The firmest, most consistent defense these potential weak links can mount is to simply decline all attempts to use them as a control point. They can act to defend their role as a conduit, rather than a publisher. And just as law and custom developed a norm that we might sue a publisher for defamation, but not the owner of the building the publisher occupies, we are slowly developing norms about responsibility for content online. Companies like Zoom have an opportunity to shape those norms—for the better or for the worse.

Second, in this case, Zoom has apparently decided to adopt the major social media platforms' erroneous approach to speech connected with groups targeted by U.S. antiterrorism laws, even though no court has held that hosting such speech violates those laws (a whitepaper we co-authored last year explains the legal situation). Given who else is on the "prohibited" list—including the Lebanese political party Hezbollah, which holds seats in that country's parliament, and the Kurdistan Workers' Party, which has fought against the terror group ISIS—that broad approach inevitably sweeps up all kinds of speech, stifling conversations, peace initiatives, and education when they are most desperately needed. Moreover, it means private companies are signing up to serve as government censors, taking action without legal process. That choice was especially egregious here, given that Khaled had not even intended to speak about her role as a member of the PFLP.

Last, but far from least, while all private censorship implicates free expression, Zoom's decision to block this speech, in this context, is also an attack on academic freedom. Academics are up in arms about these takedowns, and rightly so. They are demanding that their universities find and use conference and webinar providers that will uphold academic freedom. These responses include multiple letters of protest and a video of faculty and students reading Khaled's intended statement, which highlights the absurdity of treating a speech at a university as equivalent to terrorist activity. The universities in question were doing what higher education always does: providing a space for faculty, students, and the public to learn about and discuss all kinds of views, including controversial opinions. As the president of the American Association of University Professors put it: "Academic freedom means that both faculty members and students can engage in intellectual debate without fear of censorship or retaliation." Zoom shut down such a debate, and in so doing made it clear that it cannot be trusted to be a partner for higher education.

Particularly now, when so much intellectual debate depends on Internet communication, we need Internet services willing to let that debate happen. And if those services don't exist, now would be a good time to create them—and for universities to commit to using them. University budgets are pressed more than ever, but no university dollars should go to providers that won't support core academic values. That, in turn, could be an opportunity for service providers—offer a real alternative, and you'll have a ready customer base.

To be clear, neither the Internet nor higher education has ever been fully free or open. But, at root, the Internet still represents and embodies an extraordinary idea: that anyone with a computing device can connect with the world, anonymously or not, to tell their story, organize, educate, and learn. And academic freedom still represents an equally important idea: that "the common good depends upon the free search for truth and its free exposition." These takedowns, at this time, threaten both. All of the companies involved, but especially Zoom, should be ashamed. Other companies should take heed, and offer alternatives.

Corynne McSherry