Podcast Episode: Fixing a Digital Loophole in the Fourth Amendment

Episode 003 of EFF’s How to Fix the Internet

Jumana Musa joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss how the third-party doctrine is undermining our Fourth Amendment right to privacy when we use digital services, and how recent court victories are a hopeful sign that we may reclaim these privacy rights in the future.

In this episode you’ll learn about:

  • How the third-party doctrine is a judge-created legal doctrine that impacts business records about you held by companies, including metadata such as what websites you visit, who you talk to, your location information, and much more;
  • The Jones case, a vital Supreme Court case that found that law enforcement can’t use continuous location tracking with a GPS device without a warrant;
  • The Carpenter case, which found that the police must get a warrant before accessing cell site location information from a cell phone company over time;
  • How law enforcement uses geofence warrants to scoop up the location data collected by companies from every device that happens to be in a geographic area during a specific period of time in the past (a minimal sketch of such a query appears after this list);
  • How getting the Fourth Amendment right is especially important because it is part of combatting racism: communities of color are more frequently surveilled and targeted by law enforcement, and thus slipshod legal standards for accessing data have a disproportionate impact on those communities;
  • Why even a warrant may not always be an adequate legal standard, and why there are circumstances in which accessing business records should require a “super warrant” – meaning law enforcement could access the data only when investigating a limited set of crimes, and only if the data would be important to investigating that crime.
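
To make those geofence mechanics concrete, here is a minimal sketch, in Python, of the kind of query such a warrant implies: filter a provider's location-history records down to every device seen inside a bounding box during a time window. The records, field names, and coordinates below are invented for illustration; this is not any provider's actual schema or code.

    from datetime import datetime

    # Hypothetical location-history rows: (device_id, latitude, longitude, timestamp).
    # A real provider holds billions of rows like these, covering millions of users.
    location_history = [
        ("device-a", 38.8895, -77.0353, datetime(2020, 6, 1, 14, 5)),
        ("device-b", 38.8900, -77.0350, datetime(2020, 6, 1, 14, 30)),
        ("device-c", 40.7128, -74.0060, datetime(2020, 6, 1, 14, 10)),
    ]

    def geofence_query(records, lat_min, lat_max, lon_min, lon_max, start, end):
        """Return every device observed inside the box during the window.
        Note that the query names no suspect: it sweeps in everyone present."""
        return sorted({
            device
            for device, lat, lon, ts in records
            if lat_min <= lat <= lat_max
            and lon_min <= lon <= lon_max
            and start <= ts <= end
        })

    # A box a few hundred feet on a side, over a two-hour window:
    print(geofence_query(location_history,
                         38.8885, 38.8905, -77.0360, -77.0345,
                         datetime(2020, 6, 1, 13, 0),
                         datetime(2020, 6, 1, 15, 0)))
    # ['device-a', 'device-b']

Note what even this toy version implies: to answer a narrow request, the provider has to scan its entire location-history table, a point the interview returns to below.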

Jumana Musa is a human rights attorney and racial justice activist. She is currently the Director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers. As director, Ms. Musa oversees NACDL's initiative to build a new, more durable Fourth Amendment legal doctrine for the digital age. The Fourth Amendment Center educates the defense bar on privacy challenges in the digital age, provides a dynamic toolkit of resources to help lawyers identify opportunities to challenge government surveillance, and establishes a tactical litigation support network to assist in key cases. Ms. Musa previously served as NACDL's Senior Privacy and National Security Counsel.

Prior to joining NACDL, Ms. Musa served as a policy consultant for the Southern Border Communities Coalition, a coalition of over 60 groups across the southwest that address militarization and brutality by U.S. Customs and Border Protection agents in border communities. Previously, she served as Deputy Director for the Rights Working Group, a national coalition of civil rights, civil liberties, human rights, and immigrant rights advocates where she coordinated the “Face the Truth” campaign against racial profiling. She was also the Advocacy Director for Domestic Human Rights and International Justice at Amnesty International USA, where she addressed the domestic and international impact of U.S. counterterrorism efforts on human rights. She was one of the first human rights attorneys allowed to travel to the naval base at Guantanamo Bay, Cuba, and served as Amnesty International's legal observer at military commission proceedings on the base. You can find Jumana on Twitter at @musajumana.

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify, or your podcast player of choice. You can also find the MP3 of this episode on the Internet Archive, and embedded below. If you have any feedback on this episode, please email podcast@eff.org.

[Embedded audio player: https://archive.org/embed/eff-podcast-closing-a-digital-loophole-in-the-fourth-amendment. Privacy info: this embed will serve content from archive.org.]

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

Resources

Third-Party Doctrine & Metadata

Third-Party Doctrine and DNA/Genetic Privacy

SCOTUS Cases and Decisions re. Third Party Doctrine

Cases re. Location Data, Privacy, and Warrant Requirements

Black Lives Matter, the 4th Amendment, and Surveillance

Transcript of Episode 003: Fixing a Digital Loophole in the Fourth Amendment

Danny O'Brien:
Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now, problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of Internet law.

Cindy Cohn:
Hi, everyone. I'm Cindy Cohn. I'm a lawyer, and I'm the Executive Director of the Electronic Frontier Foundation.

Danny O'Brien:
I'm Danny O'Brien. I'm also at EFF, and I guess I'm the opposite of a lawyer, whatever that is. Without giving anything away, I hope, the focus of this week's episode is how to fix the third-party doctrine. While not everyone even knows what the third-party doctrine is, I can absolutely declare that when I learned about it, the very first thing I thought was, "Wow, this really needs to be fixed," and yet here we are.

Cindy Cohn:
Oh, yes. We'll go into this in much more detail with our guest. But briefly, the third-party doctrine is why courts have held that you have no Fourth Amendment protections in your metadata when it's held by a third party, like your phone company or your bank.

Danny O'Brien:
Or a tech company, like Facebook, Google, or, of course, Amazon, which has a lot of metadata about me.

Cindy Cohn:
Yes, exactly. So, again, it's not the content, but it's all the other stuff, which is things like who you talk to, the websites you visit, where you are when you visit them, and how long you were there.

Danny O'Brien:
Okay. Now pretend I know nothing, and all my civics lessons at school were solely about the Magna Carta and the treacherousness of Americans. What are these Fourth Amendment protections of which you speak, Cindy?

Cindy Cohn:
Well, my British friend, I'm tempted to cue King George in Hamilton right now, because that's kind of what you sound like. But the Fourth Amendment governs your privacy relationship with the government and specifically law enforcement's right to grab you, and for us here today, it also governs when they get to dig through your stuff. It requires the cops to go before a judge and get a warrant and show probable cause in order to get permission to do so, and they only get to do so for some very serious crimes. The third party doctrine suspends your Fourth Amendment rights when it comes to your metadata. But clearly the person you need to talk to is our guest, Jumana Musa.

Danny O'Brien:
Jumana is the Director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers. The Fourth Amendment Center provides materials, training, and direct assistance to defense lawyers who are handling cases involving new surveillance tools, technologies, and tactics in order to create a new legal doctrine that protects constitutional rights in the digital age.

Cindy Cohn:
Jumana, thanks so much for joining us. So tell us more about the third-party doctrine and how it relates to the Fourth Amendment and why it's such a priority for you folks at the National Association of Criminal Defense Lawyers.

Jumana Musa:
Well, thank you for having me on. I want to wish EFF a happy 30th birthday. I'm thrilled to be able to do this in the context of this particular milestone for all of you. I think EFF for so long has been at the forefront of this issue, which even before people sort of recognized it as a fundamental issue, the idea of what happens with these advances in technology, how do they impact people's privacy rights, and so congratulations to you all for this milestone.

Jumana Musa:
So why do we care about the third-party doctrine? I guess in a nutshell, I will say it like this. We are now at a place where, because of the way things have been digitized, because of the technology that we rely on in our day-to-day life, law enforcement is able to investigate people, to accumulate information, and to utilize that kind of data and information against people in ways they've never been able to before.

Jumana Musa:
The issue with that is whereas previously if law enforcement decided they wanted to know where John Doe was going on any given day or to follow them to see, were they involved in X, Y, or Z crime. They would actually have to go through the process of thinking about, "Is this serious enough? Do we want to expend the resources? Do we have enough people on the force to put two or three or four officers on this to follow them around constantly 24/7?," whereas now all they need to do sometimes is just requisition a company and say, "Can we have all the records of where John Doe has been?" or "Can we just put something on their card? Can we just find another way of doing this?", where the technology has made it so easy for this information to both be utilized, to be scanned, to be sort of put together all kinds of different ways that it almost makes the Fourth Amendment moot, which is supposed to be not the sort of ...

Jumana Musa:
I know people always think of the Constitution as your affirmative rights, like my right to privacy. But what it really is, it's a restriction on state power, and it's supposed to be the thing that protects you against a government who just says, "I could just decide that I want to know what John Doe or Jane Doe is up to, and I kind of feel like they're up to no good. So I'm just going to fish through everything until I can find something to pin on them." It's what we used to call a general warrant, right? Which was the idea that you're just going to pick somebody and search everything until you can find something to pin on them. That is almost the state of affairs when you look at the amount of data that comes from all the technologies and all the different surveillance tools that are out there.

Cindy Cohn:
So the third-party doctrine is a judge-created idea, created by the Supreme Court, that certain information that you have, or that is about you, is placed outside of the protection of the Fourth Amendment. The argument is that because you've given this information, or this information is being held not by you, but by someone else, it loses the constitutional protection. But right now, we're living in a time, between cloud computing and our phones and the way we live our lives, when some very, very detailed information about us is held by third parties and is subject to the doctrine, everything from the telephone records to the websites you visit online to what you read online to the books you read if you use a Kindle or Audible.

Cindy Cohn:
Your ISP has metadata, too. So, it's not just when you go to read your Gmail, but it's the ISP that hosts you on the way. It also can include your car, if your car is connected, and the Internet of Things. If you've got a smart refrigerator, what your refrigerator knows about you could be subject to the third party doctrine. It's just a huge amount of information, and it can reveal extremely sensitive information about you and your loved ones and your community, which is why it's on the top of our list.

Danny O'Brien:
So Jumana, just to clarify for me, so all of this data that's stored by third parties is now stripped of its Fourth Amendment protections. Is there any kind of block there? Is there any protection, once that goes away? You don't have to apply for a warrant anymore, but do the companies have ways of saying you only get this data?

Jumana Musa:
In theory, there's some restrictions and guardrails. In reality, they just don't always come through, and even with a warrant, I will say that is true. The reason for that is this. I think there are times, particularly with warrants ... Law enforcement goes before a magistrate, and they say, "This is what I need." They may not always be clear on what they're asking, or they may just get such a broad warrant, because the magistrate may not fully comprehend what it is they're being asked for.

Jumana Musa:
So to give an example, there was a period of time where law enforcement was using devices called ... Well, people commonly call them stingrays. They're cell site simulators. Essentially, they act like a cell tower. So it's a device that could actually get all the cell phones in an area to, instead of going straight to the cell tower to get a signal, route first through this device that would help law enforcement locate you.

Jumana Musa:
They were going to magistrates and saying, "We need a pen register warrant," which is basically ... A pen register is like you go to the phone company and say, "I want all the to and from numbers, every number that this phone has dialed and every number that has been dialed into this phone." That's a very different thing than a stingray, which even has the opportunity to take the content of calls, right? But they were sort of hiding that information.

Jumana Musa:
So they may be hiding the information, or you may have people sign off on warrants where they say, "Of course. You can take all the devices and search everything," and they sign off on that warrant, right? So even though there's a warrant, it is so broad that it should be impermissible. So I think that's one factor, even with a warrant.

Jumana Musa:
When it comes to companies and records, there is broad leeway in terms of the types of records that people can get with a subpoena. There are opportunities for companies to push back and say, "I think this is too broad. I don't want to do this." But there's a lot involved in that, in terms of making that call, how far do you push it, the question of what's the reason they're being asked for it. It puts companies in a very difficult position to be the ones defending the sort of privacy rights of the person who is likely not even aware that this search is happening.

Cindy Cohn:
So just to clarify a little bit and lift this up a little bit, we think warrants are needed for this kind of metadata information, but law enforcement is able to get that information through legal processes, like subpoenas and other things. The problem is that that's just too low a standard and often gets abused. So we think that moving metadata up into the category where a warrant is required, and I think both Jumana and I are concerned that even the warrant standard is too low sometimes, but moving it from the subpoenas, which you get by pushing print on a printer, to a warrant, where you actually have to go in front of a judge, is an important step along the way to protecting your privacy.

Cindy Cohn:
So I want to talk a little bit about some of the more recent things we're seeing. I specifically wanted to ask you about these things we're seeing called geofence warrants, Jumana, because I think they're particularly troubling, and they're troubling not just from a Fourth Amendment context, but I think also from a First Amendment free speech context as well.

Jumana Musa:
Absolutely. So we have been involved in geofence cases at the Fourth Amendment Center, and I think people don't fully understand the way in which their information is being utilized. So to give people a sense of what we're calling a geofence warrant: it is when there's a crime that law enforcement is investigating. Somebody stole widgets from a factory, and in order to investigate this crime, they're trying. They're looking. They have no leads. They have no suspects, and they have no avenue towards a lead or a suspect.

Jumana Musa:
So what they do in that moment is they say, "Okay, we're going to go to Google, and we're going to say, 'Please tell us all of the phones that have connected in this geographic region, say 150, 250 feet, within this hour or two hour-span of time.'" So that maybe sounds not invasive, but you could actually go on our website. We have a series of documents in the Chatrie case, which is one that we've been working on.

Jumana Musa:
In one of them, Google actually filed an affidavit where they said that in order to go through that process, in order to figure out what phones may have been in this small geographic area in this couple-of-hour timeframe, they first have to search numerous tens of millions of records. So the first step in this process is to actually search across their entire location history database, of all of the people who have connected anywhere, to be able to identify who's been connected in that one geographic area.

Danny O'Brien:
Just to heighten this, right, so you talked a bit about general warrants, which I understand King George did, and I'm very sorry about that. But the difference here is that the Fourth Amendment warrant is aimed at a particular ... It's specific to a particular person, and that's to try and stop this fishing expedition idea. But when you talk about geofencing, if someone was to use this geofencing warrant, say, at a protest, right, that would mean that they would be essentially scooping up the identities of everyone who was at that protest, right?

Jumana Musa:
Absolutely. So, I mean, I think there's two different ways that people can get everybody at a protest, right? In this context, I think the first step is already you have to search numerous tens of millions of records to figure out who's connected within that timeframe. But in the context of a protest, you're absolutely correct in the sense that they can say, "This thing happened, and that was a serious crime," whatever the thing is. "In order to charge a serious crime, we need to identify who was there," and to identify who was there, I will also tell you, in the context of this back and forth with Google, they're supposed to hand over information that is anonymized and then go through a back and forth with law enforcement to get to a place where they may de-anonymize a small number of people.

Jumana Musa:
But having seen it up close, the anonymization is not so anonymous, and the idea that you can go and get the information of everybody who's connected in the context of a demonstration because somebody may have burned something or something may have been vandalized is extremely concerning, because that's a hugely powerful tool that can be really dissuasive to people who are feeling like they should be able to go out and exercise their First Amendment rights for whatever it is.

Cindy Cohn:
Yep. I think that's right. Well, our goal today is to talk about fixing the Internet and how we fix things. So let's switch a little bit our focus. I want to talk a little bit about, you mentioned earlier that we're chipping away at the third party doctrine. I actually even started out this by saying that I was quite confident that we were going to chip it down even further in the next few years. So where are we in terms of what the third party doctrine reaches right now and where we've won some victories?

Jumana Musa:
We've seen it come up in a few different ways, and it's sort of evolving. So the three cases that we always talk about are the Jones case, which was 2012, where essentially what they were looking at was, they did get a warrant to put a tracker on someone's car. They had ten days to get the tracker on the person's car. They didn't put it on until the eleventh day, so you're already outside of the window of the warrant, and then they left it on there for 28 days.

Jumana Musa:
Part of the argument is, "Well, the car was out driving around on public roads. That is not private. You don't have a right to privacy on public roads. Anybody can see you." That is certainly true. At the same time, what the court found was doing it outside this window meant you were outside of the warrant, and you did it for 28 days, and you do have an interest in your location over time, because that is very revealing, right? That's one of the things the court came to. In the majority opinion, which was unanimous, they were very focused on the trespass of having put the tracker on the car.

Jumana Musa:
So if we fast-forward a couple of years, there was another case which was not location tracking, but it was a question of the amount of data that is gathered with digital devices, and that's the Riley case in 2014. What that case basically said, at the end of the day, there used to be the idea that if you're arresting someone, maybe you stop the car, you decide you saw contraband, something happened, you're now arresting the person who was driving the car.

Jumana Musa:
What this case was about was the idea that if you arrest someone in that scenario, can you then open their phone and start to go through their phone? This is when smartphones are really starting to be widely used. What the court said is no, that is not the same thing. It is not a container. In fact, it contains all of the privacies of life. It has your emails and your photos and all this other information. As such, it is treated differently. So that was sort of the next step.

Jumana Musa:
The most recent stuff we've seen is the Carpenter case in 2018. So this case was a case where they were trying to tie people to a series of robberies, and they went through and looked for their historical cell site location information. So what that means is everywhere you go with your smartphone, it pings off of towers. It pings off of all kinds of things and creates a little digital trail of where you've been. It's not exactly where you've been. It doesn't say, "You were exactly in the spot, and then you walked ten steps over here," but it can locate you over time.

Jumana Musa:
The argument was, this was third party records, right? I mean, this is the phone company's records. You don't have an interest in that. There's no privacy interest. So what the court found in that was actually, you do, and they did not say there was no longer a third party doctrine. They said there is. It just doesn't apply here. So basically what they're saying is tracking you all over the place gives a lot of information about your very personal things. If you worship, it will say where you've been, what kind of doctor you've been to, if you go to AA meetings. It can locate you at a lot of sensitive places.

Jumana Musa:
But one of the arguments that was being made was, "Well, the technology back at the time this case happened wasn't that precise. It only could generally locate people." But the court said, "We hear that, but it's already better, and it's only going to get better. So the idea that we're going to sort of decide this, looking back at the old technology, is not of use to us."

Cindy Cohn:
I think that's exactly right. So when we think about the third party doctrine, I think we're making great strides in terms of protecting your location, especially your historical location over time. We're taking strides to say that just because you have a phone in your hand doesn't mean everything that's on that phone, and everything you can get through that phone, like going to Facebook or any of those kinds of things, is available to law enforcement. Then we've got both the cell phone towers and the car case to indicate this idea that where you travel over time should be protected. So that's what I mean, I think, when we talk about when we're chipping away at it.

Cindy Cohn:
So let's fast forward. We're into, now we're fixing it. So what's the world going to look like if we fix the third party doctrine, Jumana? How is my world going to change? How are your clients' worlds going to change? How does a protestor who wants to go out in the street ... How's our world going to be better if we fix this thing?

Jumana Musa:
So I think we're going to be better because we are going to reclaim some of our anonymity, right? I don't think that's something that people think about consciously, but part of it is if I just go walk down the street and I'm not in my neighborhood where everybody might know me, I might run into someone I know, but I might not see anybody I know, right? I could just be wandering down the street, looking in windows, looking at other people, thinking about life, doing whatever I'm doing. Nobody necessarily knows where I am.

Jumana Musa:
Historically, that's how it's been, right? You just walk off somewhere. Unless you physically run into somebody, there isn't necessarily a thought of where you are, and clearly that's not going to be possible in the digital age, where it's comprehensively like that. But to get some measure of that back, of that sort of anonymity, that control over your location, your movement, your idea of privacy from the government I think is really critical.

Jumana Musa:
So sort of looking forward, what does it look like? It looks like restricting government from being able to access these things writ large. I know sometimes people talk about, "Get a warrant." I've often said, "I know we say that, and it's great when at least they get a warrant, because there is that place where at least there's a judge or a magistrate," because the magistrate honestly doesn't actually have to be a judge in every state. It's not the same, but they may just have to have a college degree, right? So I don't want to make assumptions. But there is at least a person that may stand in the way and say, "Wait a minute, this doesn't look right. This looks too broad. You have to scale this back."

Danny O'Brien:
One of the visions I have for the future that is different from where we are now is that I feel that people have a generalized blanket anxiety about the data that they're giving to companies, and I think part of that anxiety comes from not knowing what's going to happen to it. I think one of the protections that a warrant gives you is you don't feel like data is going to be dug up on you if you're innocent or an innocent passerby, and I would like some clarity in the law that surrounds me that that isn't going to be the case.

Jumana Musa:
Well, coming from where I'm coming from, I'm going to say just because they're digging up the data, that doesn't speak to your innocence or non-innocence at all, right? It just speaks to their desire to investigate it. But I think that's true. I think that's very true, and I think we have sort of competing problems. One is it is hard to know just how much of your data is being gathered, right? I mean, I think some people who are deep in the weeds may have a really good sense. Most people don't really know, and I think when you compound that with the fact that there aren't really laws that restrict or govern that very well and then you add on top of that the fact that there's not a lot of things you can get anymore that aren't gathering data.

Jumana Musa:
For me, I use the example of, I drive a ten-year-old Subaru, and it is low-tech. My kids tell me that all the time, right? I can't connect my phone to my car. I can't do this. I can't do that. I can't do anything that their friends' parents' cool cars do. What I know is right now, it's a Subaru. So it's going to last a long time. I appreciate that. It's got 100,000 miles on it. Eventually, I'm going to have to replace it, and by then, it is highly unlikely I'm going to be able to find a car that isn't connected in that way, that doesn't gather more data in that way, and it's true of all the things we're getting, smart appliances. You can't get a home security system that's sort of the old school, that just tells you if someone has opened the door or broken a window. So all of these things, the way they're developing, while they have positive aspects, they're developing ways to gather data, and data is really what companies are seeking.

Cindy Cohn:
Well, I think so. I would say, to me, this vision that you're bringing out around especially specifically the third party doctrine is really one of the presumption of innocence and, as you said, the presumption of anonymity, that what I read on what websites, what social media I have, who I'm friends with, who I'm not friends with, who I might spend the night with, who I don't spend the night with, what books I read, who I talk to, which way I talk to them, this is all information that ought to be under my control and that law enforcement needs to have a darn good reason to get access to. By darn good reason, I mean a darn good reason presented to somebody in a black robe who's going to evaluate this.

Cindy Cohn:
So to me, the end of the third party doctrine really resets our relationship with the government first. I think you're right. We still have to talk about companies, and we will do that as well. But this is about reclaiming the right of people to be secure in their papers and their effects against unreasonable searches and seizures. What we do in our lives, who we talk to, where we go, whether we're window shopping or seriously buying or whether we're just talking to a friend or whether we're researching an illness that we've heard a loved one had, we deserve to have a zone of protection against the government rummaging around in that information, because we might've made somebody mad or because we happen to have a friend who made somebody mad. I often say to people that just because you're never going to face ... Maybe law enforcement isn't going to come looking after you doesn't mean that you don't know anybody who is at risk. I think especially for people of color in our society right now, it doesn't need to be said.

Jumana Musa:
So Cindy, actually, I'm glad you said that. I think it needs to be said out loud, and I think the thing that people need to remember is that surveillance isn't new in society. Surveillance has been happening as long as there's been society, and it's been targeted largely at people of color, at people who dissent, at people who don't sort of go with the mainstream power structure. So people of color have been under scrutiny in this country since there've been people of color in this country, and particularly black people, but we can't sort of let that piece off.

Jumana Musa:
As we're in this moment where we're looking at policing in America, where Black Lives Matter is at the forefront, as it should be, we should also recognize when we're talking about these surveillance tools and technologies they are always going to be more heavily implemented in these communities, in communities of color, in low-income communities. They're going to be targeted towards black people. They're going to be targeted towards immigrant communities. That doesn't mean that there is no spillover effect into more affluent communities, into white communities, but the breakdown is no different than it is anywhere else in our criminal justice system.

Jumana Musa:
So I think that's a particularly acute point, even when you're talking about First Amendment rights, right, and the ability to protest. So I think that that needs to be a fundamental part of this conversation. Even if it never touches you or someone you know, if you care about those things, you should still care about this.

Cindy Cohn:
I think this is exactly right. Setting the Fourth Amendment right is part of standing up for Black Lives Matter. It's part of standing up for fairness in our society, because we know that the people who need these protections, the people who end up being overwhelmingly targeted by law enforcement are people of color. So standing up for protecting people's rights to just go around in the world, free of being vulnerable to surveillance is really a piece of the broader part of our efforts to try to make society less racist.

Danny O'Brien:
What I'm hearing from both of you is that there is real progress happening on the court side, that we have this progressive recognition that the third party doctrine has to be reformed, and actual kind of concrete steps toward that at the Supreme Court level. It sounds to me that this is a race between the courts coming to terms with new technology and the advance of that technology itself.

Danny O'Brien:
One of the things that I remember from listening to the lawyers talk about this at EFF was an incident where the companies were getting so tired of getting these requests, the telcos in particular, that they wrote some tools for law enforcement to get this information more easily, right? They automated the process of getting this data. For me, that's one of those terrible kind of downhill progressions, where it's inevitable that if there's no legal speed bumps to getting this data, the take is that geeks like me are just going to grease that path, right? We're going to spiral from these arguments that are sort of like this is a specific warrant, but it's a little non-specific to a world where mass surveillance is just presumed and these companies actively are helping out the governments with it.

Cindy Cohn:
Yeah, I think it's a tremendously important point. It's one of the reasons why the third party doctrine has been on our hit list for a long time, because, again, I completely agree with Jumana that simply requiring a warrant doesn't get us everywhere we need to go. But when you get rid of the idea that a judge needs to be in the middle of it, you do end up with things like this portal where you could upload a recipe and it would open the portal to letting law enforcement have access to people's phone records.

Cindy Cohn:
We know from the Snowden documents on down that telephone records can be tremendously sensitive. They know if you're standing on the Golden Gate Bridge calling the suicide hotline, or whether you're calling the Planned Parenthood, or whether you're calling the local gun shop. Your phone records, even without knowing what you say, your telephone records, the websites you visit, the social media, all of your metadata can be tremendously revealing. Making sure that there's a lot of friction for law enforcement, such that they have to have a good reason and be able to demonstrate it, and demonstrate it to somebody other than themselves, before they get information about you is one of the ways that we keep the balance between us and our government in the right place.

Danny O'Brien:
Jumana, can I just ask, what is the next step? So, what comes after Carpenter? What are organizations like yours doing in the public litigation space to move this ahead?

Jumana Musa:
Well, I think one of the things we're doing is looking at all the parameters that were put into Carpenter and trying to operationalize them in other circumstances, right? Because it's a question of, do you have to have all of those things? Does it have to be of a deeply revealing nature, and the depth, breadth, and comprehensive reach of it all and the inescapable and automatic nature of the collection? Can it be two of those things? Can it just be one of those things? So we're trying to look at it in every aspect, in terms of whether it's a tower dump, where they say, "Something happened in this area, and we want to get the information on all the cell phones or devices that have connected to this cell tower within this period of time," or is it a geofence warrant, or is it some other way that they're gathering it to try and take it and start to apply these? Of course, one of the high ones on the hit list, they looked at historical cell site location in Carpenter, but how does it apply to real-time tracking?

Jumana Musa:
So, I mean, I think it's really important to think creatively about all the places this may apply. Of course, the end goal is what Cindy said. It is to get rid of the third party doctrine, which really has limited utility in the digital age. So I think in that context, really sort of for us in this space, that is one of the end games, but really, it's about trying to carve out what privacy means in the digital age, right, the question of, do you have privacy in public? It was a very different assessment years ago, when you said, "Of course you don't. You're out, and you're walking around. People can see you." But now if you're out and you're walking around and your phone can track you and you're showing up in surveillance cameras, and maybe they're connected to face recognition and something else, it's sort of gotten to be such a comprehensive surveillance that we really need to fight to claw back what privacy means, what privacy is protected, and how we can go about our lives in a way that is free of government intrusion.

Cindy Cohn:
Yep. Thank you so much, Jumana. Of course, EFF will be with you guys every step of the way. One of the big things that NACDL does is make sure that all of the defense attorneys across the country, who you might need someday, have access to these arguments and these skills. We love working with you, and we're all together in this effort to try to keep chipping away at this doctrine until it is just a tiny little remnant of another time when phone records were not nearly as invasive as they are now. So thank you so, so much for taking the time to talk with us. Third party doctrine, definitely need to fix it. Now we know why.

Jumana Musa:
Well, thank you for having me. I'll say it's a mutual love affair. We are frequently referring people to EFF and utilizing the information that you all put out. So thank you very much.

Danny O'Brien:
Thank you.

Danny O'Brien:
Okay, I found that really fascinating. I think one of the bits that leapt out for me is how, actually, technology, by removing friction, by making particular processes easier, including getting access to this data, actually transforms how invasive it can become, with the government being able to just kind of press a few buttons and then pull out as much metadata as it wants without a warrant.

Cindy Cohn:
Yeah, I think that's right. I mean, one of the reasons why we really want to get rid of the third party doctrine is because we need law enforcement to basically do the work and make the showing before they get access to this information, because it's far more revealing than it was when this doctrine was first created, and there's a lot more of it.

Cindy Cohn:
One of the things that Jumana mentioned that I think is important as well is that she said sometimes we may need to get more than a warrant. A warrant might not be enough. Lawyers like us are talking a lot more, and there are situations already when you have to get a super warrant, which is basically much more limited in the crimes that it can apply to, and the data has to be important to the crime. So I think we're beginning to move a lot of things towards warrants, but I think also in this age, when so much of our information is available and in the hands of third parties, we might need to think beyond warrants as well. I think that was a good point she made.

Danny O'Brien:
I think the other thing that comes out of this conversation is that ... You pointed this out, that pervasive surveillance is not a theoretical threat. It's in particular a threat that is already being felt by disenfranchised groups, right? Groups that don't get to speak up traditionally in the sort of political debate, and that includes, in the United States, communities of color and so forth.

Cindy Cohn:
Yeah. I mean, I think it's really clear that if we care about Black Lives Matter, that means we have to get the Fourth Amendment right, because people of color are disproportionately targeted by this kind of surveillance. Even if they're not targeted, they're disproportionately impacted by it.

Danny O'Brien:
That's a really good point. I think it's even more important when we realize that the presumption of privacy, I think, has been flipped because of the amount of metadata that is collected about us. If I walked down the street in the 1970s, I think it would have been pretty unusual for me to be followed around by someone or for data about me to be collected in any way. Now every moment we spend in public is surveilled and recorded in some way. That data is just sitting there, waiting to be accessed by a company, and then indirectly by the government asking that company to hand over the data.

Cindy Cohn:
Yeah, I think that this is one of the situations in which the realities of the world have really changed, and a doctrine that used to be kind of annoying and innocuous has become a really, really big problem. I think the fundamental problem at the bottom of the third party doctrine is it confuses secrecy and privacy. It really takes the position that if even one other entity knows something about you, in these instances your ISP, in order to make sure that your phone rings where you are, that somehow waives your Fourth Amendment rights and is equated with you kind of taking out a billboard and putting it on the side of the highway. But secrecy and privacy are not the same things, and there are many situations in which we need to stand up for privacy, even when something isn't completely secret. To me, I think the third party doctrine is one of those situations.

Danny O'Brien:
So are you optimistic or pessimistic about where we'll be with the third party doctrine?

Cindy Cohn:
I think this was a hopeful conversation, and it was a hopeful conversation because, as Jumana laid out, we have three solid Supreme Court decisions moving away from this kind of absolute rule that the third party doctrine had represented, or at least had been argued by the Justice Department. It's a judge-made doctrine. The third party doctrine doesn't exist in statute. So the judges can take it away, can decide that it is no longer applicable. Again, we've got three solid Supreme Court decisions where the third party doctrine was argued by the government on the other side, and the Supreme Court rejected that argument and said, "No, we need to care about privacy more than that." So that's very hopeful to me, and it's why I think that the third party doctrine is one of the things that needs to be fixed on the Internet, but it's the one where I'm quite hopeful that we're going to get it fixed.

Danny O'Brien:
Well, I always like to end on an optimistic note. So I think I'll declare that's all we've got time for. See you next time.

Danny O'Brien:
Thanks again for joining us. If you'd like to support the Electronic Frontier Foundation, here are three things you can do today. One, you can hit subscribe in your podcast player of choice, and if you have time, please leave a review. It helps more people find us. Two, please share on social media and with your friends and family. Three, please visit eff.org/podcasts, where you can find more episodes, learn about these issues, donate to become a member, and lots more. Members are the only reason we can do this work. Plus, you can get cool stuff like an EFF hat or an EFF hoodie or even a camera cover for your laptop.

Danny O'Brien:
Thanks once again for joining us, and if you have any feedback on this episode, please email podcast@eff.org. We do read every email. This podcast was produced by the Electronic Frontier Foundation with help from Stuga Studios. Music by Nat Keefe of Beat Mower. 


This work is licensed under a Creative Commons Attribution 4.0 International License

rainey Reitman

GitHub Reinstates youtube-dl After RIAA’s Abuse of the DMCA


GitHub recently reinstated the repository for youtube-dl, a popular free software tool for downloading videos from YouTube and other user-uploaded video platforms. GitHub had taken down the repository last month after the Recording Industry Association of America (RIAA) abused the Digital Millennium Copyright Act’s notice-and-takedown procedure to pressure GitHub to remove it.


The removal of youtube-dl’s source code caused an outcry. The tool is used by journalists and activists to save eyewitness videos, by YouTubers to save backup copies of their own uploaded videos, and by people with slow or unreliable network connections to download videos in high resolution and watch them without buffering interruptions, to name just a few of the uses we’ve heard about. youtube-dl is a lot like the videocassette recorders of decades past: a flexible tool for saving personal copies of video that’s already accessible to the public.

Under the DMCA, an online platform like GitHub is not responsible for the allegedly infringing activities of its users so long as that platform follows certain rules, including complying when a copyright holder asks it to take down infringing material. But unlike most DMCA takedowns, youtube-dl contained no material belonging to the RIAA or its member companies. RIAA’s argument hinges on a separate section of the DMCA, Section 1201, which says that it’s illegal to bypass a digital lock in order to access or modify a copyrighted work—or to provide tools to others that bypass digital locks. The RIAA argued that since youtube-dl could be used to infringe on copyrighted music, GitHub must remove it. By shoehorning DMCA 1201 into the notice-and-takedown process, RIAA potentially sets a very dangerous precedent, making it extremely easy for copyright holders to remove software tools from the Internet based only on the argument that those tools could be used for copyright infringement.

[Embedded video player: https://www.youtube.com/embed/ck7utXYcZng. Privacy info: this embed will serve content from youtube.com.]

youtube-dl’s code did contain the titles and URLs of certain commercial music videos as a part of a list of videos to use to test the tool’s functionality. Of course, simply mentioning a video’s URL is not an infringement, nor is streaming a few seconds of that video to test a tool’s functionality. As EFF explained in a letter to GitHub on behalf of youtube-dl’s team of maintainers:

First, youtube-dl does not infringe or encourage the infringement of any copyrighted works, and its references to copyrighted songs in its unit tests are a fair use. Nevertheless, youtube-dl’s maintainers are replacing these references. Second, youtube-dl does not violate Section 1201 of the DMCA because it does not “circumvent” any technical protection measures on YouTube videos.

Fortunately, after receiving EFF’s letter, GitHub reversed course. From GitHub’s announcement:

Although we did initially take the project down, we understand that just because code can be used to access copyrighted works doesn’t mean it can’t also be used to access works in non-infringing ways. We also understood that this project’s code has many legitimate purposes, including changing playback speeds for accessibility, preserving evidence in the fight for human rights, aiding journalists in fact-checking, and downloading Creative Commons-licensed or public domain videos. When we see it is possible to modify a project to remove allegedly infringing content, we give the owners a chance to fix problems before we take content down. If not, they can always respond to the notification disabling the repository and offer to make changes, or file a counter notice.

Again, although our clients chose to remove the references to the specific videos that GitHub requested, including them did not constitute copyright infringement.

RIAA’s letter accused youtube-dl of being a “circumvention device” that bypasses a digital lock protected by section 1201 of the DMCA. Responding on behalf of the developers, EFF explained that the “signature” code used by YouTube (what RIAA calls a “rolling cipher”) isn’t a protected digital lock—and if it were, youtube-dl doesn’t “circumvent” it but simply uses it as intended. For some videos, YouTube embeds a block of JavaScript code in its player pages. That code calculates a number called “sig” and sends this number back to YouTube’s video servers as part of signaling the actual video stream to begin. Any client software that can interpret JavaScript can run YouTube’s “signature” code and produce the right response, whether it’s a standard web browser or youtube-dl. The actual video stream isn’t encrypted with any DRM scheme like the ones used by subscription video sites.
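
To illustrate why running that code is ordinary computation rather than circumvention, here is a toy sketch, in Python, of the style of string transformation such a “signature” routine performs. The specific operations below (reverse, slice, swap) echo the kinds of primitives that have appeared in player JavaScript, but this exact sequence is invented for illustration; it is neither YouTube’s actual code nor youtube-dl’s.

    def swap(s, n):
        """Swap the first character of s with the character at position n."""
        chars = list(s)
        chars[0], chars[n] = chars[n], chars[0]
        return "".join(chars)

    def toy_signature(sig):
        """A hypothetical 'sig' routine built from primitive string operations."""
        sig = sig[::-1]     # reverse the string
        sig = sig[3:]       # drop the first three characters
        sig = swap(sig, 7)  # swap characters 0 and 7
        return sig

    # Any client that performs the same steps gets the same answer a browser
    # would. There is no secret key, and nothing is decrypted.
    print(toy_signature("abcdefghijklmnop"))  # prints "flkjihgmedcba"

Producing the right “sig” value is deterministic string manipulation that any JavaScript-capable client carries out as intended, which is why EFF argued that youtube-dl uses YouTube’s mechanism rather than circumventing it.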

It’s no secret that EFF doesn’t like Section 1201 and the ways it’s used to shut down innovation and competition. In fact, we’re challenging the law in court. But in the case of youtube-dl, Section 1201 doesn’t apply at all. GitHub agreed, and put the repository back online with its full functionality intact.

GitHub recognized that accusations that software projects violate DMCA 1201 don’t make valid takedowns under Section 512. GitHub committed to having technical experts review Section 1201-related accusations and to allowing code repository owners to dispute those accusations before taking down their code. This is a strong commitment to the rights of GitHub’s developer community, and we hope it sets an example.

EFF is proud to have helped the free software community keep this important tool online. Please consider celebrating this victory with us by making a donation to EFF.

Donate to EFF

Defend Innovation and Free Speech

Elliot Harmon

Computer Security Experts Urge White House to Keep Politics Out of Election Security

Elections Are Partisan Affairs - Election Security Isn't

San Francisco - The Electronic Frontier Foundation (EFF) has joined more than three dozen cybersecurity experts and professional security organizations in calling for the White House to keep politics out of securing this month’s election. Election security officials and computer security experts must be able to tell the truth about the security of Americans’ votes without fear of retribution.

The experts and organizations were moved to action after reports that the White House is pressuring the Cybersecurity and Infrastructure Security Agency (CISA), and its director Chris Krebs, to change CISA’s reports on election security. CISA has pushed back against baseless allegations of voter fraud and security problems—including many promoted by President Trump—through its “Rumor Control” website, and recently published a statement renouncing “unfounded claims and opportunities for misinformation about the process of our elections.”

“Elections are partisan by their very nature, but the workings of the machinery that helps us cast and count votes should be completely independent,” said EFF Deputy Executive Director Kurt Opsahl. “Election security is vital to our right to choose our government, and we can’t let the White House stop experts from telling the truth about where we stand.”

Just yesterday, another group of cybersecurity and election security experts issued an open letter, warning that claims of voter fraud in this month’s election are “unsubstantiated or are technically incoherent.” Some of today’s letter signers also joined yesterday’s effort.

“Voting is the cornerstone of our democracy. Americans must be able to trust the experts when they say there is—or isn’t—a problem,” said Opsahl. “The White House should reverse course and support election security, as well as the processes and people who safeguard our vote.”

For the full open letter:
https://www.eff.org/deeplinks/2020/11/elections-are-partisan-affairs-election-security-isnt

Contact: Kurt Opsahl, Deputy Executive Director and General Counsel, kurt@eff.org
Rebecca Jeschke

Elections Are Partisan Affairs. Election Security Isn't.


An Open Letter on Election Security

Voting is the cornerstone of our democracy. And since computers are deeply involved in all segments of voting at this point, computer security is vital to the protection of this fundamental right.  Everyone needs to be able to trust that the critical infrastructure systems we rely upon to safeguard our votes are defended, that problems are transparently identified, assessed and addressed, and that misinformation about election security is quickly and effectively refuted.  

While the work is not finished, we have made progress in making our elections more secure, and ensuring that problems are found and corrected. Paper ballots and risk-limiting audits have become more common.  Voting security experts have made great strides in moving elections to a more robust system that relies less on the hope of perfect software and systems.

This requires keeping partisan politics away from cybersecurity issues arising from elections. Obviously elections themselves are partisan. But the machinery of them should not be.  And the transparent assessment of potential problems or the assessment of allegations of security failure—even when they could affect the outcome of an election—must be free of partisan pressures.  Bottom line: election security officials and computer security experts must be able to do their jobs without fear of retribution for finding and publicly stating the truth about the security and integrity of the election. 

We are profoundly disturbed by reports that the White House is pressuring Chris Krebs, director of the Cybersecurity and Infrastructure Security Agency (CISA), to change CISA’s reports on election security. This comes just after Bryan Ware, assistant director for cybersecurity at CISA, resigned at the White House’s request. Director Krebs has said he expects to be fired but has refused to join the effort to cast doubt on the systems in place to support election technology and the election officials who run it. (Update Nov 18: Chris Krebs was fired on November 17.) Instead, CISA published a joint statement renouncing “unfounded claims and opportunities for misinformation about the process of our elections.”  The White House pressure threatens to introduce partisanship, and unfounded allegations, into the expert, nonpartisan, evaluation of election security. 

We urge the White House to reverse course and support election security and the processes and people necessary to safeguard our vote.  

Signed,

(Organizations and companies)

Electronic Frontier Foundation

Bugcrowd

Center for Democracy & Technology

Disclose.io

ICS Village

SCYTHE, Inc.

Verified Voting

(Affiliations are for identification purposes only; listed alphabetically by surname.)

William T. Adler, Senior Technologist, Elections & Democracy, Center for Democracy & Technology
Matt Blaze, McDevitt Chair of Computer Science and Law, Georgetown University
Jeff Bleich, U.S. Ambassador to Australia (ret.)
Jake Braun, Executive Director, University of Chicago Harris Cyber Policy Initiative
Graham Brookie, Director and Managing Editor, Digital Forensic Research Lab, The Atlantic Council
Emerson T. Brooking, Resident Fellow, Digital Forensic Research Lab of the Atlantic Council
Duncan Buell, NCR Professor of Computer Science and Engineering, University of South Carolina
Jack Cable, Independent Security Researcher
Joel Cardella, Director, Product & Software Security, Thermo Fisher Scientific
Stephen Checkoway, Assistant Professor of Computer Science, Oberlin College
Larry Diamond, Senior Fellow, Hoover Institution and Principal Investigator, Global Digital Policy Incubator, Stanford University
Renée DiResta, Research Manager, Stanford Internet Observatory
Kimber Dowsett, Director of Security Engineering, Truss
Joan Donovan, Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy
Casey Ellis, Chairman/Founder/CTO, Bugcrowd
David J. Farber. Distinguished Career Professor of Computer Science and Public Policy, Carnegie Mellon University
Michael Fischer, Professor of Computer Science, Yale University
Camille François, Chief Innovation Officer, Graphika
The Grugq, Independent Security Researcher
Joseph Lorenzo Hall, Senior Vice President for a Strong Internet at The Internet Society (ISOC)
Candice Hoke, Founding Co-Director, Center for Cybersecurity & Privacy Protection, Cleveland State University
David Jefferson, Computer Scientist, Lawrence Livermore National Laboratory (retired)
Douglas W. Jones, Associate Professor of Computer Science, University of Iowa
Lou Katz, Commissioner, Oakland Privacy Advisory Commission
Joseph Kiniry, Principal Scientist, Galois, CEO and Chief Scientist, Free & Fair
Katie Moussouris, CEO, Luta Security
Peter G. Neumann, Chief Scientist, SRI International Computer Science Lab
Brandie M. Nonnecke, Director, CITRIS Policy Lab, CITRIS and the Banatao Institute, UC Berkeley
Sean O’Connor, Threat Intelligence Researcher
Marc Rogers, Director of Cybersecurity, Okta
Aviel D. Rubin, Professor of Computer Science, Johns Hopkins University
John E. Savage, An Wang Emeritus Professor of Computer Science, Brown University
Bruce Schneier, Cyber Project Fellow and Lecturer, Harvard Kennedy School
Alex Stamos, Director, Stanford Internet Observatory
Barbara Simons, IBM Research (retired)
Philip B. Stark, Associate Dean, Mathematical and Physical Sciences, University of California, Berkeley
Camille Stewart, Cyber Fellow, Harvard Belfer Center
Megan Stifel, Executive Director, Americas; and Director, Craig Newmark Philanthropies Trustworthy Internet and Democracy Program, Global Cyber Alliance
Sara-Jayne Terp, CEO, Bodacea Light Research
Cris Thomas (Space Rogue), Global Strategy Lead, IBM X-Force Red
Maurice Turner, Election Security Expert
Poorvi L. Vora, Professor of Computer Science, The George Washington University
Dan S. Wallach, Professor, Departments of Computer Science and Electrical & Computer Engineering, Rice Scholar, Baker Institute for Public Policy, Rice University
Nate Warfield, Security Researcher
Elizabeth Wharton, Chief of Staff, SCYTHE, Inc.
Tarah Wheeler, Belfer Center Cyber Fellow, Harvard University Kennedy School, and member EFF Advisory Board
Beau Woods, Founder/CEO of Stratigos Security and Cyber Safety Innovation Fellow at the Atlantic Council
Daniel M. Zimmerman, Principal Researcher, Galois and Principled Computer Scientist, Free & Fair

(Updated November 18 to add news of Chris Krebs being fired, and to add six signers) 

Kurt Opsahl

EFF Publishes New Research on Real-Time Crime Centers in the U.S.

3 months 3 weeks ago

EFF has published a new report, "Surveillance Compounded: Real-Time Crime Centers in the United States," which profiles seven surveillance hubs operated by local law enforcement, plus data on dozens of others scattered across the country. 

Researched and written in collaboration with students at the Reynolds School of Journalism at the University of Nevada, Reno, the report focuses on the growth of real-time crime centers (RTCCs). These police facilities serve as central nodes and control rooms for a variety of surveillance technologies, including automated license plate readers, gunshot detection, and predictive policing. Perhaps the most defining characteristic of an RTCC is a network of video cameras installed in the community that analysts watch on a wall of monitors, often in combination with sophisticated, automated analytical software.

As we write in the report: 

RTCCs are similar enough to fusion centers that the terms are sometimes used interchangeably. We distinguish between the two: fusion centers are technology command centers that function on a larger regional level, are typically controlled by a state-level organization, and are formally part of the U.S. Department of Homeland Security's fusion center network. They also focus on distributing information about national security "threats," which are often broadly interpreted. RTCCs generally operate at the municipal or county level and address a broad spectrum of public safety issues, from car thefts to gun crime to situational awareness at public events.

The term “real-time” is also somewhat misleading: while there is often a focus on accessing data in real time to communicate with first responders, many law enforcement agencies use RTCCs to mine historical data and make decisions about the future through "predictive policing," a controversial and largely unproven strategy for identifying places where crime could occur or people who might commit crimes.
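To see why critics call the approach circular, consider a deliberately minimal sketch of grid-based "hotspot" prediction. Everything here is hypothetical and illustrative; actual RTCC software is proprietary and far more elaborate, but the core pattern is the same: the system re-ranks the places where police recorded incidents before.

```python
# Illustrative only: toy "hotspot" prediction over hypothetical incident data.
from collections import Counter

# Hypothetical historical incidents as (latitude, longitude) pairs.
incidents = [
    (35.08, -106.65), (35.08, -106.65), (35.09, -106.66),
    (35.10, -106.60), (35.08, -106.65),
]

def grid_cell(lat, lon, cell_size=0.01):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / cell_size), round(lon / cell_size))

# Count past incidents per cell and flag the busiest cells as "predicted"
# hotspots -- the forecast simply mirrors where incidents were recorded
# before, so heavily policed areas get flagged for still more policing.
counts = Counter(grid_cell(lat, lon) for lat, lon in incidents)
hotspots = [cell for cell, _ in counts.most_common(2)]
print(hotspots)
```

Because the input is past police activity rather than ground truth about crime, the "prediction" inherits whatever enforcement patterns already existed.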

We identified more than 80 RTCCs in 29 states across the U.S., with the largest number concentrated in New York and Florida. The report includes case studies of RTCCs in: Albuquerque, NM; Atlanta, GA; Detroit, MI; Miami Gardens, FL; New Orleans, LA; Ogden, UT; and Sacramento, CA. We have also included a profile of the Fresno Real-Time Crime Center, which was suspended prior to publication of our report. These profiles break down the costs, what technology is installed in neighborhoods, and what type of equipment and software is accessible by RTCC staff. We also document controversies that have arisen in response to the creation of these RTCCs.

"Surveillance Compounded" is part of the Atlas of Surveillance project, an ongoing collaboration with the Reynolds School of Journalism that aims to build a central database and map of police technologies using open source intelligence. This is the second such report, following 2019 "Atlas of Surveillance: Southwestern Border Communities," which documented surveillance technology in the 23 U.S. counties that border Mexico. 

As of November 15, 2020, the Atlas contains more than 6,100 data points related to automated license plate readers, drones, body-worn cameras, cell-site simulators, and other law enforcement technologies. 
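For readers who want to dig into the data themselves, here is a hedged sketch of tallying Atlas entries by technology type. The filename and column name are hypothetical stand-ins, not the Atlas's actual schema; check the project site for the real download format.

```python
# Hypothetical example: count Atlas entries per technology category.
# "atlas_of_surveillance.csv" and the "technology" column are assumptions.
import csv
from collections import Counter

with open("atlas_of_surveillance.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

by_tech = Counter(row.get("technology", "unknown") for row in rows)
for tech, count in by_tech.most_common():
    print(f"{tech}: {count}")
```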

Visit the Atlas of Surveillance.

Dave Maass

EFF Urges Universities to Commit to Transparency and Privacy Protections For COVID-19 Tracing Apps

3 months 3 weeks ago
Campus Communities Shouldn’t Be Forced to Use Apps They Can’t Trust

San Francisco—The Electronic Frontier Foundation (EFF) called on universities that have launched or plan to launch COVID-19 tracking technologies—which sometimes collect sensitive data from users’ devices and lack adequate transparency or privacy protections—to make them entirely voluntary for students and disclose details about data collection practices.

Monitoring public health during the pandemic is important to keep communities safe and reduce the risk of transmission. But requiring students, faculty, and staff returning to campus to commit to using unspecified tracking apps that record their every movement, and failing to inform them about what personal data is being collected, how it’s being used, and with whom it’s being shared, is the wrong way to go about it.

EFF is urging university officials to commit to its University App Mandate Pledge, a set of seven transparency- and privacy-enhancing policies that will help ensure a higher standard of protection for the health and personal information of students, faculty, and staff.

In committing to EFF’s pledge, university officials are agreeing to make COVID-19 apps opt-in, disclose app vendor contracts, disclose data collection and security practices, reveal the entities inside and outside the school that have access to the data, tell users if the university or app vendors are giving law enforcement access to data, and stay on top of any vulnerabilities found in the technologies.

“The success of public health efforts depends on community participation, and if students are being forced to download COVID-19 apps they don’t trust to their phones, and are being kept in the dark about who’s collecting their personal information and whether it’s being shared with law enforcement, they’re not going to want to participate,” said EFF Grassroots Advocacy Organizer Rory Mir. “University leaders should support the app mandate pledge and show that they are committed to respecting the privacy, security, and consent of everyone that is returning to campus.”

Universities have rushed to adopt apps and devices to monitor public health, with some mandating that students download apps that track their locations in real time or face suspension. Location data using GPS, for example, can reveal highly personal information about people, such as when they attend a protest or go to a bar, where their friends live, and what groups they associate with. It should be up to users to decide whether to download and use a COVID-19-related app, and up to universities and public health authorities to communicate the technology’s benefits, protections, and risks.
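To make the sensitivity of raw location trails concrete, here is a minimal sketch over hypothetical data: even naive analysis of timestamped GPS points can guess where someone lives simply by finding where they most often are overnight.

```python
# Illustrative only: infer a likely home area from hypothetical GPS samples.
from collections import Counter

# (hour_of_day, latitude, longitude) samples from one person's device.
samples = [
    (2, 37.77, -122.42), (3, 37.77, -122.42), (23, 37.77, -122.42),
    (14, 37.79, -122.40), (15, 37.79, -122.40),
]

# Keep only overnight points, snapped to a coarse grid, and take the mode.
night = [(round(lat, 2), round(lon, 2))
         for hour, lat, lon in samples if hour >= 22 or hour <= 5]
likely_home, _ = Counter(night).most_common(1)[0]
print("Inferred home area:", likely_home)
```

The same trivial analysis applied to daytime points yields a likely workplace, which is why continuous tracking is far more revealing than any single data point.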

For the pledge:
https://www.eff.org/app-mandate/pledge

For more about COVID-19 and digital rights:
https://www.eff.org/issues/covid-19

Contact: Rory Mir, Grassroots Advocacy Organizer, rory@eff.org
Karen Gullo

InternetLab’s Report Sets Direction for Telecom Privacy in Brazil

3 months 3 weeks ago

Five years have passed since InternetLab published “Quem Defende Seus Dados?” (“Who Defends Your Data?”), a report that holds ISPs accountable for their privacy and data protection policies in Brazil. Since then, major Brazilian telecom companies have provided more transparency about their data protection and privacy policies, a shift primarily fueled by Brazil’s new data protection law.

InternetLab’s fifth annual report, launched today, identifies steps companies should take to protect telecom users’ privacy and personal data in Brazil. This edition, covering eight telecom providers for mobile and broadband services, shows Brazilian provider TIM leading the way, with Vivo and Oi close behind. TIM scored high marks for defending privacy in public policy debates and the judiciary, publishing transparency reports, and maintaining transparent data protection policies. In contrast, Nextel again finished in last place, as it did in 2019, far behind the rest of its competitors. Nextel did take a step forward in defending privacy in the judiciary, in contrast to 2019, when it received no stars in any category.

In stark contrast to InternetLab’s first report in 2016, half of the covered providers (Claro, NET, TIM, and Algar) have made significant progress in the data protection category. After being poorly rated in 2019, Algar obtained a full star this year in this category, a positive change as Brazil starts embracing its new GDPR-inspired data protection law. 

This year’s report also assessed which companies stood out in publicly defending privacy against unprecedented government pressure to access telecom data during the COVID-19 pandemic. For context, Brazil’s Supreme Court suspended the government’s provisional measure 954/2020, which ordered telecom providers to disclose their customers’ data to the Brazilian Institute of Geography and Statistics (IBGE) during the health emergency. The court ruled that the measure was overbroad and failed to clarify the purpose of the request. Oi called upon IBGE to sign a term of responsibility before disclosing the data.

Unfortunately, telecom providers also signed non-transparent data-sharing agreements with states and municipalities to help public authorities fight the COVID-19 pandemic. Here, Vivo and TIM publicly committed in the media that only anonymous and aggregated data, via heat maps and pivot tables, would be shared with the government. In São Paulo, for example, the deal gives public authorities access to a data visualization tool that includes anonymous, aggregated location data to measure the effectiveness of social distancing orders. After a São Paulo court ruled that the agreement should be public, many telecom providers published the relevant policies on their sites, including TIM, Vivo, Claro, NET, and Oi. The companies' policies, however, did not specify the security practices and techniques adopted to ensure the shared data's anonymity. In the future, companies should publish their policies proactively and immediately, not after public pressure.
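For illustration, the sketch below shows one form the "anonymous and aggregated" sharing the companies described could take: device counts per map cell, with small counts suppressed so that no cell singles out individuals. The threshold and data are hypothetical; as the report stresses, the providers did not disclose their actual techniques.

```python
# Hypothetical sketch of threshold-based aggregation for a mobility heat map.
from collections import Counter

device_cells = ["A1", "A1", "A1", "A2", "B3", "B3"]  # one map cell per device
K = 3  # suppress any cell with fewer than K devices

counts = Counter(device_cells)
heatmap = {cell: n for cell, n in counts.items() if n >= K}
print(heatmap)  # {'A1': 3}; cells A2 and B3 are suppressed
```

Even this kind of suppression does not by itself guarantee anonymity, which is exactly why the report asks companies to document the techniques they actually use.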

Most providers continue to seriously lag in notifying users when the government requests their data. As we’ve explained, no Brazilian law compels either the State or companies to notify targets of surveillance. Judges may require notice, however, and companies are not prevented from notifying users when secrecy is not legally or judicially required. Prior user notice is essential to deterring improper government demands for data held by service providers: it is usually impossible for users to know that the government demanded their data unless the demand leads to criminal charges. As a result, the innocent are least likely to discover the violation of their privacy rights.

The report also evaluates, for the first time, whether companies publish their own Data Protection Impact Assessments; unfortunately, none did. And in the face of controversy over the interpretation of laws compelling companies to disclose data to the government, this year's report also looks, for the first time, at companies’ transparency about their legal understanding of such laws.

Overall, this year's report evaluates providers on six criteria: data protection policies, law enforcement guidelines, defending users in the judiciary, defending privacy in policy debates or the media, transparency reports and data protection impact assessments, and user notification. The full report is available in Portuguese and English. These are the main results:

Data protection policies

Some providers are now telling users what data they collect about them, how long the information is kept, and with whom it is shared (although frequently in an overly generic way). In some cases, providers notify users about changes to their privacy policies. Nathalie Fragoso, InternetLab’s Head of Research on Privacy and Surveillance, told EFF:

In contrast to 2016, there has been a significant advance in the content and form of privacy and data protection policies. They are now complete and accessible. However, information on data deletion is often missing, and changes in their privacy policies are rarely proactively reported. While Claro and TIM send messages to their users about their privacy policy changes, Oi only tells users that any change will be available on their website. Far behind is Vivo, which reserves the right to change its policy at any time and does not commit to notifying users of such updates. 

The report also sheds light on how providers respond to users’ requests to access their data, and it evaluates the effectiveness of such responses. Nathalie Fragoso told EFF:

We sent requests for our personal data to all the providers surveyed in this report, and gave them one month to respond. Our requests included any information relating to us. All providers, however, complied by disclosing only our subscriber information, except Claro and Oi, which failed to do even that. We also learned that Algar and TIM took additional steps to certify the requester's identity before disclosing the data, a good practice that deserves to be highlighted.

Defending users’ privacy in the media or public policy debates

This year, Quem Defende Seus Dados? assesses whether providers defended users’ privacy and data protection in public policy debates or the media. The first parameter evaluates the companies’ public contributions to congressional discussions and public policy consultations around data protection.

Even though Vivo wrote a public submission to the "National Strategy for Artificial Intelligence” consultation, it made no concrete normative or technical proposals to protect its customers. On the other hand, InternetLab found that TIM's policy statements took a clear and robust pro-privacy stand in the same consultation. TIM calls for transparency about, and explanation of, AI systems. It also recommends giving those affected by an AI system sufficient information to understand the reasons behind its results, and allowing those adversely affected to contest such results.

Law enforcement guidelines

Most providers seriously lag in publishing detailed guidelines for responding to government data demands. Vivo Broadband and Mobile lead the way in this category; however, none obtained a full star. This category includes five parameters, which you can read about in more detail in the report. Below we summarize two that deserve attention:

Identifying which competent authorities can demand subscriber data without a court order

Brazil's Civil Rights Framework generally requires a court order for access to communications data, including location data and connection logs, with an exception allowing "competent administrative authorities" to demand subscriber data when authorized by law. There is controversy about which government officials fall within the term “competent administrative authorities.” Thus, the report focuses closely on whether each company publicly explains its interpretation of this legal term and, if so, how. The report also examines whether companies publicly explain which kinds of data they will disclose without a warrant and which they will disclose only with one.

Vivo Broadband and Mobile are far ahead of the other companies. According to its policies, Vivo discloses subscriber data only upon request from representatives of the Public Prosecutor's Office, police authorities (police commissioners), and judges. Its policies say it makes connection logs and location data available only by court order.

Claro and TIM have mixed results. Claro tells users that it discloses subscriber data to competent authorities, but fails to identify them. Likewise, TIM does not pinpoint the competent authorities that it believes can request subscriber data without a court order. However, TIM promises to comply with legislation in making “data and communications” available to “competent authorities.”

InternetLab recommends that TIM expressly identify these authorities. Oi tells users that it shares data with competent authorities and names them. However, the report shows that the company fails to clarify which of the cited authorities can demand data without a court order and which need one. Algar and Nextel scored zero stars for their law enforcement guidelines. There is still much more that all companies can do in this category.

Identifying which crimes justify disclosure of subscriber data without a warrant

As we explained in our legal FAQs for Brazil, Brazilian law authorizes prosecutors and police officers (usually the Chief of the Civil Police) to access subscriber data without a warrant when investigating money laundering and criminal organizations. The Criminal Procedure Code allows equal access in cases of human trafficking, kidnapping, organ trafficking, and sexual exploitation. Unfortunately, police authorities have claimed the power to access subscriber data without a warrant in investigations of other crimes as well. As we’ve explained, they improperly rely on a general provision that regulates criminal investigations by the Civil Police Chief.

We are happy that InternetLab challenges this erroneous legal interpretation of police power by assessing companies’ responses to such requests. Here again, in the face of controversy over the law's interpretation, InternetLab calls for corporate transparency about how companies interpret it.

InternetLab's results show that NET, Oi Mobile, TIM Broadband, TIM Mobile, Nextel, Algar, and Sky failed to identify the crimes for which competent authorities may obtain subscriber records without a warrant.

Conclusion

Given this year's results, InternetLab encourages companies to improve their channels for data access requests so that users can obtain full access to their own data. It recommends that companies adopt proactive user notification practices when changing their privacy policies. It also encourages them to publish law enforcement guidelines that spell out the circumstances under which they will disclose subscriber data, location logs, and connection records, and for which crimes. Companies should be transparent about their legal interpretation of laws compelling them to disclose data to the government, and should distinguish clearly between judicial orders and administrative requests when handling data demands. In exceptional circumstances, such as the COVID-19 pandemic, InternetLab calls upon companies to take an actively transparent approach to any collaboration and data-sharing agreements with the State, and to ensure that such exceptional measures are carried out in the public interest, limited in time, and proportionate.

Finally, InternetLab encourages companies to publish comprehensive transparency reports and to notify users when disclosing their data in response to law enforcement demands. Through the ¿Quién Defiende Tus Datos? reports, a project coordinated by EFF, local organizations have been comparing companies' commitments to transparency and user privacy across Latin America and Spain. Today’s InternetLab report on Brazil joins similar reports published earlier this year by Fundación Karisma in Colombia, ADC in Argentina, Hiperderecho in Peru, ETICAS in Spain, IPANDETEC in Panama, and TEDIC in Paraguay. A new edition in Nicaragua is on its way. All of these critical reports spot which companies stand with their users and which fall short.

Katitza Rodriguez

End University Mandates for COVID Tech

3 months 3 weeks ago

Since the COVID-19 crisis began, many universities have looked to novel technologies to assist their efforts to retain in-person operations. Most prominent are untested contact tracing and notification applications or devices. While universities must commit to public health, too often these programs invade privacy and lack transparency. To make matters worse, some universities mandate these technologies for students, faculty, staff, and even visitors.  As we’ve stated before, forcing people to install COVID-related technology on their personal devices is the wrong call.            

This is why EFF is launching our new campaign: End University App Mandates. Please help us call on university officials to publicly commit to the University App Mandate Pledge (UAMP). It contains seven transparency- and privacy-enhancing policies that university officials should adopt to protect the privacy and security of their community members. Whether you are a student, a worker, a community member, or an alum, we need your support in defending privacy on campus.

TAKE ACTION

CALL ON YOUR UNIVERSITY TO TAKE THE PLEDGE

Surveillance Is No Cure-All 

Technology is not a silver bullet for solving a public health crisis. If COVID-related apps or devices will help at all, they must be part of a larger public health strategy, including participation and trust from the affected community. In other words, even the best contact tracing and notification software cannot be a substitute for regular testing, PPE, access to care, and interview-based contact tracing. And no public health strategy will work if coercive and secretive measures undermine trust between the educational community and university administrators.
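For context on what a privacy-respecting design can look like, here is a greatly simplified sketch of decentralized exposure notification, inspired by but not identical to deployed protocols such as Apple and Google's: phones broadcast rotating pseudonymous IDs, and all matching happens on the device, so no location data needs to leave the phone.

```python
# Simplified, hypothetical sketch of decentralized exposure notification.
import hmac, hashlib, os

daily_key = os.urandom(16)  # stays on the phone unless the user tests positive

def rolling_id(key, interval):
    """Derive the pseudonymous ID broadcast during one time interval."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:8]

# IDs this phone broadcasts over Bluetooth during the day (e.g., 15-min slots).
broadcast = {rolling_id(daily_key, i) for i in range(96)}

# IDs another phone happened to hear nearby that day.
heard = {rolling_id(daily_key, 12), os.urandom(8)}

# If the user tests positive and publishes daily_key, other phones re-derive
# the day's IDs locally and check for overlap -- no central location database.
print("Possible exposure:", bool(broadcast & heard))
```

Whether a university's chosen app follows a decentralized design like this, or instead reports identities and locations to a server, is precisely the kind of detail the pledge asks officials to disclose.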

Beyond invading our privacy, public health measures that rely on digital surveillance can also chill our free speech. These programs, and the ways they are implemented and enforced, can also have a disproportionate impact on vulnerable groups. This is why university leadership can encourage participation in these measures, but ultimately these programs must remain voluntary.

Users can’t offer their informed consent to the app or device if it is a privacy black box. For example, leadership must make it clear whether any collected information can be accessed by law enforcement, and must disclose the privacy policies of external vendors. 

Universities must also outline exactly what precautions and protocols they are implementing to protect their community from data breaches. Novel technologies created in rapid response to a crisis have a greater potential for security vulnerabilities, as they have not fully received the sort of rigorous testing that would happen in a normal development process. This makes it even more essential to open these programs to public scrutiny and allow individuals to assess the risks.

How You Can Help

There are 4,000 colleges and universities in the United States, all impacted by the current pandemic, and a vast variety of tools and policies being implemented across them.

So we are targeting every college and university with our campaign. Every time a college or university receives 100 new petitioners, we will deliver the petition letter to the institution’s leadership. We will also work with local advocates to implement these necessary and urgent changes.

To make this campaign possible, we’re turning to our nationwide network of grassroots and community activists in the Electronic Frontier Alliance and beyond. If you are part of a student group or community group potentially impacted by these app mandate policies, please sign the petition and consider applying to join the Alliance. We want to work with you to push leadership to adopt this pledge through direct action, and to assist your local efforts in defending privacy on college campuses.

TAKE ACTION

CALL ON YOUR UNIVERSITY TO TAKE THE PLEDGE

Rory Mir

Don’t Blame Section 230 for Big Tech’s Failures. Blame Big Tech.

3 months 3 weeks ago

Next time you hear someone blame Section 230 for a problem with social media platforms, ask yourself two questions: first, was this problem actually caused by Section 230? Second, would weakening Section 230 solve the problem? Politicians and commentators on both sides of the aisle frequently blame Section 230 for big tech companies’ failures, but their reform proposals wouldn’t actually address the problems they attribute to Big Tech. If lawmakers are concerned about large social media platforms’ outsized influence on the world of online speech, they ought to confront the lack of meaningful competition among those platforms and the ways in which those platforms fail to let users control, or even see, how their data is being used. Undermining Section 230 won’t fix Twitter and Facebook; in fact, it risks making matters worse by further insulating big players from competition and disruption.

Section 230 says that if you break the law online, you should be the one held responsible, not the website, app, or forum that hosted your speech. Similarly, if you forward an email or even retweet a tweet, you’re protected by Section 230 in the event that the material is found unlawful. It has some exceptions—most notably, that it doesn’t shield platforms from liability under federal criminal law—but at its heart, Section 230 is just common sense: you should be held responsible for your speech online, not the platform that hosted it or another party.

Without Section 230, the Internet would be a very different place, one with fewer spaces where we’re all free to speak out and share our opinions. Social media wouldn’t exist—at least in its current form—and neither would important educational and cultural platforms like Wikipedia and the Internet Archive. The legal risk associated with operating such a service would deter any entrepreneur from starting one, let alone a nonprofit.

As commentators of all political stripes have targeted large Internet companies with their ire, it’s become fashionable to blame Section 230 for those companies’ failings. But Section 230 isn’t why five companies dominate the market for speech online, or why the marketing and behavior analysis decisions that guide Big Tech’s practices are so often opaque to users.

The Problem with Social Media Isn’t Politics; It’s Power

A recent Congressional hearing with the heads of Facebook, Twitter, and Google demonstrated the highly politicized nature of today’s criticisms of Big Tech. Republicans scolded the companies for “censoring” and fact-checking conservative speakers while Democrats demanded that they do more to curb misleading and harmful statements.

There’s a nugget of truth in both parties’ criticisms: it’s a problem that just a few tech companies wield immense control over what speakers and messages are allowed online. It’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunity to appeal bad moderation decisions. There’s little hope of a competitor with fairer speech moderation practices taking hold given the big players’ practice of acquiring would-be competitors before they can ever threaten the status quo.

Unfortunately, trying to legislate that platforms moderate “neutrally” would create immense legal risk for any new social media platform—raising, rather than lowering, the barrier to entry for new platforms. Can a platform filter out spam while still maintaining its “neutrality”? What if that spam has a political message? Twitter and Facebook would have the large legal budgets and financial cushions to litigate those questions, but smaller platforms wouldn’t.

Likewise, if Twitter and Facebook faced serious competition, then the decisions they make about how to handle (or not handle) hateful speech or disinformation wouldn’t have nearly the influence they have today on online discourse. If there were twenty major social media platforms, then the decisions that any one of them makes to host, remove, or factcheck the latest misleading post about the election results wouldn’t have the same effect on the public discourse. The Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more permissive.

The hearing showed Congress’ shortsightedness when it comes to regulation of large Internet companies. In their drive to use the hearing for their political ends, both parties ignored both the factors that led to Twitter, Facebook, and Google’s outsized power and the remedies that could bring competition and choice into the social media space.

Ironically, though calls to reform Section 230 are frequently motivated by disappointment in Big Tech’s speech moderation policies, evidence shows that further reforms to Section 230 would make it more difficult for new entrants to compete with Facebook or Twitter. It shouldn’t escape our attention that Facebook was one of the first tech companies to endorse SESTA/FOSTA, the 2018 law that significantly undermined Section 230’s protections for free speech online, or that Facebook is now leading the charge for further reforms to Section 230 (PDF). Any law that makes it more difficult for a platform to maintain Section 230’s liability shield will also make it more difficult for new startups to compete with Big Tech. (Just weeks after SESTA/FOSTA passed and put multiple dating sites out of business, Facebook announced that it was entering the online dating world.) We shouldn’t be surprised that Facebook has joined Section 230’s critics: it literally has the most to gain from decimating the law.

Remember, speech moderation at scale is hard. It’s one thing for platforms to come to a decision about how to handle divisive posts by a few public figures; it’s quite another for them to create rules affecting everyone’s speech and enforce them consistently and transparently. When platforms err on the side of censorship, marginalized communities are silenced disproportionately. Congress should not try to pass laws dictating how Internet companies should moderate their platforms. Such laws would not pass Constitutional scrutiny, would harden the market for social media platforms from new entrants, and would almost certainly censor innocent people unfairly.

Then How Should Congress Keep Platforms in Check? Some Ideas You Won’t Hear from Big Tech

While large tech companies might clamor for regulations that would hamstring their competitors, they’re notably silent on reforms that would curb the practices that allow them to dominate the Internet today. That’s why EFF recommends that Congress update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion. Before the government approves a merger, the companies should have to prove that the merger would not increase their monopoly power or unduly harm competition.

But even updating antitrust policy is not enough: big tech companies will stop at nothing to protect their black box of behavioral targeting from even a shred of transparency. Facebook recently demonstrated this when it threatened the Ad Observatory, an NYU project to shed light on how the platform was showing different political advertising messages to different segments of its user base. Major social media platforms’ business models thrive on practices that keep users in the dark about what information they collect on us and how it’s used. Decisions about what material (including advertising) to deliver to users are informed by a web of inferences about users, inferences that are usually impossible for users even to see, let alone correct.

Because of the link between social media’s speech moderation policies and its irresponsible management of user data, Congress can’t improve Big Tech’s practices without addressing its surveillance-based business models. And although large tech companies have endorsed changes to Section 230 and may endorse further changes to Section 230 in the future, they will probably never endorse real, comprehensive privacy-protective legislation.

Any federal privacy bill must have a private right of action: if a company breaks the law and infringes on our privacy rights, it’s not enough to put a government agency in charge of enforcing the law. Users should have the right to sue the companies, and it should be impossible to sign away those rights in a terms-of-service agreement. The law must also forbid companies from selling privacy as a service: all users must enjoy the same privacy rights regardless of what we’re paying—or being paid—for the service.

The recent fights over the California Consumer Privacy Act serve as a useful example of how tech companies can give lip service to the idea of privacy-protecting legislation while actually insulating themselves from it. After the law passed in 2018, the Internet Association—a trade group representing Big Tech powerhouses like Facebook, Twitter, and Google—spent nearly $176,000 lobbying the California legislature to weaken the law. Most damningly, the IA tried to pass a bill exempting surveillance-based advertising from the law’s consumer protections. That’s right: big tech companies tried to pass a law protecting the very invasive advertising practices that helped cement their dominance in the first place. That the Internet Association and its members have fought tooth-and-nail to stop privacy protective legislation while lobbying for bills undermining Section 230 says all you need to know about which type of regulation they see as the greater threat to their bottom line.

Section 230 has become a hot topic for politicians and commentators on both sides of the aisle. Whether it’s Republicans criticizing Big Tech for allegedly censoring conservatives or Democrats alleging that online platforms don’t do enough to fight harmful speech online, both sides seem increasingly convinced that they can change Big Tech’s social media practices by undermining Section 230. But history has shown that making it more difficult for platforms to maintain Section 230 protections will further isolate a few large tech companies from meaningful competition. If Congress wants to keep Big Tech in check, it must address the real problems head-on, passing legislation that will bring competition to Internet platforms and curb the unchecked, opaque user data practices at the heart of social media’s business models.

You’ll never hear Big Tech advocate that.

Elliot Harmon