Manifest V3: Open Web Politics in Sheep's Clothing


When Google introduced Manifest V3 in 2019, web extension developers were alarmed at how much of the functionality they rely on to serve users would be taken away, especially for features like blocking trackers and providing secure connections. This new iteration of Google Chrome’s web extensions interface still has flaws that might be addressed through thoughtful consensus of the web extension developer community. However, two years and counting of discussion and conflict around Manifest V3 have ultimately exposed the problematic power Google holds over how millions of people experience the web. With the more recent announcement of the official transition to Manifest V3 and the deprecation of Manifest V2 in 2023, many privacy-focused web extensions will be limited in how they are able to protect users.

The security and privacy claims that Google has made about web extensions may or may not be addressed with Manifest V3. But the fact remains that the extensions users have relied on for privacy will be heavily stunted if the current proposal moves forward. A change presented as user-focused actually takes away the user’s power to block unwanted tracking for their own security and privacy needs.

Large Influence, Little Challenge

First, a short history lesson. In 2015, Mozilla announced it would adopt the WebExtensions API already used by Chrome, in an effort to unify the landscape for web extension developers. Fast forward to the Manifest V3 announcement in 2019: Google put Mozilla in the position of choosing whether Firefox would split from or sync with Chrome. Splitting would mean taking a strong stand against Manifest V3, offering an alternative, and supporting web extension developers’ innovation in user privacy controls. Syncing would mean going along with Google’s plan for the sake of not fragmenting web extension development any further.

Mozilla has decided to support both Manifest V2’s blocking webRequest API and MV3’s declarativeNetRequest API for now. But this move is very much shaped by Google’s push to make MV3 the standard, and supporting both APIs is only half the battle. MV3 dictates an ecosystem change that limits MV2 extensions and will likely force MV2-based extensions to conform to MV3 in the near future. Mozilla’s acknowledgement that MV3 doesn’t meet web extension developers’ needs shows that MV3 is not yet ready for prime time. Yet there is pressure on stable, trusted extensions to allocate resources to port themselves to more limited versions built on a less stable API.

Manifest V3 Technical Issues

Even though strides have been made in browser security and privacy, web extensions like Privacy Badger, NoScript, and uBlock Origin have filled the gap by providing the granular control users want. One of the most significant changes outlined in Manifest V3 is the removal of the blocking webRequest API and the flexibility it gave developers to programmatically handle network requests on behalf of the user. Slated to replace the blocking webRequest API, the declarativeNetRequest API imposes low caps on how many rules, and therefore how many sites, these extensions can cover. Another mandate is moving from Background Pages, a context that lets web extension developers properly assess and debug their code, to an alternative, less powerful context called Background Service Workers. This context wasn’t originally built with web extension development in mind, which has led to its own conversation in many forums.
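
To make the contrast concrete, here is a minimal sketch in extension JavaScript (illustrative only, not taken from any particular extension; "tracker.example" is a hypothetical domain). The MV2 blocking webRequest listener can run arbitrary logic on every request, while the MV3 declarativeNetRequest call can only hand the browser a capped list of pre-written rules.

```js
// MV2: the blocking webRequest API lets the extension decide, in code,
// what to do with every request before it leaves the browser.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Any heuristic the developer can express in code is possible here.
    return { cancel: details.url.includes("tracker.example") };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);

// MV3: the declarativeNetRequest API only accepts a fixed rule list up
// front; the browser evaluates the rules itself, the extension never sees
// the request, and the number of rules is capped.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [{
    id: 1,
    priority: 1,
    action: { type: "block" },
    condition: {
      urlFilter: "||tracker.example^",
      resourceTypes: ["script", "xmlhttprequest"],
    },
  }],
});
```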

In short, Service Workers were designed around a sleep/wake cycle of delivering web assets to users: for example, caching images and other consistent resources so the user doesn’t spend a lot of bandwidth when reconnecting to a website over a limited connection. Web extensions, by contrast, need persistent communication between the extension and the browser, often driven by user interaction, such as detecting and blocking ad trackers as they load onto the web page in real time. This mismatch has produced a significant list of issues that will have to be addressed to cover many valid use cases. These discussions, however, are happening while web extension developers are being asked to port to MV3 within the next year, without a stable workflow available and with pending issues such as no defined service worker context for web extensions, incomplete WebAssembly support, and a lack of consistent, direct support from the Chrome extensions team itself.
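
As a rough illustration of that mismatch (a sketch only, assuming a hypothetical blocked-request counter, not code from any real extension): an MV2 background page can keep state in memory for the whole browser session, while an MV3 service worker can be shut down between events and must round-trip its state through storage every time it wakes.

```js
// MV2 persistent background page: this counter lives for the browser
// session, so any event handler can read and update it directly.
let blockedCount = 0;
chrome.webRequest.onBeforeRequest.addListener(
  () => { blockedCount++; },
  { urls: ["<all_urls>"] }
);

// MV3 background service worker: the worker may be terminated whenever it
// goes idle, wiping in-memory state, so the count has to be persisted and
// re-read on every event.
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.type === "blocked") {
    chrome.storage.local.get({ blockedCount: 0 }, ({ blockedCount }) => {
      chrome.storage.local.set({ blockedCount: blockedCount + 1 });
      sendResponse({ blockedCount: blockedCount + 1 });
    });
    return true; // keep the message channel open for the async response
  }
});
```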

Privacy SandStorm

Since the announcement of Manifest V3, Google has announced several controversial “Privacy Sandbox” proposals for privacy mechanisms in Chrome. The highest-stakes discussions about these proposals are in the World Wide Web Consortium, or W3C. While technically “anyone” can listen in on the open meetings, only W3C members can propose formal specification documents and hold leadership positions. Being a member has its own overhead of fees and time commitment. This is something a large multinational corporation can easily absorb, but it can be a barrier for user-focused groups. Unless these power dynamics are directly addressed, a participant’s voice gets louder with market share.

Earlier this year, after the many Google forum-based discussions around Manifest V3, a WebExtensions Community Group was formed in the W3C. Community group participation does not require W3C membership, but community groups do not produce standards. Chaired by employees from Google and Apple, this group states that by “specifying the APIs, functionality, and permissions of WebExtensions, we can make it even easier for extension developers to enhance end user experience, while moving them towards APIs that improve performance and prevent abuse.”

But this move for greater democracy would have been more powerful and effective before Google’s unilateral push to impose Manifest V3. This story is disappointingly similar to what occurred with Google’s AMP technology: more democratic discussions and open governance were offered only after AMP had become ubiquitous.

With the planned deprecation of Manifest V2 extensions, the decision has already been made. The rest of the web extensions community is forced to comply, or else deviate and operate in a browser extension ecosystem that doesn’t include Chrome. And that’s harder than it may sound: Chromium, the open-source browser project that Chrome is built on, is also the basis for Microsoft Edge, Opera, Vivaldi, and Brave. Vivaldi, Brave, and Opera have made statements on MV3 and their plans to preserve the ad-blocking and privacy-preserving features of MV2, yet the ripple effects are clear whenever Chrome makes a major change.

What Does A Better MV3 Look Like?

Some very valid concerns and requests have been raised in the W3C WebExtensions Community Group that would help move web extensions back to a better place.

  1. Keep the declarativeNetRequest API optional in Chrome, as it is currently. The API provides a path for extensions with more static and simple features without needing to implement more powerful APIs. Extensions that use the blocking webRequest API, with its added power, can be given extra scrutiny during submission review. 
  2. In an effort to ease the technical issues around Background Service Workers, Mozilla proposed in the W3C group an alternative to Service Workers for web extensions, dubbed “Limited Event Pages”. This approach restores a lot of the standard website APIs and support lost with Background Service Workers. Safari expressed support, but Chrome has expressed opposition, with its reasons not explicitly stated at the time of this post.
  3. Introduce no further regressions in important functionality that MV2 has. For example, the ability to inject scripts before page load is broken in MV3, with fixes still pending (see the sketch below).
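
For example, here is a minimal sketch of the MV2 pattern at issue in item 3 ("early-protections.js" is a hypothetical file name; this is an illustration, not code from any specific extension). The extension dynamically injects a script and asks for it to run at document_start, before the page’s own scripts execute; MV3’s replacement, chrome.scripting.executeScript, did not guarantee equivalent pre-page-load timing at the time of this post.

```js
// MV2: dynamically inject a script as soon as a navigation commits, asking
// the browser to run it at document_start, ahead of the page's own scripts.
// (Requires the "webNavigation" permission and host permissions.)
chrome.webNavigation.onCommitted.addListener((details) => {
  chrome.tabs.executeScript(details.tabId, {
    file: "early-protections.js", // hypothetical script with tracker defenses
    runAt: "document_start",
    frameId: details.frameId,
  });
});
```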

Even though one may see the web extensions API changes and the privacy mechanism proposals as two separate endeavors, together they illustrate the expansive power one company has over the ecosystem of the web, both when it does great things and when it makes bad decisions. The question that must be asked is who carries the burden of enforcing what is fair: the standards organizations that engage with large companies' proposals, or the companies themselves? And who has the most power if one constituency says "no" and another says "yes"?

Community partners, advocates, and smaller companies are permitted to say no and decline to work with companies that frequently enter the room with worrying proposals, but then those companies can claim that silence means consensus when they decide to go forward with a plan. Similar dynamics played out when the W3C grappled with Do Not Track (DNT), where proponents of weaker privacy mechanisms feigned concern over user privacy and choice. So large companies like Google can make decisions, nefarious or widely useful, with little incentive to tell themselves no. In the case of MV3, Google gave the web extensions community room and time to discuss issues. That is the bare minimum standard for making such a big change, so congratulating a powerful entity for making space for other voices would only reinforce the idea that the bare minimum should be the norm in open web politics.

No matter how well-meaning a proposal may be, the reality is that millions of people's experience of the internet is often left up to the ethics of a few people at companies and standards organizations.

Alexis Hancock

Police Aerial Surveillance Endangers Our Ability to Protest


The ACLU of Northern California has concluded a year-long Freedom of Information campaign by uncovering massive spying on Black Lives Matter protests from the air. The California Highway Patrol directed aerial surveillance, mostly done by helicopters, over protests in Berkeley, Oakland, Palo Alto, Placerville, Riverside, Sacramento, San Francisco, and San Luis Obispo. The footage, which you can watch online, includes police zooming in on individual protestors, die-ins, and vigils for victims of police violence.

You can sign the ACLU’s petition opposing this surveillance here.

Dragnet aerial surveillance is often unconstitutional. In summer 2021, the Fourth Circuit ruled that Baltimore’s aerial surveillance program, which surveilled large swaths of the city without a warrant, violated the Fourth Amendment right to privacy for city residents. Police planes or helicopters flying overhead can easily track and trace an individual as they go about their day—before, during, and after a protest. If a government helicopter follows a group of people leaving a protest and returning home or going to a house of worship, there are many facts about these people that can be inferred. 

Not to mention, high-tech political spying makes people vulnerable to retribution and reprisals by the government. Despite their constitutional rights, many people would be chilled and deterred from attending a demonstration protesting against police violence if they knew the police were going to film their face, and potentially identify them and keep a record of their First Amendment activity.

The U.S. government has been spying on protest movements for as long as there have been protest movements. The protests for Black lives in the summer of 2020 were no exception. For over a year, civil rights groups and investigative journalists have been uncovering the diversity of invasive tactics and technologies used by police to surveil protestors and activists exercising their First Amendment rights. Earlier this year, for example, EFF uncovered how the Los Angeles Police Department requested Amazon Ring surveillance doorbell footage of protests in an attempt to find “criminal behavior.” We also discovered that police accessed BID cameras in Union Square to spy on protestors.

Like the surveillance used against water protectors at the Dakota Access Pipeline protests, the Occupy movements across the country, or even the Civil Rights Movement in the mid-twentieth century, it could take years or even decades to uncover all of the surveillance mobilized by the government during the summer of 2020. Fortunately, the ACLU of Northern California has already exposed the CHP's aerial surveillance against the protests for Black lives.

We must act now to protect future protestors from the civil liberties infringements the government conjures on a regular basis. Aerial surveillance of protests must stop.

Matthew Guariglia

Digital Rights Updates with EFFector 33.7


Want the latest news on your digital rights? Then you’ve come to the right place! Version 33, issue 7 of EFFector, our monthly-ish newsletter, is out now! Catch up on the latest EFF news, from how Apple listened and retracted some of its phone-scanning features to how Congress can act on the Facebook leaks, by reading our newsletter or listening to the new audio version below.

LISTEN ON THE INTERNET ARCHIVE

EFFECTOR 33.07 - Victory: Apple will retract some harmful phone-scanning

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Apple’s Self Service Repair Program Must Live Up To Its Promises


The Right to Repair movement got a boost this week, when Apple announced a new program, Self Service Repair, that will let people buy genuine Apple parts and tools to make some of their own repairs to a limited set of Apple products, such as newer iPhones and some Macs. The program will start early next year. Implemented well, Apple’s program could be huge for everyone who supports the right to repair.

This is a major shift for the company, which has fought for years against movements to expand people’s right to repair their Apple products. Right-to-repair advocates have not only pushed the company to move on this issue, but have also gotten regulators and lawmakers to acknowledge the need to protect the right to repair in law. Apple’s announcement is only one illustration of how far right-to-repair advocacy has come; in just the past two years, advocates have won at the ballot box in Massachusetts, received a supportive directive from the Biden Administration, changed policy at Microsoft, and made some gains at the Library of Congress to expand repair permissions.

The Self Service Repair Program could be another feather in that cap. But now that Apple has announced the program, we urge them to roll it out in ways that truly expand their customers’ access and choice.

It’s important that Apple’s program, or any program, does not come with strings attached that make it unworkably difficult or too expensive for a normal person to use. In the past, Apple has done both, as YouTuber and professional repairer Louis Rossmann has pointed out.

Apple’s Independent Repair Provider Program, which was supposed to make manuals and parts more available to independent repairers, did not live up to its early promise. In practice, it saddled those who wanted to participate with restrictive non-disclosure agreements, made it difficult to obtain parts, and made it impossible for independent repair shops to keep parts in stock to respond quickly to repair requests.

The company also ultimately limited the Independent Repair Provider Program to a few parts for a few devices. Apple should not do this again with the Self Service Repair Program. At launch, the forthcoming program is very limited — first to parts for the iPhone 12 and 13, and soon Mac computers with M1 chips. Apple has said its repair program will support the most frequently serviced parts, but they are not the only components that break; it would be great to see this list continue to expand. As it does, Apple should strive to make the program accessible and provide parts in ways that protect device owners from high charges for replacement. For example, if someone drops their phone, they should be able to just buy a screen, and not a whole display assembly.   

We urge Apple not to repeat past mistakes, but instead move forward with a program that truly encourages broader access to parts, manuals, and tools.

Expanding access to repair also means providing support for the independent repair shops who help people who need their products fixed but lack the technical knowledge or confidence to do so. The company should go further to support independent shops—which, after all, are also working toward the goal of keeping Apple’s customers happy.

We’ve worked for years with our fellow advocates, such as the Repair Coalition, iFixit, U.S. PIRG, and countless others, to shift the conversation around the right to repair. We must ensure that the market continues to get better for people who want choice when it comes to fixing their devices—whether that’s protecting individual rights to fix devices, supporting independent repair shops, encouraging more companies to take steps that embrace this right, or winning cases and passing laws to make it crystal clear that people have the right to repair their own devices.

Apple’s announcement shows there has been considerable pressure on the company to change its designs and policy to answer consumer demand for the right to repair. Let’s keep it up and keep them on the right track.

Hayley Tsukayama

EFF Tells Court to Protect Anonymous Speakers, Apply Proper Test Before Unmasking Them In Trademark Commentary Case


Judges cannot minimize the First Amendment rights of anonymous speakers who use an organization’s logo, especially when that use may be intended to send a message to the trademark owner, EFF told a federal appeals court this week.

EFF filed its brief in the U.S. Court of Appeals for the Second Circuit after several anonymous defendants in a case brought by Everytown for Gun Safety Action Fund appealed a district court’s order that mandated the disclosure of their identifying information. Everytown’s lawsuit alleges that the defendants used its trademarked logos in 3D-printed gun part plans, and Everytown sought the order to learn the identities of several online speakers who printed them.

Unmasking can result in serious harm to anonymous speakers, exposing them to harassment and intimidation, which is why the First Amendment offers strong protections for such speech. So courts around the country have applied a now well-established three-step test when parties seek to unmask Doe speakers, to ensure that the litigation process is not being abused to pierce anonymity unnecessarily. But in granting the order in this case, the district court instead applied a looser test that is usually used only in P2P copyright cases. The court then ruled that the online speakers could not rely on the First Amendment here because “anonymity is not protected to the extent that it is used to mask the infringement of intellectual property rights, including trademark rights.”

That ruling cannot stand. As we explained in our friend-of-the-court brief, “Although the right to speak anonymously is not absolute, the constitutional protections it affords to speakers required the district court to pause and meaningfully consider the First Amendment implications of the discovery order sought by Plaintiffs, applying the correct test designed to balance the needs of plaintiffs and defendants in Doe cases such as this one.”  By choosing to apply the wrong test, and even then in the most cursory way, the district court fell far short of its obligations.

To be clear, at this point we aren’t commenting on the merits of Everytown’s trademark claim. Instead, we’re worried about something else: that the court’s ruling, if affirmed by the Second Circuit, will be used in other trademark cases to minimize the interests of speakers who use trademarks as part of their commentary. 

The traditional, robust legal test under the First Amendment requires those seeking to identify anonymous speakers to give them notice and meet a high evidentiary standard, which ensures the plaintiffs have meritorious legal claims and are not misusing courts to intimidate or harass anonymous speakers. If those steps are met, the First Amendment requires courts to weigh several factors, including the nature of the expression at issue and whether there are ways to provide plaintiffs with the information they need short of publicly identifying the anonymous speakers.

The district court instead relied on the lower standard used in cases involving peer-to-peer networks, which offers insufficient protections for anonymous speakers and should never have been used in this case.

If the court had looked instead to trademark precedent, it would have found that several sister courts have applied the more traditional test in trademark cases. And that is as it should be. As we explain:

as courts around the country have recognized, trademark uses may implicate First Amendment interests in myriad ways. Thus, trademark rights must be carefully balanced against constitutional rights, to ensure that trademark rights are not used to impose monopolies on language and intrude on First Amendment values

The Second Circuit granted defendants’ request for an administrative stay of the district court’s order and plans to more fully review the appeal next week. We hope that the court will reverse the district court’s order and require it to seriously consider the competing interests here before issuing any other unmasking orders, in this or any other case. 

Corynne McSherry

Podcast Episode: What Police Get When They Get Your Phone

Episode 101 of EFF’s How to Fix the Internet

If you get pulled over and a police officer asks for your phone, beware. Local police now have sophisticated tools that can download your location and browsing history, texts, contacts, and photos to keep or share forever. Join EFF’s Cindy Cohn and Danny O’Brien as they talk to Upturn’s Harlan Yu about a better way for police to treat you and your data. 

Click below to listen to the episode now, or choose your podcast player:

[Embedded audio player from simplecast.com. Privacy info: this embed will serve content from simplecast.com.]

Today, even small-town police departments have powerful tools that can easily access the most intimate information on your cell phone. 

Recently, Upturn researchers surveyed police departments about the mobile device forensic tools they use, and discovered that these tools are being used by police departments large and small across America. There are far too few rules on what law enforcement can do with the data they download, and not very many policies on how the information should be stored, shared, or destroyed. 

Mobile device forensic tools can access nearly everything—all the data on the phone—even when they’re locked. 

You can also find the Mp3 of this episode on the Internet Archive.

In this episode you’ll learn about:

  • Mobile device forensic tools (MDFTs) that are used by police to download data from your phone, even when it’s locked
  • How court cases such as Riley v. California powerfully protect our digital privacy, but how those protections are evaded when police get verbal consent to search a phone
  • How widespread the use of MDFTs is among law enforcement departments across the country, including small-town police departments investigating minor infractions 
  • The roles that phone manufacturers and mobile device forensic tool vendors can play in protecting user data 
  • How re-envisioning our approaches to phone surveillance helps address issues of systemic targeting of marginalized communities by police agencies
  • The role of warrants in protecting our digital data. 

Harlan Yu is the Executive Director of Upturn, a Washington, D.C.-based organization that advances equity and justice in the design, governance, and use of technology. Harlan has focused on the impact of emerging technologies in policing and the criminal legal system, such as body-worn cameras and mobile device forensic tools, and in particular their disproportionate effects on communities of color. You can find him on Twitter at @harlanyu.

If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

EFF is deeply grateful for the support of the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible.  

Transcript of Episode 101: What Police Get When They Get Your Phone 

Harlan: The fact that all of this information is collected and saved on your phone, right? Your web browsing history, your location history. This is all information that is now kept digitally in ways we'd never had records of before. And so over this past decade, smartphones have become this treasure trove for law enforcement. 

Cindy: That's Harlan Yu. And he's our guest today on How to Fix The Internet. Harlan is the executive director at Upturn where he's working to advance equity and justice in the way technology is used. 

Danny: Harlan's going to talk to us about some of the tools used in policing. This tech makes law enforcement much more powerful when it comes to street level surveillance, and we'll explore some of the dangers in that.

Cindy: Harlan has solutions that will make us all safer and protect our privacy. One of our central themes at EFF is that when you go online or use digital tools, your rights should go with you. Harlan is going to tell us how to get there.

Cindy: I'm Cindy Cohn, EFF's executive director.

Danny: And I'm Danny O'Brien, and this is How to Fix the Internet, a podcast of the Electronic Frontier Foundation.

Cindy: Harlan. Thank you so much for joining us. At Upturn,  you have been working in the space where technology and justice meet, and I'm really excited to dig into some of this with you.

Harlan: Thanks so much for having me, Cindy.

Cindy: So let's start by giving an explanation about what kinds of tools police are using when it comes to our digital phones.

Harlan: Last year, and over the past two years, my team at Upturn and I published and have been doing a lot of research on law enforcement's use of mobile device forensic tools. Now, what a mobile device forensic tool does: it's a device that law enforcement will plug your cell phone into, and it allows law enforcement to extract and copy all of the data, so all of the emails, texts, photos, locations, and contacts, even deleted data, off of your cell phone. And if necessary, it will also circumvent the security features on the phone. 

Harlan: So for example, device-level encryption, in order to do that extraction. Once it has all of the data from your phone, these tools also help law enforcement analyze all of that data in much more efficient ways. So imagine, you know, gigabytes of data on your phone: it can help law enforcement do keyword searches, create social graphs, make maps of all of the places that you've been. You know, so an officer who's not super tech savvy will be able to easily pore over that information. It can help officers automatically detect and filter for photos that have, say, weapons or tattoos, or do text-level classification as well. 

Cindy: Yeah, there were some screenshots in that report that were really pretty stunning. You know, a cute little touchscreen that lets you push a button and find out whether people are talking about drugs. Uh, another little touchscreen that lets you identify who the people are that you talk to the most often.

Cindy: you know, really user-friendly 

Harlan: These tools are made by a range of different vendors, the most popular being, uh, Cellebrite; Grayshift, which makes a tool called GrayKey; and Magnet Forensics. And, you know, there's a whole industry of vendors that make these tools. And what our report did was we submitted about 110 public records requests to local and state law enforcement agencies around the country, asking about what tools they've purchased, how they're using them, and whether there are any policies in place that constrain their use. And what we found was that almost every major law enforcement agency across the United States already has these tools, including all 50 of the largest police departments in the country and state law enforcement agencies in all 50 states and the District of Columbia. 

Cindy: Wow, all across the country. How much are police using it? 

Harlan: We found through our public records requests that law enforcement have been doing, you know, hundreds of thousands of cell phone searches, and extractions, since 2015. This is not just limited to, you know, the major law enforcement agencies that have the resources to purchase these tools. We also found that many smaller agencies can afford them. So cities and towns with under, you know, tens of thousands of residents with maybe a dozen or two dozen officers, places like Shaker Heights in Ohio or Lompoc in California, or Walla Walla, Washington. The breadth and availability of these tools, was pretty shocking to us.

Cindy: You know, people might think that this is something that the FBI can do in national security cases or that, you know, we can do in other situations, in which we've got very serious crimes by very dangerous people. But the thing that was stunning to me about the report you guys did was just how easy it is to do this, how often and how mundane the crimes are that are being, uh, that are being identified through this. Can you give me a couple more examples or talk about that a little more? 

Harlan: Yeah, that's exactly right. I think one of the main takeaways from our report is just how pervasive this tool is, even for the most common offenses. You know, I think there's this narrative, especially at the national level, around encryption back doors, right. And the way that story gets told is that law enforcement will use these tools in high profile cases, cases like terrorism and child exploitation. You know, they even use a term around exceptional or extraordinary access, which kind of indicates that access will be rare. I think what our report does is that it challenges this prevailing wisdom that law enforcement is going dark. 

What law enforcement is saying is far from the entire story. As our report points out, these kinds of tools, and the law enforcement interest in accessing data on people's cell phones, come into play not only in cases involving major harm. We documented in our report how, across the country, these tools are being used to investigate cases including graffiti, shoplifting, vandalism, traffic crashes, parole violations, petty theft, public intoxication, you know, the full gamut of drug related offenses, you name it. These tools are being used every day on the streets in the United States right now.

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. 

 

Danny: So you say that, that these devices not only can scan for data, but also make copies. Is there any kind of understanding we have about how long those copies are kept?

Harlan: That is a really important issue. One thing that we asked law enforcement agencies to provide to us through our public records requests was whether they have any policies in place. Just about half indicated they had no policies at all, and among those that did, only about nine had policies that we would consider detailed enough to provide any meaningful guidance to constrain what officers do.

So I think, in large part, law enforcement agencies don't have specific policies in place around the use of these tools, and that includes, you know, how long a law enforcement agency can retain and save that data. Now, maybe I'll just raise here a recent case in Wisconsin, State v. Burch, which is a case that EFF, the ACLU, and EPIC recently filed an amicus brief in. It was a case in Wisconsin where the suspect, Burch, was involved in a hit and run. So the police verbally asked Burch whether or not they could see his text messages. The suspect said yes, and the police had Mr. Burch sign kind of a vague consent form to search the phone. Right.

And rather than just searching and looking at text messages based on the vague consent form, law enforcement did a full forensic extraction of the phone and copied all of the data. Ultimately they found no evidence in that particular case, but then they stored that data. Now, months later, the Brown County Sheriff's Office was investigating a homicide and they suspected that Mr. Burch was somehow involved.

And so based on the extraction that a different police department did and retaining that data, Brown County Sheriff's office then was able to get a copy of that extraction and searched the phone again and found that the suspect viewed news about the murder and there was location data on the phone that indicated that he might be around the location.

And in any case, he was then arrested and charged with the homicide, an entirely different case from the first extraction. So I think that case illustrates the dangers of, well, not only consent searches, which we can talk about, but the dangers of indefinite retention and the use of these tools overall.

Cindy: Oh, it's just chilling, right? I mean, essentially the police have a time machine. Right. And if they get your information at any, any point in time, then they can just go back to it later and look at it. And I think it's important to recognize that the cases that we hear about, like this case in Wisconsin, are the cases in which they found something that could be incriminating, but that says nothing about all the stories underneath the surface, where they didn't find anything, but they still did a time machine search on people.

Cindy: I want to talk about consent in a second, but I think one of the things that your report really points out is that, given the racial problems that we have in our law enforcement right now, this has very serious implications for equity, for who's going to get caught up in these kinds of broad searches and time machine searches. And I wonder if you want to talk a little bit more about that.

Harlan: Overall, we see law enforcement adoption of these tools as a dangerous expansion of their investigatory power. Given how widely and routinely these tools are being used at the local level, and given also our history of racist and discriminatory policing practices across the country that continues to this day, it's highly likely that these tools disparately affect and are used against communities of color, the communities that are already being over-policed.

Danny: What are the kinds of things they can get from these searches?

Harlan: Mobile device forensic tools can access nearly everything, all the data on the phone, sometimes even when it's locked, right. You know, in creating mobile phone operating systems, designers have to balance security with user convenience, right? So even when your phone's locked, it'd be really nice to get notifications, to know when there's an email or an event on your calendar. 

Moreover, many Americans, especially people of color and people of lower incomes rely solely on their cell phones to connect to the internet.

Harlan: And so over this past decade, smartphones have become this treasure trove for law enforcement, where, you know, the information that we store on our phones arguably contains much more sensitive information than even the physical artifacts that are in our homes, which have traditionally been perhaps the most sacred place in terms of constitutional protection from intrusion by the government.

Cindy: Now I want to talk a little bit about how the courts have been addressing this situation. We know, in a great victory for privacy, we won a case called Riley v. California in the Supreme Court a few years ago that basically said that you can't search somebody’s phone incident to arrest without a warrant. You need to go get a warrant.

Harlan: Law enforcement is required to get a warrant to perform these kinds of phone searches, but there are many exceptions to this warrant requirement, one of them being the consent exception. This is a really common practice that we're seeing on the ground, right? When there's a consent search, those searches are then not subject to the constraints and oversight that warrants typically provide.

Now, that's not to say that warrants actually provide that many constraints in reality. And we can talk about that. We see that more as a speed bump, but even those basic legal constraints are not in place. And so this is one of the reasons why one of the recommendations in our report is to ban the use of consent searches of mobile phones, because this idea of a consent search in the policing context is essentially a legal fiction. Several states have banned the use of consent searches in traffic stop situations: New Jersey in 2002, Minnesota in 2003. 

Earlier this year, the DC Police Reform Commission made a recommendation to the DC Council that it prohibit all consent searches, not just of mobile phones, but as a blanket prohibition across the board. And if the DC Council takes up this recommendation, as far as I know it would be the first full ban of consent searches anywhere in the country.

And so that's where Upturn believes that, the law should go. 

Cindy: Yeah. I just think that the idea of even calling them consent searches is a bit of a lie, right? You know, the, the, you know, either let us search your phone or let us search your house, or we're going to take you down, you know, and book you and hold you for how many hours they possibly can, like that isn't a consent, right?

I think that one of the things that we're doing here is we're trying to be honest about a situation in which consent is actually the wrong word for what's going on here, you know. I consent to lots of things because I have free will. These are not situations like that. 

Danny: And I don't think that people would necessarily understand what they were consenting to. I mean, this has been eye-opening for me and I, I feel like I track this kind of thing, but if we're talking about banning consent searches using this technology, do you think the technology as a whole should be banned, do you think police should have access to these tools at all?

Harlan: I think the goal needs to be to reduce the use of these tools and the data available to law enforcement.

Danny: So, would that be a question of limiting the use of these tools to, sort of, serious crimes, or putting some constraints on how the data is used or how long it is stored for?

Harlan: I mean, I, I would worry about even legitimizing the use of these tools in certain cases, right? Again, when there's a charge, it's just the accusation that a person committed a particular crime. And I think no matter what the charge is, I think people should have the same rights. And so I don't necessarily think that we should relax the rules for certain kinds of charges or not.

Cindy: It's, it's a big step to deny law enforcement a tool, and so what's the other side of that?

Harlan: Well, I think we can look toward all of the costs that our system of policing has on our society, right? When people get roped up into the criminal legal system in the United States, it's extremely hard to then, you know, with a criminal record, get a job, have other economic opportunities. To the extent that these tools are, you know, making law enforcement more powerful in their investigative powers, I'm just not sure that that's the direction that our society needs to go. Right. The incarceration rate in the United States is already, you know, far outside the norm. 

Danny: I think the way I tend to think about it is that we have this protection, as you say, in our homes and possessions, but when you talk about mobile phones, you're actually getting much closer to people's internal thought processes, and it feels more like either an interrogation or, in some cases when you can go back and forth like this, a kind of mind reading exercise. And so if anything, these very intimate devices should have even more protections than we give to our closest living environments.

Harlan: One commentator said the use of these tools in particular creates a window into the soul. Right? These searches are incredibly invasive. They're incredibly broad. And yeah, as you're saying, you know, traditionally the home has been the most sacred place. There's an argument today that our phones should be just as sacred, because they have the potential to reveal much more about us than any physical search. 

Cindy: We talked about the Fourth Amendment briefly, but it plays a role here too, right? 

 

Harlan: The Fourth Amendment requires warrants to describe with particularity the places to be searched and the things to be seized. But in this context, oftentimes law enforcement agents also rely on the plain view exception, which effectively allows law enforcement to do anything during these searches, right?

Harlan: This is a problem that legal scholars have wrestled with, and EFF has wrestled with, for decades, where for physical searches, the plain view exception allows law enforcement to seize evidence in plain view from any place that they're lawfully permitted to be, if the incriminating character of the evidence is immediately obvious.

But for digital searches, you know, this standard makes no sense, right? This idea that digital evidence can exist in quote unquote plain view, in the way that physical evidence can, considering how the software can display and sort the seized data, I think is just incoherent. The language can vary from warrant to warrant, but they all authorize essentially an unlimited and unrestricted search of the cell phone. So I think there's a question here too, even in the search warrant context, of whether these warrants are sufficiently particular. I think in many cases, the answer has got to be clearly no.

Danny: So these tools to analyze these phones are made by companies all around the world. Do you think they're used all around the world?

Harlan: Yeah, I think, human rights activists have been seeing this happen all around the world, especially for journalists who live in authoritarian countries, where, yeah, we're seeing, you know, lots of governments, purchasing these tools and using them to limit freedom of speech and freedom of expression in many other places, in addition to here.

Cindy: So let's switch gears and talk a little bit about what the world looks like if we get this right. Unlike a lot of difficult problems, this is one where you've really clearly articulated a way that we can fix it. So let's say that we ban law enforcement use of these devices, or we ban evidence collected through the use of these devices from being admissible, some kind of extension of the exclusionary rule. How's this going to feel and work for those of us who have phones, which is, by the way, all of us?

Harlan: I think, you know, people will probably need to worry a little bit less, or less frequently, about the ways that powerful institutions like the police can have that window into your soul, that inside look at the things that you're thinking, the things that you're searching online, the things that you're curious about, the places that you're going, right, with location data being stored on the phone. Whether you're going to a doctor's office or a church or another religious institution, all sorts of sensitive information will at least be accessed less frequently by law enforcement, in a way that hopefully will provide a greater sense of freedom and liberation, especially in the society that we live in here in the United States.

Cindy: The freedom and the space of privacy that we get is not just for the individual whose phone is seized. There's a broader effect here, not just for the people who, you know, find themselves pulled over by the cops. It's going to be for all the people who ever talk to, interact with, learn from, or read about the people who get pulled over by the cops.

Harlan: Yeah, that's absolutely right. The photos on my phone have some pictures of me, but they are also of my family, also of my community. And my text messages also include obviously sensitive data that other people have provided to me. The contacts in my phone, right? Just my social graph. 

Danny: So one of the things that I think can make people feel a little bit hopeful, in what can feel like a very oppressive story, is what they can do to change this. What is the role of individuals in transforming this story? 

Harlan: I'm not sure that individual decisions are really gonna get us to the future where we want to be. Right. We can't tell individuals to buy a higher end cell phone if they don't have the resources to do so. Right. Or to have every individual, you know, configure their phones in just the right way. I'm not sure that that is a realistic way to get to where we want to be. I think, you know, the better approach is to look more systemically at the problems with our law, the problems in law enforcement, and the problems where, you know, we can fix it for everyone at the systemic level. And I think that those are the areas of opportunity on which we should focus.

Danny: In this positive vision that we're presenting, is there a role for the phone companies themselves? Is there some capacity they should be playing, even in a sort of utopia where the laws and policies and the courts support protecting your privacy?

Harlan: Yeah. The phone manufacturers have essentially been playing a cat and mouse game with law enforcement for decades, right. Uh, these tools that are being created by Cellebrite and Grayshift, you know, they can break into the latest, you know, iPhones and the highest-end Samsung Android phones, with rare exception, right? And so there's this idea, too, that even in the case of a locked phone that law enforcement is having trouble getting access to, even if, you know, you just turn on the phone and there is device encryption, there's actually a significant amount of information on iPhones that remains unencrypted outside of the encrypted portion of the phone, what technical folks call "before first unlock." After the first unlock, once the user unlocks the phone and then it gets locked again, even more unencrypted data becomes available. Right. 

Danny: Why is that? 

Harlan: That's a design decision that most manufacturers make to provide users with, you know, convenient features. This is just what they believe is the right balance. And so, yeah, I think there's a role here for the phone manufacturers to continue to address vulnerabilities and to make it more difficult for law enforcement to get access. 

I think there's also a potential role here to be played by the vendors of the mobile device forensic tools. Right? I think one thing that we suggest in our report is that the vendors of these tools ought to maintain an audit log for every search, right, that details the precise steps that a law enforcement officer took when extracting and analyzing the phone. The goal here would be to be better equipped in cases to push back and to challenge the scope of these searches, if we could, for instance, play back, using say automatic screen reading technology, exactly what an examiner looked at or the process the examiner followed in doing the search.

This would give the judge and the defense lawyers a chance to ask questions, and give defense lawyers a better chance, potentially, of suppressing over-seized information.

Cindy: What does public safety look like in this world?

Harlan: Public safety is not the same as policing. Right? I think public safety means communities and individuals who have economic safety, who have economic opportunity, have stable housing, have job opportunities, have a good education. Right? I think we need to, you know, as many Black feminists have laid out in the vision around defunding the police, right? The idea here isn’t just to tear down the police, but the process of what we have to build up. 

Cindy: I really agree with you, Harlan. Getting this right isn’t about whether we give or take away a particularly sophisticated law enforcement tool. It’s about shoring up the systems in communities that are too often unfairly targeted by surveillance. At EFF we say we can’t surveil ourselves to safety, and I think your work really demonstrates that. 

Harlan: The idea here isn't just to tear down the police, but really the process of what we need to build up to support people and their families and their communities, which is things that don't look like surveillance tools and law enforcement as we have it today, but the absence of that and the creation and the existence of other structures that are supportive of people's livelihoods and ability to thrive and to be free.

Cindy: Oh, Harlan, this has been so interesting, and we really enjoyed talking with you. And the work that you guys do at Upturn is just fabulous, right? Really bringing a deep tech lens to the tools that law enforcement is using, and recognizing how that's going to impact all of us in society, but especially the most vulnerable.

Cindy: Thank you so much for taking some time to talk with us. And, uh, let's, let's hope we move towards this, this vision of this better world together.

Danny: Thanks Harlan, it’s been great. 

Harlan: Thanks so much for having me. 

Cindy: Well, that was just terrific. You know, one of the things that struck me is that we've spent a lot of time on this podcast, and of course at EFF, fighting for the ability for people to have strong encryption, especially on their devices. One of the things that Upturn's research demonstrated is that's just a tiny little piece of things. In general, our phones are broadly available, and everything that's on our phones, and even stuff in the cloud that's accessible through our phones, is widely available to law enforcement. So it really strikes me as funny that we're focused on this tiny little piece, where law enforcement might have some small problems getting access to stuff, when in the gigantic rest of it they already have free access to everything that's on our phones. 

Danny: Well, I think that there's always this framing that the world is going dark for law enforcement because of encryption. And no one talks about the fact that it's lighting up like a huge scanning display when it comes to the devices themselves and every technologist you talk to says, yeah, all bets are off once you hand a device to someone else because they can undo whatever protections that you might have on it. I think the thing that really struck me about this, though, that I hadn't realized is just how cheap and available this is. I did have it in my head that this was an FBI thing, and now we're seeing it used by really quite small local town police departments and for very low level crime too.

Cindy: Yeah, it's eye-opening. I think the other thing that's eye-opening about this work is how law enforcement is using consent, or at least the fiction of consent, to get around a very powerful Supreme Court protection that we got in a case called Riley v. California in 2014, which bans searching a phone incident to arrest without a warrant. And the cops are simply walking right around that by getting, you know, phony consent from people. 

Danny: I've been in that situation going through immigration, where I'm asked to hand over my phone, and it's very hard to say no, because you just kind of assume they're going to flick through the last few entries, and that's not what happens in these situations. 

Now Harlan wants to ban these consent searches completely. Do you agree with that? 

Cindy: Yeah, I really do, and the reason I do is because it's so phony. I mean, the idea that these are consensual doesn't pass the giggle test, right? The way that power works in these situations, and the pressure that cops put on you, to call this consent, I think it's really not true. And so I don't think we should embrace legal fictions, and the legal fiction that these searches are consensual is one that we just need to do away with, because they are not. 

Danny: So while we're talking about banning consent searches, one of the more positive things I got out of this discussion is that there's no implication that we should be banning phones or forcing people to be more cautious in how they use them. Harlan said these tools essentially create a window into the soul. But I think they also enhance our lives. I mean, they're not just a window into the soul. They actually give us ways to remember things that we would forget. They give us instant access to the world's knowledge. They make sure that I will never get lost again. And all of these things are things that we should be able to preserve in a free society. Despite the fact that they are so intimate and so revealing, I think that just means that they have to have the same protections that we would give to the thoughts in our head. 

Cindy: I think this is one of the ways that we need to make sure that we fix things. We need to fix things so that people can still have their devices. They can still have their tools. They can still outsource their memory and part of their brain to a device that they carry around in their pockets all the time. And  that is protected. The answer here isn't to limit what we can do with our devices. The answer is to lift up the protections that we get from law enforcement in society over the fact that we want to use these tools. 

Danny: Thank you for joining us on How to Fix the Internet. Check out our show notes, which will include a link to Upturn’s report. You can also get legal resources there, transcripts of the episode, and much more. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.

And the music is by Nat Keefe and Reed Mathis of BeatMower. Thanks for being with us today. And thanks again to our guest Harlan Yu from Upturn. I'm Danny O'Brien. 

Cindy: And I'm Cindy Cohn.



Related Cases: Riley v. California and United States v. Wurie
Jason Kelley

EFF’s New Series of How to Fix the Internet Podcast Tackles Toughest Issues in Tech

Episodes Feature Innovators Seeking Creative Solutions to Build a Better Online World

San Francisco—Troubled when Twitter takes down posts of people or organizations you follow? Concerned about protecting yourself and your community from surveillance? Electronic Frontier Foundation (EFF) has got you, with the launch today of the first season of the How to Fix the Internet podcast, featuring conversations that can plot a pathway out of today’s tech dystopias.

Hosted by EFF Executive Director Cindy Cohn and Special Advisor Danny O’Brien, How to Fix the Internet wades into topics that are top of mind among internet users and builders—ways out of the big tech lock-in, protecting our connected devices, and keeping texts and emails safe from prying eyes, just to name a few.

This season’s episodes will feature guests like comedian Marc Maron, who’ll talk about how, with EFF’s help, he marshaled the podcast community to fend off a troll claiming to own the patent for podcasting. Cohn will also host cybersecurity expert Tarah Wheeler, who’ll discuss how companies can better protect our data from attacks by giving the researchers who report vulnerabilities in their security networks a hearty thank you instead of slapping them with a lawsuit for exposing holes in their information systems.

“We piloted How to Fix the Internet last year, and it took off, because our conversations go beyond just complaining about the problems in our digital lives to explore how people are envisioning and building a better online world,” said Cohn. “We can’t create a better world unless we can envision it, and these conversations are needed to help us see how the world will look when technology better supports, protects, and empowers users.”

In today’s episode, Harlan Yu, executive director of Upturn, a nonprofit advocating justice in technology, will talk about the increasingly sophisticated tools used by police departments across the country to access the sensitive data on phones, even when they are locked. Yu will explain how straightforward changes in the law and technology can create a world where we can walk around with greater security in the tremendous amount of sensitive data we keep on our phones.

The new season of How to Fix the Internet is made possible with the support of the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.

“We are thrilled to partner with EFF to support the launch of this major new podcast about the challenges posed by Big Tech and what consumers can do to protect their online privacy and security,” said Doron Weber, Vice President and Program Director at the Alfred P. Sloan Foundation. “How to Fix the Internet joins the nationwide Sloan radio effort, which supports shows such as Science Friday, Planet Money, and Radiolab, as well as Sloan programs to protect consumer privacy and promote the dissemination of credible information online with Wikipedia, Consumer Reports, and the Digital Public Library of America.”

To listen to today’s podcast: https://www.howtofixtheinternet.org

ABOUT THE ALFRED P. SLOAN FOUNDATION

The Alfred P. Sloan Foundation is a New York based, philanthropic, not-for-profit institution that makes grants in three areas: research in science, technology, and economics; quality and diversity of scientific institutions; and public engagement with science. Sloan’s program in Public Understanding of Science and Technology supports books, radio, film, television, theater and new media to reach a wide, non-specialized audience. Sloan's program in Universal Access to Knowledge aims to harness advances in digital information technology to facilitate the openness and accessibility of all knowledge in the digital age for the widest public benefit under fair and secure conditions. For more information, visit Sloan.org or follow the Foundation on Twitter and Facebook at @SloanPublic.

Contact: Jason Kelley, Associate Director of Digital Strategy, jason@eff.org
Karen Gullo

EFF’s How to Fix the Internet Podcast Offers Optimistic Solutions to Tech Dystopias

2 months ago

It seems like everywhere we turn we see dystopian stories about technology’s impact on our lives and our futures—from tracking-based surveillance capitalism to street level government surveillance to the dominance of a few large platforms choking innovation to the growing pressure by authoritarian governments to control what we see and say—the landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building a better future. That’s where our new podcast comes in.

Click below to listen to episodes now, or choose your podcast player:

EFF's How to Fix the Internet podcast offers a better way forward. Through curious conversations with some of the leading minds in law and technology, we explore creative solutions to some of today’s biggest tech challenges.

After tens of thousands of listeners tuned in for our pilot mini-series last year, we are continuing the conversation by launching a full season. Listen today to become deeply informed on vital technology issues and join the movement working to build a better technological future. 

EFF is deeply grateful for the support of the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible.  

“We are proud to partner with EFF to support this new podcast,” said Doron Weber, Vice President and Program Director at the Alfred P. Sloan Foundation. “How to Fix the Internet will bring an unprecedented level of expert knowledge and practical advice to one of the most complex and urgent problems of our technological age.”

With hosts Cindy Cohn and Danny O’Brien, this season we will explore ways that people are building a better world by fighting back against software patent trolls, empowering their communities to stand up for their privacy and security, supporting real security in our networks, phones and devices, creating social media communities that thrive, and safeguarding financial privacy in a world of digitized payments. 

We piloted the concept of an EFF podcast last year in a 6-episode mini-series of the same name. Not only was it a success, garnering tens of thousands of listens, but it also started a conversation. At the end of each episode, we asked how you would fix the internet, and we heard directly from our listeners about what they would do to build a better future. From technical solutions to policy fixes, people across the globe sent in thoughtful responses to what we discussed as well as their own ideas for how they’d like to see tomorrow’s Internet be more vibrant, equitable, decentralized, and free. As we kick off this season, we want to keep the invitation open and the conversation going: send your ideas and suggestions for improving the digital world to podcasts@eff.org.

Our goal is to start to imagine how the world will look when technology better supports user power and choices. This means examining how the modern Internet is often rooted in power imbalances, insecurity, and surveillance advertising in ways that have huge consequences for our ability to access information, hold private conversations, and connect with one another. But rather than reiterating everything that’s wrong on the Internet today, we also turn our attention to the solutions—both practical and idealistic—that can help to offer a better path for technology users.

 We also recognize that there is no one perfect fix for technology’s problems—in part because there’s no agreement on what those problems are, and also because there is not just one problem. Through this podcast, we seek to explore a range of different solutions rather than offer any one policy solution. We believe there are a plethora of ways to get it right.

We’re excited to offer this podcast conversation and to get us all thinking together about how we build a better future. Please join us—the podcast is available in your podcast player of choice today.  

  
  

ABOUT THE ALFRED P. SLOAN FOUNDATION

The Alfred P. Sloan Foundation is a New York based, philanthropic, not-for-profit institution that makes grants in three areas: research in science, technology, and economics; quality and diversity of scientific institutions; and public engagement with science. Sloan’s program in Public Understanding of Science and Technology supports books, radio, film, television, theater and new media to reach a wide, non-specialized audience. For more information, visit Sloan.org or follow the Foundation on Twitter and Facebook at @SloanPublic.

rainey Reitman

Federal Agencies Need to Be Staffed to Advance Broadband and Tech Competition

2 months ago

In the U.S., we need better internet. We need oversight over Big Tech, ISPs, and other large companies. We need the federal agencies with the powers to advance competition, protect privacy, and empower consumers to be fully staffed and working. New infrastructure legislation aimed at ending the digital divide gives new responsibilities to the Federal Communications Commission (FCC) and the National Telecommunications and Information Administration (NTIA), and Congress relies on the Federal Trade Commission (FTC) to rein in Big Tech and others. That means we need those agencies staffed—now, more than ever.

The new infrastructure package gives the FCC and the NTIA a lot of new work to do, including deciding how to allocate a large amount of funds to update our lagging internet infrastructure. In the meantime, we are relying on the FTC to police bad acts on the part of technology companies of all levels. When the FCC under Ajit Pai repealed net neutrality protections, they and the ISPs claimed that the FTC could police any abuse—even though the FTC already has big jobs, like safeguarding user privacy and advancing tech sector competition.

However, none of these agencies can do their jobs unless they are fully staffed. And that means that the Senate must confirm President Biden’s nominees. The consequences of sitting back are significant. These agencies have been given once-in-a-generation responsibilities. Senate leadership should commit itself to fully staffing each of these agencies before they leave for the holidays this December, so that the work on behalf of the public can begin. 

The Senate must act on four critical nominations at these agencies by the end of the year, or the agenda for a better internet will fall by the wayside. Jessica Rosenworcel should be confirmed to another term on the FCC, as its chair. Gigi Sohn should be confirmed to a term on the FCC as well. At the FTC, the Senate should confirm Prof. Alvaro Bedoya. And the NTIA's work should be supported by confirming Biden's nominee, Alan Davidson. 

The Incoming FCC Chair Wants to Fix Internet Access for Children, But Can’t Without a Full Commission

In the middle of the pandemic, children—predominantly those from low-income neighborhoods—were forced onto overpriced and low-quality internet plans to do remote schooling from home. To address this, schools were forced to give wireless ISPs millions of public dollars to rent out mobile hotspots. This provided a “better than nothing” alternative to kids camping out in fast-food parking lots to do homework on wifi. In many places in the United States, this is the product of intentionally discriminatory deployment choices that happened because these companies were unregulated. While too many in Washington DC were busy praising the ISPs during the pandemic, FCC then-Commissioner Jessica Rosenworcel made clear we have to do better. The FCC has been given the power to address “digital discrimination” under the new infrastructure law. EFF and many others support an outright ban on digital redlining in order to prevent deployment practices that target 21st century access to high-income neighborhoods while forever excluding low-income areas.

However, both Chair Rosenworcel and President Biden’s FCC nominee Gigi Sohn need to be confirmed by the Senate by the end of the year to provide the Chair with a working majority at the commission. (Disclosure: Sohn is a member of the EFF Board of Directors.) The FCC was inactive in 2021 due to its lack of a working majority, despite all the hardship the public endured. It should be clear that not confirming both Rosenworcel and Sohn would be akin to doing nothing, despite the new infrastructure law Congress passed.

No One Is Addressing Big Tech if the Federal Trade Commission Remains Understaffed

The Chair of the FTC, Lina Khan, wants to improve the competitive landscape in the technology sector. She has written groundbreaking analysis on how antitrust and competition law should be updated. However, her goals remain at risk if the agency is deadlocked with four sitting commissioners, rather than five.

Professor Alvaro Bedoya, a major critic of Big Tech’s corporate surveillance practices, was nominated by the president to fully staff the FTC following the departure of Commissioner Rohit Chopra. Long known as a privacy hawk, Professor Bedoya is generally inclined to side with Khan on the importance of regulating Big Tech, particularly when it comes to matters involving the corporate surveillance business model. That's why EFF and many civil rights and privacy organizations support his confirmation to the FTC. In essence, Professor Bedoya’s confirmation as an FTC Commissioner will provide Chair Khan with the working majority needed to begin rebooting our competition policies and to address consumer privacy when dealing with Big Tech. Preventing or stalling his nomination serves only one purpose—to block those efforts. 

The NTIA Has Been Given a Massive Job to Close the Digital Divide but Has No Leader

Biden’s NTIA nominee Alan Davidson will be given one of the biggest jobs among the new nominees: spending $65 billion to build long-term infrastructure for all Americans. This will be a multi-year effort with a range of complicated issues, guided by an agency that has never had a mission of this magnitude. The NTIA was in charge back in 2009, when Congress passed the American Recovery and Reinvestment Act, tasking the agency with implementing a much smaller $4 billion grant program.

All of this complicated work remains rudderless until the Senate confirms Davidson as NTIA Administrator to do the job. The absence of an Administrator will greatly hamstring the Biden Administration and the states’ efforts to close the digital divide. 

There is a lot of work to be done, all of it important and necessary. And it can’t be done until the Senate confirms these four nominees.

Ernesto Falcon

After Facebook Leaks, Here Is What Should Come Next

2 months ago

Every year or so, a new Facebook scandal emerges. These blowups follow a fairly standard pattern, at least in the U.S. First, new information is revealed that the company misled users about an element of the platform—data sharing and data privacy, extremist content, ad revenue, responses to abuse—the list goes on. Next, following a painful news cycle for the company, Mark Zuckerberg puts on a sobering presentation for Congress about the value that Facebook provides to its users, and the work that they’ve already done to resolve the issue. Finally, there is finger-wagging, political jockeying, and within a month or two, a curious thing happens: Congress does nothing.

It’s not for lack of trying, of course—much like Facebook, Congress is a many-headed beast, and its members rarely agree on the specific problems besetting American life, let alone the solutions. But this year may be different.

For the last month, Facebook has been at the center of a lengthy, damaging news cycle brought on by the release of thousands of pages of leaked documents, sent to both Congress and news outlets by former Facebook data scientist Frances Haugen. The documents show the company struggling internally with the negative impacts of both Facebook and its former-rival, now-partner platform, Instagram. (Facebook’s attempt to rebrand as Meta should not distract from the takeaways of these documents, so we will continue to call the company Facebook here.)

In addition to internal research and draft presentations released several weeks ago, thousands of new documents were released last week, including memos, chats, and emails. These documents paint a picture of a company that is seriously grappling with (and often failing in) its responsibility as the largest social media platform. In no particular order, the documents show that:

Many of the problems highlighted by these documents are not particularly new. People looking in at the black box of Facebook’s decision-making have come to similar conclusions in several areas; those conclusions have simply now been proven. Regardless, we may finally be at a tipping point.

When Mark Zuckerberg went in front of Congress to address his company’s role in the Cambridge Analytica scandal over three years ago, America’s lawmakers seemed to have trouble agreeing on basic things like how the company’s business model worked, not to mention the underlying causes of its issues or how to fix them. But since then, policymakers and politicians have had time to educate themselves. Several more hearings addressing the problems with Big Tech writ large, and with Facebook in particular, have helped the government develop a better shared understanding of how the behemoth operates; as a result, several pieces of legislation have been proposed to rein it in.

Now, the Facebook Papers have once again thrust the company into the center of public discourse, and the scale of the company’s problems have captured the attention of both news outlets and Congress. That’s good—it’s high time to turn public outrage into meaningful action that will rein in the company.

But it’s equally important that the solutions be tailored, carefully, to solve the actual issues that need to be addressed. No one would be happy with legislation that ends up benefitting Facebook while making it more difficult for competing platforms to coexist. For example, Facebook has been heavily promoting changes to Section 230 that would, by and large, harm small platforms while helping the behemoth.

Here’s where EFF believes Congress and the U.S. government could make a serious impact:

Break Down the Walls

Much of the damage Facebook does is a function of its size. Other social media sites that aren’t attempting to scale across the entire planet run into fewer localization problems, are able to be more thoughtful about content moderation, and have, frankly, a smaller impact on the world. We need more options. Interoperability will help us get there.

Interoperability is the simple idea that new services should be able to plug into dominant ones. An interoperable Facebook would mean that you wouldn’t have to choose between leaving Facebook and continuing to socialize with the friends, communities and customers you have there. Today, if you want to leave Facebook, you need to leave your social connections behind as well: that means no more DMs from your friend, no more access to your sibling’s photos, and no more event invitations from your co-workers. In order for a new social network to get traction, whole social groups have to decide to switch at the same time - a virtually insurmountable barrier. But if Facebook were to support rich interoperability, users on alternative services could communicate with users on Facebook. Leaving Facebook wouldn’t mean leaving your personal network. You could choose a service - run by a rival, a startup, a co-op, a nonprofit, or just some friends - and it would let you continue to connect with content and people on Facebook, while enforcing its own moderation and privacy policies.

Critics often argue that in an interoperable world, Facebook would have less power to deny bad actors access to our data, and thus defend us from creeps like Cambridge Analytica. But Facebook has already failed to defend us from them. When Facebook does take action against third-party spying on its platform, it’s only because that happens to be in its interests: either as a way to quell massive public outcry, or as a convenient excuse to undermine legitimate competition. Meanwhile, Facebook continues to make billions from its own exploitation of our data. Instead of putting our trust in corporate privacy policies, we’d need a democratically accountable privacy law, with a private right of action. And any new policies which promote interoperability should come with built-in safeguards against the abuse of user data.

Interoperability isn’t an alternative to demanding better of Facebook - better moderation, more transparency, better privacy rules - rather, it’s an immediate, tangible way of helping Facebook’s users escape from its walled garden right now. Not only does that make those users’ lives better - it also makes it more likely that Facebook will obey whatever rules come next, not just because those are the rules, but because when they break the rules, their users can easily leave Facebook.

Facebook knows this. It’s been waging a “secret war on switching costs” for years now. Legislation like the ACCESS Act, which would force platforms like Facebook to open up, is a positive step toward a more interoperable future. If a user wants to view Facebook through a third-party app that allows for better searching or more privacy, they ought to be able to do so. If they want to take their data to platforms that have better privacy protections, without leaving their friends and social connections behind, they ought to be able to do that too.

Pass a Baseline, Strong Privacy Law

Users deserve meaningful controls over how the data they provide to companies is collected, used, and shared. Facebook and other tech companies too often choose their profits over your privacy, opting to collect as much as possible while denying users intuitive control over their data. In many ways this problem underlies the rest of Facebook’s harms. Facebook’s core business model depends on collecting as much information about users as possible, then using that data to target ads - and target competitors. Meanwhile, Facebook (and Google) have created an ecosystem where other companies - from competing advertisers to independent publishers - feel as if they have no choice but to spy on their own users, or help Facebook do so, in order to squeak out revenue in the shadow of the monopolists.

Stronger baseline federal privacy laws would help steer companies like Facebook away from collecting so much of our data. They would also level the playing field, so that Facebook and Google cannot use their unrivaled access to our information as a competitive advantage. A strong privacy law should require real opt-in consent to collect personal data and prevent companies from re-using that data for secondary purposes. To let users enforce their rights, it must include a private cause of action that allows users to take companies to court if they break the law. This would tip the balance of power away from the monopolists and back towards users. Ultimately, a well-structured baseline could put a big dent in the surveillance business model that not only powers Facebook, but enables so many of the worst harms of the tech ecosystem as well.

Break Up the Tech

Facebook’s broken system is fueled by a growth-at-any-cost model, as indicated by some of the testimony Haugen delivered to Congress. The number of Facebook users and the increasing depth of the data it gathers about them are Facebook’s biggest selling points. In other words, Facebook’s badness is inextricably tied to its bigness.

We’re pleased to see antitrust cases against Facebook. Requiring Facebook to divest Instagram, WhatsApp, and possibly other acquisitions and limiting the companies’ future mergers and acquisitions would go a long way toward solving some of the problems with the company, and inject competition into a field where it’s been stifled for many years now. Legislation to facilitate a breakup also awaits House floor action and was approved by the House Judiciary Committee.

Shine a Light On the Problems

Some of the most detailed documents that have been released so far show research done by various teams at Facebook. And, despite being done by Facebook itself, many of that research’s conclusions are critical of Facebook’s own services.

For example: a large percentage of users report seeing content on Facebook that they consider disturbing or hateful—a situation that the researcher notes “needs to change.” Research also showed that some young female Instagram users report that the platform makes them feel bad about themselves.

But one of the problems with documents like these is that it’s impossible to know what we don’t know—we’re getting reports piecemeal, and have no idea what practical responses might have been offered or tested. Also, some of the research might not always mean what first glances would indicate, due to reasonable limitations or the ubiquity of the platform itself.

EFF has been critical of Facebook’s lack of transparency for a very long time. When it comes to content moderation, for example, the company’s transparency reports lack many of the basics: how many human moderators are there, and how many cover each language? How are moderators trained? The company’s community standards enforcement report includes rough estimates of how many pieces of content of which categories get removed, but does not tell us why or how these decisions are taken.

Transparency about decisions has increased in some ways, such as through the Facebook Oversight Board’s public decisions. But revelations from the whistleblower documents about the company’s “cross-check” program, which gives some “VIP” users a near-blanket ability to ignore the community standards, make it clear that the company has a long way to go.  Facebook should start by embracing the Santa Clara Principles on Transparency and Accountability in Content Moderation, which are a starting point for companies to properly indicate the ways that they moderate user speech.

But content moderation is just the start. Facebook is constantly talking out of both sides of its depressingly large mouth—most recently by announcing it would delete face recognition templates of users of Facebook, then backing away from this commitment in its future ventures. Given how two-faced the company has, frankly, always been, transparency is an important step towards ensuring we have real insight into the platform. The company must make it easier for researchers both inside and outside to engage in independent analysis.

Look Outside the U.S. 

Facebook must do more to respect its global user base. Facebook—the platform—is available in over 100 languages, but the company has only translated its community standards into around 50 of those (as of this writing). How can a company expect to enforce its moderation rules properly when they are written in languages, or dialects, that its users can’t read?

The company also must ensure that its employees, and in particular its content moderators, have cultural competence and local expertise. Otherwise it is literally impossible for them to appropriately moderate content. But first, it has to actually employ people with that expertise. It’s no wonder that the company has tended to play catch-up when crises arrive outside of America (where it also isn’t exactly ahead of the game).

And by the way: it’s profoundly disappointing that the Facebook Papers were released only to Western media outlets. We know that many of the documents contain information about how Facebook conducts business globally—and particularly how the company fails to put appropriate resources behind its policymaking and content moderation practices in different parts of the world. Providing these documents to trusted, international media publications that have the experience and expertise to offer nuanced, accurate analysis and perspective is a vital step in the process—after all, the majority of Facebook’s users worldwide live outside of the United States and Europe.

Don’t Give In To Easy Answers

Facebook is big, but it’s not the internet. More than a billion websites exist; tens of thousands of platforms allow users to connect with one another. Any solutions Congress proposes must remember this. Though Zuckerberg may “want every other company in our industry to make the investments and achieve the results that [Facebook has],” forcing everyone else to play by their rules won’t get us to a workable online future. We can’t fix the internet with legislation that pulls the ladder up behind Facebook, leaving everyone else below.

For example: legislation that forces sites to limit recommended content could have disastrous consequences, given how commonly sites make (often helpful) choices about the information we see when we browse, from restaurant recommendations to driving directions to search results. And forcing companies to rethink their algorithms, or offer “no algorithm” versions, may seem like fast fixes for a site like Facebook. But the devil is in the details, and in how those details get applied to the entire online ecosystem.

Facebook, for its part, seems interested in easy fixes as well. Rebranding as “Meta” amounts to a drunk driver switching cars. Gimmicks designed to attract younger users to combat its aging user base are a poor substitute for thinking about why those users refuse to use the platform in the first place.

Zuckerberg has gotten very wealthy while wringing his hands every year or two and saying, “Sorry. I’m sorry. I’m trying to fix it.” Facebook’s terrible, no good, very bad news cycle is happening at the same time that the company reported a $9 billion profit for the quarter.

Zuckerberg insists this is not the Facebook he wanted to create. But, he’s had nearly two decades of more-or-less absolute power to make the company into whatever he most desired, and this is where it’s ended up—despised, dangerous, and immensely profitable. Given that track record, it’s only reasonable that we handicap his suggestions during any serious consideration about how to get out of this place.

Nor should we expect policymakers to do much better unless and until they start listening to a wider array of voices. While the leaks have been directing the narrative about where the company is failing its users, there are plenty of other issues that aren’t grabbing headlines—like the fact that Facebook continues collecting data on deactivated accounts. A focused and thoughtful effort by Congress must include policy experts who have been studying the problems for years.

The Facebook leaks should be the starting point—not the end—of a sincere policy debate over concrete approaches that will make the internet—not just Facebook—better for everyone. 

Jason Kelley

EFF to Supreme Court: Warrantless 24-Hour Video Surveillance Outside Homes Violates Fourth Amendment

2 months 1 week ago
Police in Illinois Filmed Defendant’s Home Nonstop for 18 Months

Washington, D.C.—The Electronic Frontier Foundation (EFF) today urged the Supreme Court to review and reverse a lower court decision in United States v. Tuggle finding that police didn’t need a warrant to secretly record all activity in front of someone’s home 24 hours a day, for a year and a half.

The Fourth Amendment protects people against lengthy, intrusive, always-on video recording—especially when that video records all activity outside their homes, EFF said today in a  brief filed with the court. Our homes are our most private and protected spaces, and police should not film everything that happens at the home without prior court authorization—even if police cameras are positioned on public property. In this case, police used three cameras mounted on utility poles to secretly record Travis Tuggle’s life 24/7 for 18 months. Surveillance like this can reveal intimate details of our private lives, such as when we’re home, who visits and when, what packages we receive, who our children are, and more.

The Supreme Court recognized in the landmark 2018 case Carpenter v. United States that tracking people’s physical movements using cell phone records creates a chronicle of our lives, and collecting such data without a warrant violates the Fourth Amendment. Because of its capacity to create detailed records of what goes on at people’s homes, long-term, warrantless pole camera surveillance is likewise unconstitutional.

EFF, along with the Brennan Center for Justice, the Center for Democracy and Technology, the Electronic Privacy Information Center, and the National Association of Criminal Defense Lawyers, is urging the Supreme Court to take up Tuggle’s case. It would be the first time the court has considered the rules around warrantless pole camera surveillance.

“If left to stand, the lower court’s ruling would allow police to secretly video record anyone’s home, at any time,” said EFF Senior Staff Attorney Andrew Crocker.

EFF and its partners argue that today’s video cameras make it easy for the government to collect massive amounts of information about someone’s private life. They are small, inexpensive, easily hidden, and capable of recording in the dark and zooming in to record even small text from far away. Footage can be retained indefinitely and combined with other police tools like facial recognition and filtering to enhance police capabilities.

“We urge the Court to grant certiorari and rule that using pole cameras to collect information about the comings and goings around someone’s home implicates Fourth Amendment protections,” said EFF Surveillance Litigation Director Jennifer Lynch.

For the brief:
https://www.eff.org/document/eff-tuggle-v-us-cert-petition

 

 

Karen Gullo

Apple Has Listened And Will Retract Some Harmful Phone-Scanning

2 months 1 week ago

Since August, EFF and others have been telling Apple to cancel its new child safety plans. Apple is now changing its tune about one component of its plans: the Messages app will no longer send notifications to parent accounts.

That’s good news. As we’ve previously explained, this feature would have broken end-to-end encryption in Messages, harming the privacy and safety of its users. So we’re glad to see that Apple has listened to privacy and child safety advocates about how to respect the rights of youth. In addition, sample images shared by Apple show the text in the feature has changed from “sexually explicit” to “naked,” a change that LGBTQ+ rights advocates have asked for, as the phrase “sexually explicit” is often used as cover to prevent access to LGBTQ+ material. 

Now, Apple needs to take the next step, and stop its plans to scan photos uploaded to a user’s iCloud Photos library for child sexual abuse images (CSAM). Apple must draw the line at invading people’s private content for the purposes of law enforcement. As Namrata Maheshwari of Access Now pointed out at EFF’s Encryption and Child Safety event, “There are legislations already in place that will be exploited to make demands to use this technology for purposes other than CSAM.” Vladimir Cortés of Article 19 agreed, explaining that governments will “end up using these backdoors to … silence dissent and critical expression.” Apple should sidestep this dangerous and inevitable pressure, stand with its users, and cancel its photo scanning plans.

Apple: Pay attention to the real world consequences, and make the right choice to protect our privacy.

Erica Portnoy

Remembering Aaron Swartz: Aaron Swartz Day 2021

2 months 1 week ago

EFF invites everyone to participate this Saturday, Nov. 13, in this year's (virtual) Aaron Swartz Day and International Hackathon—an annual event celebrating the life and continuing legacy of activist, programmer, and entrepreneur Aaron Swartz.


Aaron Swartz was a digital rights champion who believed deeply in keeping the internet open. EFF was honored to call him an ally and friend. His life was cut short in 2013, after federal prosecutors charged him under the Computer Fraud and Abuse Act (CFAA) for systematically downloading academic journal articles from the online database JSTOR. With the threat of a long and unjust sentence before him, Aaron died by suicide at the age of 26.

He would have turned 35 this year, on November 8.

Aaron's death laid bare how federal prosecutors have abused the CFAA by wielding it to levy heavy penalties for any behavior they don't like that happens to involve a computer, rather than stopping malicious computer break-ins. EFF has continued to fight its misuses, including filing a brief in a recent Supreme Court case, Van Buren v. United States, in support of computer security researchers. In a victory for all internet users, the court recognized the danger of applying this law too broadly, and rejected the U.S. government's broad interpretation of it.

On Saturday, EFF Deputy Executive Director and General Counsel Kurt Opsahl will speak about that case and what the new ruling means for researchers like Aaron at 11:15 a.m. Following him at noon, EFF Special Advisor Cory Doctorow will present his keynote speech, "Move Fast and Fix Things: Aaron's Legacy, Competitive Compatibility and the CFAA."

In addition to speakers from EFF, the day will feature several talks from other friends and colleagues of Aaron, including Conor Schaefer of the Freedom of the Press Foundation for an annual update on one of Aaron's projects, SecureDrop; presentations from Michael “Mek” Karpeles, Brewster Kahle, and Tracey Jaquith of the Internet Archive; and a look at this year's hackathon project from Aaron Swartz Day co-founder Lisa Rein. 

There will also be several speakers from Bad Apple—a suite of easy-to-use tools designed to assist in the ongoing fight for police and sheriff accountability—to offer insights they've gleaned from working on the project and information about how to get involved. A full list of speakers and talk descriptions is available here.

Virtual talks begin at 10 a.m. PT and run until 5:15 p.m. After the programmed portion of the day wraps, all participants are invited for a more informal hangout on Chelsea Manning's Twitch stream.

If you can't make it on Saturday, you can still pay tribute to Aaron's legacy with volunteer work. This year, the organizers of Aaron Swartz Day are pointing volunteers to Bad Apple, to publicize and use the project, help review code, or volunteer some time to build up its databases. Visit www.aaronswartzday.org for more information.

Hayley Tsukayama

The Public Should Know Who Profits From Patent Troll Lawsuits

2 months 1 week ago

It’s often impossible to find out who owns a United States patent.

Even people who get sued over patents often can’t figure out who is demanding money from them. That’s even more true when the lawsuit comes from a patent troll wielding a vague software patent, something that is all too common.

That’s why we’re glad to see the issue of patent transparency come back to Congress, in the form of a recently introduced bill called the “Pride in Patent Ownership” Act, S. 2774, sponsored by Senators Patrick Leahy (D-VT) and Thom Tillis (R-NC). The Senate’s IP Subcommittee held a hearing on the bill last month.

Since 2013, EFF has supported efforts to make it clear to the public who owns patents. We’re pleased to see the issue come back to Congress, because it will once more bring attention to the lack of transparency in the patent system. We support the Pride in Patent Ownership Act as a modest step towards shining some light on the opaque operations of the U.S. patent system. However, because the bill lacks a strong enforcement mechanism, it falls short of being a bill that will truly shed the sunlight that the public needs.

Patents are a government-granted “right to exclude” competitors that typically last 20 years. During the life of the patent, a patent owner can file an infringement lawsuit against someone they believe is infringing the patent. The owner can request a court-ordered injunction, an ongoing royalty, and damages—including up to six years of retroactive damages.

Granted U.S. patents, along with many patent applications, can be looked up online, either at the U.S. Patent and Trademark Office (USPTO) website, or a third-party site that collects the public patent information, like Google Patents. While many parties choose to register their ownership at the USPTO’s Assignment Database—because it makes title and ownership clear—there’s no requirement that they do so.

Companies that have no business outside of making patent infringement threats against others often hide behind limited-liability companies (LLCs) from jurisdictions like Delaware that require little disclosure. That’s why, in the past, EFF has supported “real party in interest” language that would let the targets of bogus patent litigation, and the public at large, know who truly stands to benefit from problematic patent troll lawsuits.

The new bill, appropriately, requires that patent owners record their ownership at the USPTO. But the penalty it assesses for non-compliance is incredibly weak. Patent owners who don’t comply with the registration rules will only be barred from receiving triple infringement damages for “willfulness.” 

These special damages only come into play in especially egregious cases, such as infringement by a company that knows of a valid patent that it very likely infringes. Most importantly, patent trolls rarely have any interest whatsoever in getting triple damages for willfulness and generally would not be able to, since a defendant with a good invalidity defense is unlikely to be found to have infringed willfully, even if their defense fails. The majority of patent trolls simply ask for a settlement that’s less than the cost of litigation—perhaps $50,000 or $100,000, figures that are considered signs of a “nuisance value” settlement in the patent world. Even patent trolls that have the wherewithal to take a case to a jury trial are doing so because the prospect of regular damages makes the effort worthwhile. In other words, the threat of losing “triple damages” is no threat at all to patent trolls, and we’re concerned that even if this bill passes, they’ll simply choose not to comply with it. 

That’s too bad, because some patent trolls have gone to extreme lengths to sow confusion using shell companies. The notorious patent troll MPHJ Technologies created dozens of shell companies with names like AdzPro and GosNel, using them to send thousands of demand letters to small businesses around the country. Some of the most litigious patent trolls, such as “Shipping and Transit, LLC” (which acquired patents once owned by ArrivalStar) have changed their names and ownership structure more than once.

We’re glad these Senators are noticing that the patent system has become secretive and opaque, especially in the cases of patent troll lawsuits. We’d love to see this bill amended to have real consequences for noncompliance, and then see that stronger version passed. When it comes to 20-year government-granted monopolies, the public has a right to know who is benefiting.

Joe Mullin

Lawmakers Choose the Wrong Path, Again, With New Anti-Algorithm Bill

2 months 1 week ago

Facebook needs to be reined in. Lawmakers and everyday users are mad, having heard former Facebook employee Frances Haugen explain how Facebook valued growth and engagement over everything else, even health and safety. But Congress’s latest effort—to regulate algorithms that recommend content on social media platforms—misses the mark.

We need a strong privacy law. We need laws that will restore real competition and data interoperability. And then, we need to have a serious discussion about breaking Facebook up into its component parts. In other words, the federal government should go back and do the rigorous merger review that it should have done in the first place, before the Instagram and WhatsApp purchases.

It’s unfortunate that lawmakers are, by and large, declining to pursue these solutions. As they express shock and rage at Haugen’s testimony, we continue to see them promote legislation that will entrench the power of existing tech giants and do grievous harm to users’ right to free expression.

Personalized Recommendations Aren’t The Problem

The most recent effort is a bill called the “Justice Against Malicious Algorithms Act” (JAMA Act, H.R. 5596). This proposed law, sponsored by House Energy and Commerce Committee Chairman Frank Pallone (D-NJ) and three others, is yet another misguided attack on internet users in the name of attacking Big Tech.

The JAMA Act takes as its premise that because some large platforms are failing at content moderation, the government should mandate how services moderate users’ speech. This particular attempt at government speech control focuses on regulating the “personalized algorithms” that are used to promote or amplify user speech and connect users to each other.

The JAMA Act would remove Section 230 protections for internet services when users are served content with a “personalized algorithm” that suggests some piece of outside content to them, and then suffer a severe physical or emotional injury as a result. Essentially, the bill would make online services liable if they served a user with content from another user that the first user later claims harmed them.

One of the biggest problems is that the bill offers a vague definition for “personalized algorithm,” which it defines as any algorithm that uses “information specific to an individual.” This broad definition could go well beyond personally identifiable information and could include a user providing their location to the service, or indicating the type of content they’d like to see on a service.

Personalized recommendations happen a lot in the online world because they’re useful to users. Users who have seen a good article, watched an interesting video, or shown interest in a product or service are often interested in other, similar things.

The vague definition of “personalized algorithm” makes it almost impossible for a service to know what efforts it takes to organize and curate user-generated content on its site will fall under it. And that’s a big problem because the bill removes Section 230’s protections based on this vague definition.

Once Section 230 protections are gone, it will be much easier to sue internet services over the suggestions they make. The bill applies to any service with more than 5 million monthly users, which includes a vast swath of the net—most of it services that are much smaller than Facebook and don’t have anywhere near its level of content moderation resources.

Section 230 puts sharp limits on lawsuits when the legal claims are based on other peoples’ speech. JAMA would remove those limits in many situations. For instance, Section 230 means someone who maintains an online discussion forum usually can’t be sued over one of the discussions they hosted—but under JAMA, the forum owner could be sued if comments were presented according to a “personalized recommendation.”

A flood of low-quality lawsuits will be a strong incentive for online services to censor user speech and curtail basic tools that allow users to find topics and other users who share their interests and views.

For example, Section 230 generally prevents reviews sites from being sued over user reviews. But under JAMA, a site like Yelp could be sued for user speech, since the reviews are presented in a personalized way. A site like Etsy or eBay could be held liable for recommended products. Personalized news aggregators like Flipboard could be sued over news articles they didn’t write, but have served to users.

Punishing Recommendations Won’t Solve Anything

Given the new legal landscape JAMA would create, it’s easy to imagine web services and media companies dramatically paring back the number of recommendations they make if this proposal were to pass. But that won’t be the worst harm. In all likelihood, it will be the flood of meritless lawsuits that will be a huge burden for any web service that doesn’t have the money or clout of a Google or a Facebook. It’ll be very hard to create a small company or website that can afford to defend against the inevitable legal challenges over the content it recommends.

There are a few narrow exemptions in the bill, including one that would exempt businesses with 5 million or fewer unique monthly visitors. But that size limit wouldn’t exempt even many mid-size services. That limit would still make a site like BoardGameGeek.com liable for recommending you connect with the wrong game, or wrong gamer community; knitting site Ravelry could be in trouble for connecting you with the wrong crafter. Fitness sites like Strava, MapMyFitness, and RunKeeper are all well above the size limit, and could lose protection for recommending other users’ running and hiking routes.

This bill seems to be a direct response to ex-Facebooker Frances Haugen’s suggestion that the problem with her former employer is “the algorithm.” It’s true that content moderation at Facebook is terrible, a point we’ve made many times. But lawmakers jumped from there to drawing up a proposal that would punish a vast swath of services that simply use software to make personalized suggestions for other internet users. That’s bad policy, and it’s an attack on user speech rights.

The real answers to the problems the authors of this bill seek to fix—competition, privacy, interoperability, and strong antitrust action—are there for the taking. By introducing bills like the JAMA Act, lawmakers like Rep. Pallone are simply choosing not to use the tools that will actually get the job done. They’ve chosen the wrong path again and again—with last year’s EARN IT Act, this year’s PACT Act, and the SAFE Tech Act. These bills would create an explosion of new business for tort lawyers—including attorneys who file suit on behalf of online personalities who are in the business of spreading medical lies, hateful speech, and election disinformation.

Users should be able to get information tailored to them, and they should be able to choose the platform they want to deliver it. Creators should be able to have their content reach users who are interested in it, with the help of chosen platforms. All users should be able to connect with other users according to their interests, beliefs, and opinions—activities that are protected by the First Amendment. To have a truly vigorous digital public square, we need low barriers of entry for the platforms that can provide it. 

Joe Mullin

Ninth Circuit: Surveillance Company Not Immune from International Lawsuit

2 months 1 week ago

Vendors of surveillance technology can make big money on the global market, frequently by enabling authoritarian governments to spy on journalists and activists. That’s why, for years, EFF has called for more accountability against technology companies that facilitate human rights abuses by foreign governments. Now, the Ninth Circuit Court of Appeals has issued an important opinion that will help litigants hold these companies accountable. 

Almost a year after EFF attorneys filed a brief with the Ninth Circuit in support of WhatsApp’s lawsuit against the notorious Israeli spyware company NSO Group, the court issued a ruling that the company is not immune from the lawsuit alleging NSO helped its client governments target members of civil society, including Rwandan political dissidents and a journalist critical of Saudi Arabia.

The court rightfully determined that, because the NSO Group is a private company, it is not immune from the lawsuit even though it serves foreign government clients. The court addressed an open question in the case law. It has been clear that the Foreign Sovereign Immunities Act (FSIA) by its terms only applies to corporate entities owned by foreign governments. But there was an open question as to whether private corporations, whose clients are foreign governments, may invoke immunity based in common law, the rules described by court opinions rather than enacted by Congress. The Ninth Circuit said no. It held that Congress intended the statute to comprehensively address the foreign sovereign immunity of corporations, and thus the FSIA forecloses applications of immunity to corporations via common law.

Cybersurveillance companies like NSO Group shouldn’t be making a profit from spying on journalists, human rights activists, and others deemed political enemies of foreign states. These companies must be held responsible for their role not only in violating digital rights, but also in the very real-world consequences of that spying, including unlawful arrest, torture, and even extrajudicial killings. The Ninth Circuit’s ruling brings that accountability one step closer.

Sophia Cope

Certbot’s Instructions Generator now available in Farsi

2 months 1 week ago

EFF’s Certbot tool helps to automate TLS/SSL certificates for web servers—and we believe that should be a global right. Certbot is a free, open source software tool for automatically using Let’s Encrypt certificates, and is part of EFF’s larger effort to encrypt the entire Internet. Websites need to use HTTPS to secure the web. Along with HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship.
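
To make this concrete, here is a minimal sketch (not taken from Certbot’s documentation) of how a site operator might wrap a typical Certbot run in a small Python script. The domain, contact email, and choice of the nginx plugin are illustrative assumptions, and Certbot must already be installed on the server:

    import subprocess

    def obtain_certificate(domain: str, email: str) -> None:
        """Request and install a Let's Encrypt certificate using Certbot's nginx plugin."""
        subprocess.run(
            [
                "certbot", "--nginx",     # use the nginx installer plugin
                "-d", domain,             # domain to certify
                "-m", email,              # contact email for expiry notices
                "--agree-tos",            # accept the Let's Encrypt terms of service
                "--non-interactive",      # run without prompts, suitable for automation
            ],
            check=True,                   # raise an error if certbot fails
        )

    if __name__ == "__main__":
        # Placeholder values; replace with your own domain and email.
        obtain_certificate("example.com", "admin@example.com")

In practice, most administrators simply run the commands that the Instructions Generator produces for their web server and operating system; the script above only wraps that same kind of invocation for automation.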

A long-standing goal is to make Certbot more accessible to those who need it in languages other than English. Today, we have taken the first step by translating our Instructions Generator into Farsi.

Farsi has been a highly requested language among developers who were struggling to access this much-needed resource. With Certbot’s website now prepped for multilingual support (thanks to our amazing Engineering & Design and Tech-Ops teams), we plan to build in more support for translations come 2022.

EFF is grateful for the support of the National Democratic Institute in providing funding for this translation effort. NDI is a private, nonprofit, nongovernmental organization focused on supporting democracy and human rights around the world. Learn more by visiting https://NDI.org.

Alexis Hancock

European Parliament’s Plans Of A Digital Services Act Threaten Internet Freedoms

2 months 1 week ago

The EU's Digital Services Act is a chance to preserve what works and to fix what is broken. EFF and other civil society groups have advocated for new rules that protect fundamental rights online, while formulating a bold vision to address today's most pressing challenges. However, while the initial proposal by the EU Commission got several things right, the EU Parliament is toying with the idea of introducing a new filternet, made in Europe. Some politicians believe that any active platform should potentially be held liable for the communications of its users and they trust that algorithmic filters can do the trick to swiftly remove illegal content.

In an opinion piece published on "heise online" on 8 November 2021 under a CC BY 4.0 license, Julia Reda, head of the control © project at the civil rights NGO Gesellschaft für Freiheitsrechte (GFF) and a former Member of the EU Parliament, has analyzed the current proposals and explained what is at stake for internet users. We have translated this text below.

Edit Policy: Digital Services Act derailed in the European Parliament

It's a familiar pattern in net politics – the EU Commission makes a proposal that threatens fundamental digital rights. Civil society then mobilizes for protests and relies on the directly elected European Parliament to prevent the worst. However, in the case of the EU's most important legislative project for regulating online platforms – the Digital Services Act – the most dangerous proposals are now coming from the European Parliament itself, after the draft law of the EU Commission had turned out to be surprisingly friendly to fundamental rights.

Apparently, the European Parliament has learned nothing from the debacle surrounding Article 17 of the Copyright Directive. It threatens a dystopian set of rules that promotes the widespread use of error-prone upload filters, allows the entertainment industry to block content at the push of a button, and encourages disinformation by tabloid media on social media.

Vote on Digital Services Act postponed

The Committee on the Internal Market and Consumer Protection was supposed to vote this Monday on its position on the Digital Services Act in order to be able to start negotiations with the Commission and the Council. Instead, a hearing for Facebook whistleblower Frances Haugen was on yesterday’s agenda (8 November). The postponement of the vote is due to a disagreement among MEPs about the principles of platform regulation. Support for a complete departure from the tried-and-tested system of limited liability for Internet services is growing, directly threatening our freedom of communication on the Net.

The Digital Services Act is the mother of all platform laws. Unlike Article 17, the draft law is intended to regulate not only liability for copyright infringement on selected commercial platforms – but liability for all illegal activities of users on all types of hosting providers, from Facebook to non-commercial hobby discussion forums. Even if platforms block content on the basis of their general terms and conditions, the Digital Services Act is intended to define basic rules for this in order to strengthen users' rights against arbitrary decisions. In view of the balanced draft by the EU Commission, it is all the more astonishing what drastic restrictions on fundamental rights are now becoming acceptable in the European Parliament. The following three proposals are among the most dangerous.

Entertainment industry wants blocking within 30 minutes

Until now, platforms have not been liable for illegal uploads by their users, as long as they remove them expeditiously after becoming aware of an infringement. How quickly a deletion must be made depends on the individual case – for example, on whether an infringement can be clearly determined following an alert. Courts often deliberate for years on whether a particular statement constitutes an unlawful insult. In such borderline cases, no company can be expected to decide on blocking in the shortest possible time. For this reason, European legislators have so far refrained from imposing strict deletion deadlines.

This is set to change: The European Parliament’s rapporteur for the Digital Services Act, Denmark’s Christel Schaldemose, is demanding that platforms block illegal content within 24 hours if the content poses a threat to public order. It is so unclear exactly when an upload on social networks poses a threat to public order that platforms will have little choice but to block reported content on demand within 24 hours.

The European Parliament’s co-advisory Legal Affairs Committee, which has already adopted its position on the Digital Services Act, goes even further and wants to give the entertainment industry in particular a free pass to block uploads. Livestreams of sports or entertainment events are to be blocked within 30 minutes; sports associations had already lobbied for similar special regulations during the copyright reform. Such short deletion periods can only be met with automated filters – it is hardly possible for humans to check within such a short time whether a blocking request is justified at all.

More dangerous than the Network Enforcement Act

 Strict withdrawal deadlines for illegal content are already familiar from the German Network Enforcement Act (NetzDG). Nevertheless, the proposals under discussion at the EU level are more dangerous in many respects. First, the obligation to block reported content within 24 hours under the Network Enforcement Act is limited to obviously illegal content and to a few large platforms. The European Parliament negotiator's proposal does not include such restrictions.

Second, the consequences of violating the deletion deadlines differ significantly between the NetzDG and the Digital Services Act. The NetzDG provides for fines if a platform systematically violates the requirements of the law. In plain language, this means that exceeding the 24-hour deadline once does not automatically lead to a penalty.

In the planned Digital Services Act, on the other hand, the deletion periods will become a prerequisite for limiting the liability of platforms: For content that the platform has not blocked within 24 hours of being notified, the platform operator itself will be liable, as if it had committed the illegal act itself. In the case of copyright, for example, platforms would be threatened with horrendous claims for damages for every single piece of content affected. The incentives to simply block all reported content unseen are much greater here than under the NetzDG.

Elsewhere, the European Parliament rapporteur also wants to punish platforms for misconduct by making them directly liable for infringements of their users' rights – for example, if the platforms violate transparency obligations. As important as transparency is, this approach carries great dangers. Infringements by platforms can always occur, and an appropriate response is strict market monitoring and the imposition of fines, which may well be high.

But if any violation of the rules by platforms immediately threatens the loss of the liability safe harbor, the legislator creates an incentive for platforms to control the behavior of their users as closely as possible using artificial intelligence. Such systems have high error rates and also block swathes of completely innocuous, legal content – as was recently underlined once again by Facebook whistleblower Frances Haugen.

Article 17 does not yet go far enough for the Legal Affairs Committee

The Legal Affairs Committee envisages that organizations in the entertainment industry can be recognized as so-called "trusted flaggers", which should be able to independently obtain the immediate blocking of content on platforms and only have to account for which content was affected once a year. This regulation opens the door to abuse. Even platforms that are not yet forced to use upload filters under the copyright reform would then automatically implement the blocking requests of the "trusted flaggers," who in turn would almost certainly resort to error-prone filtering systems to track down alleged copyright infringements.

On top of that, the Legal Affairs Committee’s position on redefining the exclusion of liability is absurd. Hosting providers should only be able to benefit from the liability exclusion if they remain completely neutral towards the uploaded content, i.e. do not even intervene in the presentation of the content through search functions or recommendation algorithms. If this position prevails, only pure web hosters would be covered by the liability safe harbor. All modern platforms would be directly liable for infringements by their users – including Wikipedia, GitHub or Dropbox, which were exempted from Article 17 during the copyright reform after loud protests from the Internet community. The Legal Affairs Committee’s proposal would simply make it impossible to operate online platforms in the EU.

Disinformation for the ancillary copyright

Moreover, the ancillary copyright for press publishers, the prime example of patronage politics in copyright law, is once again playing a role in the debate about the Digital Services Act. The Legal Affairs Committee has been receptive to the press publishers’ demand for special treatment of press content on social media. The committee is demanding that major platforms like Facebook no longer be allowed to block the content of press publishers in the future – even if it contains obvious disinformation or violates the platforms’ terms and conditions.

The purpose of this regulation is clear – even if the Legal Affairs Committee claims that it serves to protect freedom of expression and media pluralism, this is yet another attempt to enforce the ancillary copyright. If the use of press articles by platforms is subject to a fee under the ancillary copyright, but at the same time the platforms are prohibited by the Digital Services Act from blocking press articles, then they have no choice but to display the articles and pay for them.

For publishers, this is a license to print money. In their crusade for the ancillary copyright, however, the publishers ensure that platforms can no longer counter disinformation as long as it appears in a press publication. That such a regulation is dangerous is revealed not only by the disclosures around fake news outlets used for propaganda by autocratic regimes. A glance at the tabloid press is enough to understand that publication of an article by a press publisher is no guarantee of quality, truth, or even compliance with basic interpersonal rules of conduct.

The European Parliament still has time to live up to its reputation as a guarantor of fundamental rights. But unless the negotiations on the Digital Services Act take another turn, this law threatens to exacerbate the problems with online platforms instead of helping to solve them.

Original text at "heise online": Edit Policy: Digital Services Act entgleist im Europaparlament

Christoph Schmon

Data Broker Veraset Gave Bulk Device-Level GPS Data to DC Government

2 months 1 week ago

In the first weeks of the COVID-19 pandemic, a location data broker called Veraset offered officials in Washington, DC full access to its proprietary database of “highly sensitive” device-level GPS data, collected from cell phones, for the entire DC metro area.

The officials accepted the offer, according to public records obtained by EFF. Over the next six months, Veraset provided the District with regular updates about the movement of hundreds of thousands of people—cell phones in hand or tucked away in backpacks or pockets—as they moved about their daily lives. The DC Office of the Chief Technology Officer (OCTO) and The Lab @ DC, a division of the Office of the City Administrator, accepted the data and uploaded it to the District’s “Data Lake,” a unified system for storing and sharing data across DC government organizations. The dataset was only authorized for uses related to COVID research, and there’s no evidence that it has been misused. But it's unclear to what extent the policies in place bind the use or sharing of the data within the DC government.

This is far from the only instance of data sharing between private location data brokers and government agencies. Reports at the beginning of the pandemic indicated that governments around the world began working with data brokers, and in the documents we obtained, Veraset said that it was already working with “a few different agencies.” But to our knowledge, these documents are the first to detail how Veraset shared raw, individually-identifiable GPS data with a government agency. They highlight the scope and sensitivity of highly-invasive location data widely available on the open market. They also demonstrate the risk of “COVID-washing,” in which data brokers might try to earn goodwill by giving away their hazardous product to public health officials during a health crisis.

When asked to comment on the relationship, Sam Quinney, director of The Lab @ DC, gave the following statement:

DC Government received an opportunity from Veraset to analyze anonymous mobility data to determine if the data could inform decisions affecting COVID-19 response, recovery, and reopening. After working with the data, we did not find suitable insights for our use cases and did not renew access to the data when our agreement expired on September 30, 2020. The dataset was acquired for no cost and is scheduled to be deleted on December 31, 2021.

Acquisition

Veraset is a data broker that sells raw location data on the open market. It’s a spinoff of the more-publicized data broker Safegraph, which was recently banned from the Google Play store. (We’ve previously reported on Safegraph’s own relationships with government.) While Safegraph courts publicity by publishing blog posts, hosting podcasts about the data business, and touting its own work on COVID, Veraset has received comparatively little attention. The company’s website pitches its data products to real estate companies, hedge funds, advertising agencies, and governments.

A March 30, 2020 email from Veraset to DC about “Movement” and “Visits” data

Between April and September 2020, Veraset provided regular updates of raw cell-phone location data to the District, which then uploaded it to the “data lake” for further processing in conjunction with the Department of Health. Veraset offered the District access to both its “Movement” and “Visits” datasets for the DC metro area. According to Veraset literature located on data broker clearinghouse Datarade, “Movement” contains “billions” of raw GPS signals. Each signal contains a device identifier, timestamp, latitude, longitude, and altitude.

“Visits” is a more processed version of the data in Movement, which attributes device identifiers to specific, named locations. According to Veraset:

Our proprietary machine learning model merges raw GPS signal with precise polygon places to understand which anonymous device visited which POI at which time. The result is reliable, accurate, device-level visits, refreshed, and delivered daily. In other words, “X anonymous device id visited Y Starbucks at Z date and time.”

Veraset says its datasets contain only “anonymous device IDs” and do not contain personally-identifying information. But the so-called “anonymous” ID in question is the advertising identifier, the persistent string of letters and numbers that both iOS and Android devices expose to app developers. These ad IDs are not really anonymous: an entire industry of “identity resolution” services link ad IDs to real identities at scale. Moreover, the nature of location data makes it easy to de-anonymize, because a comprehensive location trace reveals where a person lives, works, and spends time. Often, all it takes to link a location trace to a real person is an address book. Although Veraset does not sell real names or phone numbers, its data is far from anonymous.
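
To make the de-anonymization risk concrete, here is a hedged sketch using made-up records rather than Veraset’s actual schema: a few timestamped GPS pings keyed to a single ad ID are enough to estimate where the device’s owner sleeps, and matching that point against property or address records is often all it takes to attach a name.

```python
# A minimal sketch with invented records (not Veraset's actual schema) of why
# ad-ID-keyed GPS pings are easy to de-anonymize: overnight points cluster
# around wherever the device's owner sleeps.
from collections import Counter
from datetime import datetime

# Hypothetical "Movement"-style rows: (ad_id, ISO timestamp, latitude, longitude)
pings = [
    ("ad-id-3f9c", "2020-04-06T01:12:00", 38.9072, -77.0369),
    ("ad-id-3f9c", "2020-04-06T03:40:00", 38.9071, -77.0370),
    ("ad-id-3f9c", "2020-04-06T13:05:00", 38.8977, -77.0365),  # daytime, elsewhere
]

def likely_home(records, night_hours=range(0, 5), precision=3):
    """Return the most common overnight location for a device, rounded to
    roughly 100 meters. Matching that point against an address book or
    property records is often enough to name the 'anonymous' owner."""
    overnight = Counter()
    for _ad_id, ts, lat, lon in records:
        if datetime.fromisoformat(ts).hour in night_hours:
            overnight[(round(lat, precision), round(lon, precision))] += 1
    return overnight.most_common(1)[0][0] if overnight else None

print(likely_home(pings))  # -> (38.907, -77.037): the block where this device "sleeps"
```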

Information about exactly how many people are swept up in the dataset was redacted from the records we received. In our previous reporting on Illinois, we found that Safegraph had sold access to data about 5 million monthly active users—over 40% of the state’s population. Elsewhere, Veraset advertises that its dataset contains “10+% of the US population” sourced from “thousands of [phone] apps and SDKs [software development kits].” Neither Veraset nor Safegraph discloses which specific mobile apps it collects data from, making it nearly impossible for most users to know whether their data ends up on the companies’ servers.

Sharing and Retention

Washington, DC’s use of the data is governed in part by a “Data Access Agreement,” written by Veraset and signed by the District. This agreement forbids the District from using the data for anything other than specified research without Veraset’s consent. Veraset provided the data for the District to download in bulk via Amazon Web Services. Updates were delivered approximately once a day and contained data at a 24- to 72-hour delay from real time – in other words, data about where someone went on Monday could be shared with DC as early as Tuesday. District officials regularly transferred new data to DC’s “data lake,” where it remains today, more than a year after the relationship with Veraset ended. According to the District, the data is scheduled to be deleted at the end of 2021.

OCTO publishes a privacy policy applying to the data lake, which divides data into five distinct levels of sensitivity, numbered 0 through 4. The data acquired from Veraset, which the company described as “highly sensitive,” are categorized as “Level 3 - Confidential." This means the GPS data are encrypted at rest and in transit, and that they are inaccessible via FOIA. However, the policy is vague about how and with whom the data can be shared, stating only that other District agencies need to be “specifically authorized” to access it. In response to questions, DC officials said that the data “was never shared in its raw state with anyone other than the authorized Lab staff and OCTO Data Lake staff.” But it remains unclear whether District law enforcement agencies are allowed to access the data, and what—if any—legal approval they’d need to do so.

Excerpt from OCTO’s “District of Columbia Data Policy”

District documents state that a “universal data sharing agreement” and “privacy impact assessment” for the data lake are in progress, but have not been completed yet.

Veraset Controlled Public Relations

Another feature of the Data Access Agreement is laid out in section 6, “Publicity.” This paragraph gives Veraset control over how the District may, or must, disclose Veraset’s involvement in any publications derived from the data. With this clause, Veraset asserted the right to approve any language that the District might use to disclose the broker’s involvement with its research. Furthermore, it reserved the right to remain “anonymous” as the source of the District’s data if it chose.

Agreement by DC to allow Veraset to control its public statements about the data

This agreement gives Veraset some power to control how DC can, or can’t, discuss its data. This appears to be part of a pattern. Though it generally gets little publicity, Veraset has received favorable mentions from trusted institutions, like university researchers and departments of health. For example, the company’s data has been credited in Veraset-friendly language by dozens of academic publications over the past two years. The Data Access Agreement between Veraset and DC may shed light on how Veraset secures such endorsements.

Was it worth it?

Although DC officials were happy to accept the “free trial” of Veraset data, they declined to sign a new contract with the provider after six months. According to the records we obtained, officials told Veraset that they “didn’t find a use case” for the data, due in part to “the limitations of app-based data.” The District continued to rely on a more privacy-protective, though still controversial, dataset from Google spin-off Replica. But Lab @DC director Sam Quinney was effusive in thanking Veraset for providing “such massive, massive regularly updating” datasets.

Email from DC stating it “didn’t find a use case” for Veraset’s “massive, massive” datasets.

Early in the pandemic, we wrote that governments shouldn’t turn to location surveillance to fight the virus, because they have not shown it would actually advance public health and thus justify the harms to civil rights and civil liberties. App-based location data in particular suffers from inaccuracy and bias that make it ill-suited to many public health uses. After more than 20 months of experimenting with intrusive location data to combat COVID, like the “massive, massive” datasets provided by Veraset to DC, governments have still not shown that this extraordinary invasion of privacy is justified by real impacts on the spread of disease.

Lessons Learned

Washington, DC is far from the only government which accepted this kind of deal from a data broker, and there’s no evidence that officials used it for anything other than COVID research. But that doesn’t make it okay. Veraset’s data is harvested from users without meaningful consent, and is monetized by giving corporations and businesses detailed information about the day-to-day movements of millions of people. At a minimum, DC should have performed more vetting of Veraset’s sources, implemented stronger privacy protections, and justified the acquisition of such sensitive data.

Governments must think twice before acquiring sensitive personal data from private companies that violate our privacy for profit. Even when money doesn’t change hands, deals between governments and data brokers erode our privacy rights and can provide cover for a shadowy, exploitative industry. Furthermore, government agencies that acquire sensitive personal data for one purpose, like public health, must have clear and specific policies that prevent excessive retention, sharing, or use. It is especially important that police and immigration enforcement officials not have access to public health data. These policies should be completed, and made public, before the agencies begin uploading sensitive data to their servers.

More importantly, this kind of data should not be for sale in the first place. Location brokers generally harvest phone app location data from users without meaningful consent, and Veraset does not disclose with specificity where it gets its data, making it nearly impossible for users to know whether they are captured in the company’s dragnet. Users derive no benefit from brokers’ exploitation of their private lives.

Thanks to the California Consumer Privacy Act, people in California can opt out of Veraset’s sale of their data using this form. But it shouldn’t be this difficult – users who don’t know Veraset exists should still be protected from its spying.

We need laws that rein in the rampant collection and sale of intimate data. That means effective consumer privacy legislation that requires companies to get real consent from users before collecting their data at all, and prevents them from harvesting data for one purpose but then selling or monetizing it in other ways. Finally, we need to prevent governments from acquiring data that should be protected by the Fourth Amendment on the open market.

Bennett Cyphers

Brazil’s Fake News Bill: Perils and Flaws of Expanding Existent Data Retention Obligations

2 months 1 week ago

This post is the second of two analyzing the risks of approving dangerous and disproportionate surveillance obligations in the Brazilian Fake News bill. You can read our first article here.

Following a series of public hearings in Brazil's Chamber of Deputies after the Senate's approval of the so-called Fake News bill (draft bill 2630), Congressman Orlando Silva released a revised text of the proposal. As we said in our first post, the new text contains both good and bad news for user privacy compared to previous versions. One piece of bad news is the expansion of existing data retention mandates.

Brazil’s Civil Rights Framework for the Internet (known as “Marco Civil”, approved in 2014) already stipulates the retention of “connection logs” and “access to application logs” for the internet service providers (ISPs) and applications set by the law. Internet applications broadly refer to websites and online platforms. According to Marco Civil, application providers constituted as legal entities, with commercial purposes, must collect and retain the date and time the application is used, from a certain IP address, for a period of six months. Article 37 of the bill seeks to indirectly expand the definition of “access to application logs” to compel application providers to retain “logs that unequivocally individualize the user of an IP address.”

Since the debates on the approval and further regulation of Marco Civil, law enforcement has pushed for including information about users' networking ports in the law’s data retention obligation. They have sought to influence legislation and courts' understanding of the existing retention mandate, since Marco Civil doesn't mention the storage of users' ports. Such a push takes into account the current use of technical solutions (particularly those based on Network Address Translation (NAT)) that enable multiple users to simultaneously share a single public IP address. Public IPv4 addresses are in short supply, and to help mitigate this, NAT lets an ISP place several private IP addresses behind one public IP by allocating a distinct range of source ports to each private address. As a result, linking a connection back to a single subscriber requires correlating the public IP address and source port seen by the destination server with the internet service provider's logs.
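
As a rough illustration of why the port matters under NAT, consider the following sketch; the addresses and port ranges are invented documentation values, not any real ISP's configuration.

```python
# A simplified illustration, with invented documentation addresses, of carrier-
# grade NAT: many subscribers share one public IPv4 address and are told apart
# only by the source-port range the ISP assigns to each of them.
NAT_TABLE = {
    # (public IP, assigned port range): subscriber's private IP
    ("203.0.113.7", range(1024, 20000)): "10.0.0.11",
    ("203.0.113.7", range(20000, 40000)): "10.0.0.12",
    ("203.0.113.7", range(40000, 60000)): "10.0.0.13",
}

def subscribers_behind(public_ip, source_port=None):
    """Without the source port, a public IP alone matches every subscriber
    sharing it; only the ISP's NAT logs can narrow the match further."""
    return [
        private
        for (ip, ports), private in NAT_TABLE.items()
        if ip == public_ip and (source_port is None or source_port in ports)
    ]

print(subscribers_behind("203.0.113.7"))         # all three subscribers
print(subscribers_behind("203.0.113.7", 23456))  # ['10.0.0.12']
```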

Despite controversies in the courts and well-founded criticism that judicial interpretation should not expand data retention obligations, recent rulings from the Superior Court of Justice (STJ) have upheld such a troublesome extension. Article 37 of the bill seeks to override this controversy with language that goes even beyond the problematic retention of networking ports.

The provision forces internet applications to unequivocally individualize the user of an IP address, apparently based on the flawed aspiration of linking a given IP address to a specific user without any margin of error. This language invites wide-open interpretations by law enforcement and courts that could severely expand the current data retention mandates, or even force the use of persistent identifiers tied to our every single move online. There are so many variables in internet routing that it is simply not possible for an application to say unequivocally who is behind a connection.

IP addresses were designed to uniquely identify electronic destinations on the internet, not specific users. While it is sometimes reasonable to assume that a single person is behind an IP address – for example, the address assigned to a mobile phone – often a single address is given to an entire home, and a single device like a tablet is commonly used by more than one person. Mobile networks bring additional issues that make IP addresses fluctuate, and a device switches IP addresses when it connects to different Wi-Fi networks. Moreover, for reasons of routing efficiency and the scarcity of IPv4 addresses, IP addresses are not statically assigned to specific devices.

Companies and individuals operating open wireless networks out of their homes, cafés, public libraries, businesses, and communities that various people can use – or shared environments where several people use the same devices – are examples of how this can get tricky. Other services, such as Virtual Private Networks (VPNs) and proxy servers, also make IP addresses unreliable indicators of the identity of a particular person. When connected to a VPN, the IP address visible to the website or app visited is the public IP of the VPN provider, not the one relating to the user's device.

Sometimes there might even be errors in the records that telecom companies hand to law enforcement authorities. Lastly, IP addresses can be maliciously forged to conceal the origin of the sender or to impersonate another computer system. This technique, called IP spoofing, is used in DDoS attacks and could be further exploited by attackers seeking to maliciously frame other users if the aspiration of unequivocally linking an IP address to a user is turned into law and reinforced by courts.

Although an IP address, especially combined with the date and time the application was used (and when the connection started and finished), may sometimes be enough to pinpoint the person using a device, that identification still depends on additional checks. The provision, however, seems intended to skip this step, making the internet application responsible for checking and unequivocally asserting the individual user of an IP address. Other web identifiers, like cookies, can be deleted by the user and are likewise tied to devices that may be used by multiple people. Hardware identifiers, like the IMEI number, are only visible to applications with special permissions, precisely for privacy and data protection reasons.

IP addresses (and the TCP/IP protocol) are a building block of communications on the internet, underlying the web requests we make and the information we access. They were designed to individualize destinations so that communications can happen and services can reach each other, not to uniquely individualize a user. Besides, advocating for the massive retention of IP addresses turned into unequivocal identifiers of every internet user (the vast majority of whom are law-abiding individuals) runs afoul of international standards of privacy and data protection.

In the landmark Digital Rights Ireland decision, the EU Court of Justice condemned the blanket retention of communications metadata as a violation of privacy and data protection rights under the EU Charter, which was later confirmed by the Tele2/Watson ruling. IACHR and UN human rights standards are clear in rejecting indiscriminate data retention mandates affecting all internet users. Mass data retention also poses security risks.

The debates over Marco Civil's data retention provisions were heated and, although the mandates were ultimately approved, legislators chose not to force internet applications to store information that readily individualizes or identifies users. This choice was correct, and it does not prevent the investigation of illegal acts based on the information available.

Article 37 of the bill is not a reasonable ask. Mass retention of communication logs that unequivocally individualize the user of an IP address would lead to severely disproportionate surveillance obligations as well as security risks. As with the traceability rule, Brazilian legislators should drop Article 37 in favor of the fundamental rights to privacy, free expression, and data protection.

Veridiana Alimonti