Tech Rights Are Workers' Rights: Doordash Edition

1 month 2 weeks ago

Doordash workers are embroiled in a bitter labor dispute with the company: at issue, the tips that “Dashers” depend on to make the difference between a living wage and the poorhouse. Doordash has a long history of abusing its workers’ tips, including a particularly ugly case brought by the Washington, D.C. Attorney General, settled only when Doordash paid back millions in stolen tips.

Doordash maintains that its workers are “independent contractors” who can pick and choose among the delivery jobs available from moment to moment, based on the expected compensation. Given the outsized role that tips play in Dashers’ compensation, you’d think that the company would tell the workers the size of the tip that its customers had offered on each job.

But that’s not the case. Though customers input their tips when they place their orders, the amount is hidden from drivers until they complete the job - turning each dispatch into a casino game where the dealer knows the payout in advance but the worker only finds out whether they’ve made or lost money on a delivery after it’s done.

Dashers aren’t stupid - nor are they technologically unsophisticated. Dashers made heavy use of Para, an app that inspected Doordash’s dispatch orders and let drivers preview the tips on offer before they took the job. Para allowed Dashers to act as truly independent agents who were entitled to the same information as the giant corporation that relied on their labor.

But what’s good for Dashers isn’t good for Doordash: the company wants to fulfill orders, even if doing so means that a driver spends more on gas than they make in commissions. Hiding tip amounts from drivers allowed the company to keep drivers in the dark about which runs they should make and which ones they should decline.

That’s why Doordash changed its data model to prevent Para from showing drivers tips. And rather than come clean about its goal of keeping drivers from knowing how much they would be paid, it made deceptive “privacy and data security” claims. Among them: that Para violated its terms of service by “scraping.”

Scraping is an old and honorable tool in the technologist’s toolkit, a cornerstone of Competitive Compatibility (AKA comcom, or adversarial interoperability). It allows developers to make new or improved technologies that connect to existing ones, with or without permission from the company that made the old system.  Comcom lets users and toolsmiths collaborate to seize the means of computation, resisting disciplinary technologies like the bossware that is gradually imposing Doordash-style technological controls on all kinds of workers. It’s possible to do bad things with scraping - to commit privacy violations and worse - but there’s nothing intrinsically sinister about scraping.
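Scraping in this spirit can be as simple as parsing the markup a service already sends to your device. Here is a minimal sketch using only Python’s standard-library html.parser; the page structure and class names are invented for illustration, not Doordash’s or Para’s actual formats:

```python
from html.parser import HTMLParser

# Hypothetical HTML a dispatch page might render; the tag and class
# names here are invented for illustration.
PAGE = """
<ul>
  <li class="job"><span class="pay">$6.50</span> 2.1 mi</li>
  <li class="job"><span class="pay">$2.25</span> 7.8 mi</li>
</ul>
"""

class PayScraper(HTMLParser):
    """Collects the text inside every <span class="pay"> element."""
    def __init__(self):
        super().__init__()
        self.in_pay = False
        self.payouts = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "pay") in attrs:
            self.in_pay = True

    def handle_data(self, data):
        if self.in_pay:
            self.payouts.append(data.strip())
            self.in_pay = False

scraper = PayScraper()
scraper.feed(PAGE)
print(scraper.payouts)  # ['$6.50', '$2.25']
```

Nothing here is exotic: the scraper reads exactly the data the service already chose to send, and simply surfaces it in a form the user controls.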

Doordash loves comcom, when they’re the ones deploying it. The company routinely creates listings for restaurants that have never agreed to use it for delivery services, using “search engine optimization” and anticompetitive, loss-making pricing to interpose itself between struggling restaurateurs and their diners. 

Dashers also have a long history of subverting the technological controls that make their working lives so hard. But despite Doordash’s celebration of “disruption,” it has zero tolerance for apps that turn the tables on technological control. So Doordash stopped including tip amounts in its dispatch data, eliminating Para’s ability to show that crucial information to Dashers.

Dashers are not giving up. When their technology stopped working, they switched to coordinated labor action. At the top of their demands: the right to know what they’re going to be paid before they do a job - a perfectly reasonable thing to demand. The fact that Doordash intentionally designed an app to hide that information, and then cut off an app that tried to provide it, is ugly. Doordash should just tell Dashers the truth.

And if they won’t, Dashers should be allowed to continue to develop and run programs that extract that information from the Doordash app, even if that involves decrypting a message or doing something else that the company doesn’t like. Reverse-engineering a program and modifying it can be fully compatible with data security and privacy.

Don’t get us wrong, the digital world needs strong legal privacy protections, which is why we support a strong federal privacy law with a private right of action. That way, your privacy would be protected whether or not a company decided to take it seriously.  But it’s hard to see how giving Dashers good information about what they will be paid is a privacy problem. And we all need to be on alert for companies that use “privacy-washing” to defend business decisions that hurt workers.  

Putting Doordash in charge of the information Dashers need would be a bad idea even if the company had a great privacy track record (the company does not have a great privacy track record!). It’s just too easy to use privacy as an all-purpose excuse for whatever restrictions the company wants to put on its technology.

Doordash didn’t invent this kind of spin. It is following the example set by a parade of large companies that break interoperability to improve their own bottom line at others’ expense, whether that’s HP claiming that it blocks third-party ink to protect you from blurry printouts, or car makers saying that they only want to shut down independent mechanics to defend you from murdering stalkers, or Facebook saying it only threatened accountability journalists as part of its mission to defend our privacy.

In a world where we use devices and networks to do everything from working to learning to being in community, the right to decide how those devices and networks work is fundamental. As the Dashers have shown us, when an app is your boss, you need a better app.

Cory Doctorow

Why Companies Keep Folding to Copyright Pressure, Even If They Shouldn’t

1 month 2 weeks ago

The giant record labels, their association, and their lobbyists have succeeded in getting a number of members of the U.S. House of Representatives to pressure Twitter to pay money it does not owe, to labels who have no claim to it, against the interests of its users. This is a playbook we’ve seen before, and it seems to work almost every time. For once, let us hope a company sees this extortion attempt for what it is and stands up to it.

Here is the deal. Online platforms that host user content are not liable for copyright infringement done by those users so long as they fulfill the obligations laid out in the Digital Millennium Copyright Act (DMCA). One of those obligations is to give rightsholders an unprecedented ability to have speech removed from the internet, on demand, with a simple notice sent to a platform identifying the offending content. Another is that companies must have some policy to terminate the accounts of “repeat infringers.”

Not content with being able to remove content without a court order, the giant companies that hold the most profitable rights want platforms to do more than the law requires. They do not care that their demands result in other people’s speech being suppressed. Mostly, they want two things: automated filters, and to be paid. In fact, the letter sent to Twitter by those members of Congress asks Twitter to add “content protection technology”—for free—and heavily implies that the just course is for Twitter to enter into expensive licensing agreements with the labels.

Make no mistake, artists deserve to be paid for their work. However, the complaints that the RIAA and record labels make about platforms are less about what individual artists make, and more about labels’ control. In 2020, according to the RIAA, revenues rose almost 10% to $12.2 billion in the United States. And Twitter, whatever else it is, is not where people go for music.

But the reason the RIAA, the labels, and their lobbyists have gone with this tactic is that, up until now, it has worked. Google set the worst precedent possible in this regard. Trying to avoid a fight with major rightsholders, Google voluntarily created Content ID. Content ID is an automated filter that scans uploads to see if any part—even just a few seconds—of the upload matches the copyrighted material in its database. A match can result in either a user’s video being blocked, or monetized for the claiming rightsholder. Ninety percent of Content ID partners choose to automatically monetize a match—that is, claim the advertising revenue on a creator’s video for themselves—and 95 percent of Content ID matches made to music are monetized in some form. That gives small, independent YouTube creators only a few options for how to make a living. Creators can dispute matches and hope to win, sacrificing revenue while they do and risking the loss of their channel. Fewer than one percent of Content ID matches are disputed. Or, they can painstakingly edit and re-edit videos, or avoid including almost any music whatsoever and hope that Content ID doesn’t register a match on static or a cat’s purr.
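The matching step can be illustrated with a deliberately simplified sketch. A real system like Content ID computes perceptual audio and video fingerprints; the version below just hashes short windows of a sample stream, but it shows the same core idea - a brief overlap with anything in the reference database is enough to register a match. All names and data here are invented:

```python
import hashlib

def fingerprints(samples, window=4):
    """Hash every fixed-size window of a sample stream (a stand-in for
    the perceptual fingerprints a real system would compute)."""
    return {
        hashlib.sha256(bytes(samples[i:i + window])).hexdigest()
        for i in range(len(samples) - window + 1)
    }

# A rightsholder's reference track, and a user upload that reuses a
# short stretch of it (all data here is invented for illustration).
reference = list(range(40))
upload = [99, 98, 97] + list(range(8, 16)) + [1, 2, 3]

matches = fingerprints(upload) & fingerprints(reference)
print(bool(matches))  # True: a brief overlap is enough to trigger a claim
```

Note what the sketch cannot do: it has no concept of fair use, context, or commentary. It can only report that bytes overlap - which is exactly why a match is not evidence of infringement.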

While any creator has the right to use copyrighted material without paying rightsholders in circumstances where fair use applies, Content ID routinely diverts money away from creators like these to rightsholders in the name of policing infringement. Fair use is an exercise of your First Amendment rights, but Content ID forces you to pay for that right. WatchMojo, one of the largest YouTube channels, estimated that over six years, roughly two billion dollars in ads have gone to rightsholders instead of creators. YouTube does not shy away from this effect. In its 2018 report “How Google Fights Piracy,” the company declares that “the size and efficiency of Content ID are unparalleled in the industry, offering an efficient way to earn revenue from the unanticipated, creative ways that fans reuse songs and videos.” In other words, Content ID allows rightsholders to take money away from creators who are under no obligation to obtain a license for their lawful fair uses.

That doesn’t even include the times these filters just get things completely wrong. Just the other week, a programmer live-streamed his typing and a claim was made for the sound of “typing on a modern keyboard.” A recording of static got five separate notices placed on it by the automated filter. These things don’t work.

YouTube also encourages people to simply use only the things that they have a license for or are in a library of free resources. That ignores that there is a fair use right to use copyrighted material in certain cases, and lets companies argue that no one has to use their work without paying since these free options exist.

So, when the labels make a lot of disingenuous noise about how inadequate the DMCA is and how platforms need to do more, they have YouTube to point to as a “voluntary” system that should be replicated. And companies will fold, especially if they end up being inundated with DMCA takedowns—some bogus—and if they think the other option is being required to do it by law, the implicit threat of a letter like the one Twitter received.

This tactic works. Twitch found itself buried under DMCA takedowns last year, handled that poorly, and then found itself being, like Twitter, blamed for taking money out of the hands of musicians by the RIAA. Twitch now makes removing music and claimed bits of videos easier, has adopted a similar repeat infringer policy to YouTube’s, and makes deleting clips easier for users. Snap, owner of Snapchat, went the route of getting a license, paying labels to make music available to its users.

Creating a norm of licensed or free music, monetization, or automated filters functionally eviscerates fair use. Even if people have the right to use something, they won’t be able to. On YouTube, reviewers don’t use the clips of the music or movies that are the best example of what they’re talking about—they pick whatever will satisfy the filter. That is not the model we want as a baseline. The baseline should be more protective of legal speech, not less.

Unfortunately, when the tech companies are facing off against the largest rightsholders, it's users who most often lose. Twitter is only the latest target; we hope it becomes the one to stand up for its users.

Katharine Trendacosta

This Captcha Patent Is An All-American Nightmare

1 month 2 weeks ago

A newly formed patent troll is looking for big money from small business websites, just for using free, off-the-shelf login verification tools. 

Defenders of the American Dream, LLC (DAD) is sending out its demand letters to websites that use Google’s reCAPTCHA system, accusing them of infringing U.S. Patent No. 8,621,578. Google’s reCAPTCHA is just one form of a Captcha test, which describes a wide array of test systems that websites use to verify human users and keep out bots.

DAD’s letter tells targeted companies that DAD will take an $8,500 payment, but only if “licensing terms are accepted immediately.” The threat escalates from there. If anyone dares to respond that DAD’s patent might not be infringed, or might be invalid, fees will rise to at least $17,000. If DAD’s patent gets subject to a legal challenge, DAD says they’ll increase their demand to at least $70,000. In the footnotes, DAD advises its targets that “not-for-profit entities are eligible for a discount.”

The DAD demand letters we have reviewed are nearly identical, with the same fee structure. They mirror the one filed by the company itself (with the fee structure redacted) as part of their trademark application. This demand letter campaign is a perfect example of how the U.S. patent system fails to advance software innovation. Instead, our system enables extortionate behavior like DAD’s exploding fee structure. 

DAD Didn't Invent Image Captcha

DAD claims it invented a novel and patentable image-based Captcha system. But there’s ample evidence of image-based Captcha tests that predate DAD’s 2008 patent application. 

The term “Captcha” was coined by a group of researchers at Carnegie Mellon University in 2000. It’s an acronym, indicating a “Completely Automated Public Turing test to tell Computers and Humans Apart.” Essentially, it blocks automated tools like bots from getting into websites. Such tests have been important since the earliest days of the Internet. 

Early Captcha tests used squiggly lines or wavy text. The same group of CMU researchers who coined “Captcha” went on to work on an image-selection version they called ESP-PIX, which they had published and made public by 2005. 

By 2007, Microsoft had developed its own image-categorization Captcha, which asked users to identify photos of cats and dogs. At the same time, PayPal was working on new captchas that “might resemble simple image puzzles.” This was no secret—researchers from both companies spoke to the New York Times about their research, and Microsoft filed its own patent application more than a year before DAD’s.

There’s also evidence of earlier image-based Captcha tests in the patent record, like this early 2008 application from a company called Binary Monkeys. Here's an image from the Binary Monkeys Patent: 

And here's an image from DAD's patent: 

So how did DAD end up with this patent? During patent prosecution, DAD’s predecessor argued that they had a novel invention because the Binary Monkeys application asks users to select “all images” associated with the task, as opposed to selecting “one image,” as in DAD’s test. The patent examiner suggested adding yet another limitation: that the user still be granted access to the website if they got one “known” image and one “suspected” image. 

Unfortunately, adding trivial tweaks to existing technology, such as small details about the criteria for passing a Captcha test, can and often does result in a patent being granted. This was especially true back in 2008, before patent examiners were required to apply the guidance from the Supreme Court’s 2014 Alice v. CLS Bank decision. That’s why we have told the patent office to vigorously uphold Supreme Court guidelines, and have defended the Alice precedent in Congress.

Where did DAD come from? 

DAD’s patent was originally filed by a Portland startup called Vidoop. In 2010, Vidoop and its patent applications were purchased by a San Diego investor who re-branded it as Confident Technologies. Confident Tech offered a “clickable, image-based CAPTCHA,” but ultimately didn’t make it as a business. In 2017 and 2018, Confident Tech sued Best Buy, Fandango Media, Live Nation, and AXS Group, claiming that the companies infringed its patent by using reCAPTCHA. Those cases all settled.

In 2020, Trevor Coddington, an attorney who worked on Confident Tech’s patent applications, created Defenders of the American Dream LLC. He transferred the patents to this new entity and started sending out demand letters. 

They haven’t all gone to large companies, either. At least one of DAD’s targets has been a one-person online publishing company. Coddington’s letter complains about how Confident Tech failed in the marketplace and suggests that because of this, reCAPTCHA users should pay—well, him. The letter states: 

[O]nce Google introduced its image-based reCAPTCHA for free, no less, [Confident Technologies] was unable to maintain a financially viable business… Google’s efficient infringement forced CTI to abandon operations and any return on the millions of dollars of capital investment used to develop its patented solutions. Meanwhile, your company obtained and utilized the patented technology for free.

Creating new and better Captcha software is an area of ongoing research and innovation. While the lawyers and investors behind DAD have turned to patent threats to make money, other developers are actively innovating and competing with reCAPTCHA. There are competing image-based Captchas like hCaptcha and visualCaptcha, as well as long lists of Captcha alternatives and companies that are trying to make Captchas obsolete.

These individuals and companies are all inventive, but they’re not relying on patent threats to make a buck. They’ve actually written code and shared it online. Unfortunately, because of their real contributions, they’re more likely to end up the victims of aggressive patent-holders like DAD. 

We’ll never patent our way to a better Captcha. Looking at the history of the DAD patent—which shares no code at all—makes it clear why the patent system is such a bad fit for software. 


Joe Mullin

Apple's Plan to "Think Different" About Encryption Opens a Backdoor to Your Private Life

1 month 2 weeks ago

Apple has announced impending changes to its operating systems that include new “protections for children” features in iCloud and iMessage. If you’ve spent any time following the Crypto Wars, you know what this means: Apple is planning to build a backdoor into its data storage system and its messaging system.

Child exploitation is a serious problem, and Apple isn't the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.



To say that we are disappointed by Apple’s plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple’s compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company’s leadership in privacy and security.

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

When Apple releases these “client-side scanning” functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and possibly be unable to safely use what was, until this development, one of the preeminent encrypted messengers.

Apple Is Opening the Door to Broader Abuses

We’ve said it before, and we’ll say it again now: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of “misinformation” in 24 hours may apply to messaging services. And many other countries—often those with authoritarian governments—have passed similar laws. Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), troublingly lacks external oversight, despite calls from civil society. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire.

Image Scanning on iCloud Photos: A Decrease in Privacy

Apple’s plan for scanning photos that get uploaded into iCloud Photos is similar in some ways to Microsoft’s PhotoDNA. The main product difference is that Apple’s scanning will happen on-device. The (unauditable) database of processed CSAM images will be distributed in the operating system (OS), the processed images will be transformed so that users cannot see what they are, and matching will be done on those transformed images using private set intersection, so that the device will not know whether a match has been found. This means that when the features are rolled out, a version of the NCMEC CSAM database will be uploaded onto every single iPhone. The result of the matching will be sent up to Apple, but Apple can only tell that matches were found once a sufficient number of photos have crossed a preset threshold.
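Stripped of the cryptography, the threshold design can be sketched roughly as follows. This is our simplification, not Apple’s protocol: the real system uses a perceptual hash plus cryptographic blinding and private set intersection, so that even the device doesn’t learn individual match results, whereas this sketch just counts exact-hash matches locally and reveals nothing below the threshold. All data and names below are invented:

```python
import hashlib

THRESHOLD = 3  # matches needed before anything is reported

def blind(image_bytes):
    """Stand-in for the one-way transform applied to database entries
    (the real system uses a perceptual hash plus cryptographic blinding)."""
    return hashlib.sha256(image_bytes).hexdigest()

# On-device copy of the (opaque) database: hashes only, not images.
database = {blind(b"known-image-%d" % i) for i in range(5)}

def scan(photos):
    """Count matches locally; report a result only past the threshold."""
    hits = sum(1 for p in photos if blind(p) in database)
    return hits if hits >= THRESHOLD else None  # below threshold: silence

print(scan([b"vacation", b"known-image-1", b"dog"]))  # None (1 hit < 3)
print(scan([b"known-image-0", b"known-image-1",
            b"known-image-2", b"cat"]))               # 3
```

The crucial point the sketch makes visible: every photo is hashed and compared. The threshold only limits what gets reported, not what gets scanned.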

Once a certain number of photos are detected, the photos in question will be sent to human reviewers within Apple, who determine that the photos are in fact part of the CSAM database. If confirmed by the human reviewer, those photos will be sent to NCMEC, and the user’s account disabled. Again, the bottom line here is that whatever privacy and security aspects are in the technical details, all photos uploaded to iCloud will be scanned.

Make no mistake: this is a decrease in privacy for all iCloud Photos users, not an improvement.

Currently, although Apple holds the keys to view Photos stored in iCloud Photos, it does not scan these images. Civil liberties organizations have asked the company to remove its ability to do so. But Apple is choosing the opposite approach and giving itself more knowledge of users’ content.

Machine Learning and Parental Notifications in iMessage: A Shift Away From Strong Encryption

Apple’s second main new feature is two kinds of notifications based on scanning photos sent or received by iMessage. To implement these notifications, Apple will be rolling out an on-device machine learning classifier designed to detect “sexually explicit images.” According to Apple, these features will be limited (at launch) to U.S. users under 18 who have been enrolled in a Family Account. In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the “parent” will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.

Similarly, if the under-13 child receives an image that iMessage deems to be “sexually explicit”, before being allowed to view the photo, a notification will pop up that tells the under-13 child that their parent will be notified that they are receiving a sexually explicit image. Again, if the under-13 user accepts the image, the parent is notified and the image is saved to the phone. Users between 13 and 17 years old will similarly receive a warning notification, but a notification about this action will not be sent to their parent’s device.
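The notification logic described in the last two paragraphs can be summarized in a short sketch. The types, field names, and function are ours, not Apple’s; the age cutoffs follow the behavior described above:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age: int
    feature_enabled: bool

def handle_flagged_image(account: Account):
    """Return the actions taken when the on-device classifier flags an
    image sent to or from a child account (names are ours, not Apple's)."""
    if not account.feature_enabled or account.age >= 18:
        return []
    actions = ["warn_child"]
    if account.age < 13:
        # If the child proceeds anyway, the parent is notified and the
        # image is saved to the device for later parental review.
        actions += ["notify_parent_on_proceed", "save_image_for_parent"]
    return actions

print(handle_flagged_image(Account(age=12, feature_enabled=True)))
print(handle_flagged_image(Account(age=15, feature_enabled=True)))
```

Laying the flow out this way makes the asymmetry plain: the classifier’s verdict, not the child’s judgment, determines whether a parent is pulled into the conversation.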

This means that if—for instance—a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, the sender does not receive a notification that iMessage considers their image to be “explicit” or that the recipient’s parent will be notified. The recipient’s parents will be informed of the content without the sender consenting to their involvement. Additionally, once sent or received, the “sexually explicit image” cannot be deleted from the under-13 user’s device.

Whether sending or receiving such content, the under-13 user has the option to decline without the parent being notified. Nevertheless, these notifications give the sense that Apple is watching over the user’s shoulder—and in the case of under-13s, that’s essentially what Apple has given parents the ability to do.

It is also important to note that Apple has chosen to use the notoriously difficult-to-audit technology of machine learning classifiers to determine what constitutes a sexually explicit image. We know from years of documentation and research that machine-learning technologies, used without human oversight, have a habit of wrongfully classifying content, including supposedly “sexually explicit” content. When blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery in the net, including pictures of Pomeranian puppies, selfies of fully-clothed individuals, and more. Facebook’s attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen’s Little Mermaid. These filters have a history of chilling expression, and there’s plenty of reason to believe that Apple’s will do the same.

Since the detection of a “sexually explicit image” will be using on-device machine learning to scan the contents of messages, Apple will no longer be able to honestly call iMessage “end-to-end encrypted.” Apple and its proponents may argue that scanning before or after a message is encrypted or decrypted keeps the “end-to-end” promise intact, but that would be semantic maneuvering to cover up a tectonic shift in the company’s stance toward strong encryption.

Whatever Apple Calls It, It’s No Longer Secure Messaging

As a reminder, a secure messaging system is a system where no one but the user and their intended recipients can read the messages or otherwise analyze their contents to infer what they are talking about. Despite messages passing through a server, an end-to-end encrypted message will not allow the server to know the contents of a message. When that same server has a channel for revealing information about the contents of a significant portion of messages, that’s not end-to-end encryption. In this case, while Apple will never see the images sent or received by the user, it has still created the classifier that scans the images that would provide the notifications to the parent. Therefore, it would now be possible for Apple to add new training data to the classifier sent to users’ devices or send notifications to a wider audience, easily censoring and chilling speech.

But even without such expansions, this system will give parents who do not have the best interests of their children in mind one more way to monitor and control them, limiting the internet’s potential for expanding the world of those whose lives would otherwise be restricted. And because family sharing plans may be organized by abusive partners, it's not a stretch to imagine using this feature as a form of stalkerware.

People have the right to communicate privately without backdoors or censorship, including when those people are minors. Apple should make the right decision: keep these backdoors off of users’ devices.




India McKinney

We Have Questions for DEF CON's Puzzling Keynote Speaker, DHS Secretary Mayorkas

1 month 2 weeks ago

The Secretary of Homeland Security, Alejandro Mayorkas, will be giving a DEF CON keynote address this year. Those attending this weekend’s hybrid event will have a unique opportunity to “engage” with the man who heads the department responsible for surveillance of immigrants, Muslims, Black activists, and other marginalized communities. We at EFF, as longtime supporters of the information security community who have stood toe-to-toe with government agencies including DHS, have thoughts on areas where Secretary Mayorkas must address digital civil liberties and human rights. So we thought it prudent to suggest some questions you might ask.

If you’re less than optimistic about getting satisfying answers to these from the Secretary, here are some organizations who are actively working to protect the rights of people targeted by the Department of Homeland Security:


Learn more about EFF's virtual participation in DEF CON 29 including EFF Tech Trivia, our annual member shirt puzzle, and a special presentation with author and Special Advisor Cory Doctorow.

Will Greenberg

16 Civil Society Organizations Call on Congress to Fix the Cryptocurrency Provision of the Infrastructure Bill

1 month 2 weeks ago

The Electronic Frontier Foundation, Fight for the Future, Defending Rights and Dissent and 13 other organizations sent a letter to Senators Charles Schumer (D-NY), Mitch McConnell (R-KY), and other members of Congress asking them to act swiftly to amend the vague and dangerous digital currency provision of Biden’s infrastructure bill.

The fast-moving, must-pass legislation is over 2,000 pages and primarily focused on issues such as updating America’s highways and digital infrastructure. However, included in the “pay-for” section of the bill is a provision relevant to cryptocurrencies that includes a new, vague, and expanded definition of what constitutes a “broker” under U.S. tax law. As EFF described earlier this week, this vaguely worded section of the bill could be interpreted to mean that many actors in the cryptocurrency space—including software developers who merely write and publish code, as well as miners who verify cryptocurrency transactions—would suddenly be considered brokers, and thus need to collect and report identifying information on their users.

In the wake of heated opposition from the technical and civil liberties community, some senators are taking action. Senators Wyden, Lummis, and Toomey have introduced an amendment that seeks to ensure that some of the worst interpretations of this provision are excluded. Namely, the amendment would seek to clarify that miners, software developers who do not hold assets for customers, and those who create hardware and software to support consumers in holding their own cryptocurrency would not be implicated under the new definition of broker.

We have already seen how digital currency supports independent community projects, routes around financial censorship, and sustains independent journalists around the world. Indeed, the decentralized nature of digital currency is allowing cryptographers and programmers to experiment with more privacy-protective exchanges, and to offer alternatives for those who wish to protect their financial privacy or who have been subject to financial censorship.

The privacy rights of cryptocurrency users are a complex topic. Properly addressing such an issue requires ample opportunity for civil liberties experts to offer feedback on proposals. But there has been no opportunity for that in the rush to fund this unrelated bill. That’s why the coalition that sent the letter—which includes national and local groups representing privacy advocates, journalists, technologists, and cryptocurrency users—shares a common concern about this provision’s push to run roughshod over this nuanced issue.

The Wyden-Lummis-Toomey Amendment removes reporting obligations from network participants who don’t have, and shouldn’t have, access to customer information. It does so without affecting the reporting obligations placed on brokers and traders of digital assets.

Read the full letter here:

rainey Reitman

With Great Power Comes Great Responsibility: Platforms Want To Be Utilities, Self-Govern Like Empires

1 month 2 weeks ago
Believe the Hype

After decades of hype, it’s only natural for your eyes to skate over corporate mission statements without stopping to take note of them. But when it comes to ending your relationship with a tech giant, those stated goals take on a sinister cast.

Whether it’s “bringing the world closer together” (Facebook), “organizing the world’s information” (Google), being a market “where customers can find and discover anything they might want to buy online” (Amazon), or making “personal computing accessible to each and every individual” (Apple), the founding missions of tech giants reveal a desire to become indispensable to our digital lives.

They’ve succeeded. We’ve entrusted these companies with our sensitive data, from family photos to finances to correspondence. We’ve let them take over our communities, from medical and bereavement support groups to little league and service organization forums. We’ve bought trillions of dollars’ worth of media from them, locked in proprietary formats that can’t be played back without their ongoing cooperation.

These services often work great - but when they fail, they fail very, very badly. Tech giants can run servers that support hundreds of millions or billions of users, but they either can’t or won’t create equally user-centric procedures for suspending or terminating those users’ accounts.

But as bad as tech giants’ content removal and account termination policies are, they’re paragons of sense and transparency when compared to their appeals processes. Many who try to appeal a tech company’s judgment quickly find themselves mired in a Kafkaesque maze of automated emails (to which you often can’t reply), requests for documents that either don’t exist or have already been furnished on multiple occasions, and high-handed, terse “final judgments” with no explanations or appeal.

The tech giants argue that they are entitled to run their businesses largely as they see fit: if you don’t like the house rules, just take your business elsewhere. But those house rules are pretty arbitrary: platforms’ public-facing moderation policies are vaguely worded and open to selective interpretation, and their account termination policies are even more opaque.

Kafka Was An Optimist

All of that would be bad enough, but when it is combined with the tech companies’ desire to dominate your digital life and become indispensable to your daily existence, it gets much worse.

Losing your cloud account can cost you decades of your family photos. Losing access to your media account can cost you access to thousands of dollars’ worth of music, movies, audiobooks and ebooks. Losing your IoT account can render your whole home uninhabitable, freezing the door locks while bricking your thermostat, burglar alarm and security cameras. 

But really, it’s worse than that: you will incur multiple losses if you get kicked off just one service. Losing your account with Amazon, Google or Apple can cost you access to your home automation and security, your mobile devices, your purchased ebooks/audiobooks/movies/music, and your photos. Losing your Apple or Google account can cost you decades’ worth of personal correspondence - from the last email sent by a long-dead friend to that file-attachment from your bookkeeper that you need for your tax audit. These services are designed to act as your backup - your offsite cloud, your central repository - and few people understand or know how to make a local copy of all the data that is so seamlessly whisked from their devices onto big companies’ servers.

In other words, the tech companies set out to make us dependent on them for every aspect of our online lives, and they succeeded - but when it comes to kicking you off their platforms, they still act like you’re just a bar patron at last call, not someone whose life would be shattered if they cut you off.

YouTubers Warned Us

This has been brewing for a long time. YouTubers and other creative laborers have long suffered under a system where the accounts on which they rely to make their livings could be demonetized, suspended or deleted without warning or appeal. But today, we’re all one bad moderation call away from having our lives turned upside-down.

The tech giants’ conquest of our digital lives is just getting started. Tech companies want to manage our health, dispense our medication, take us to the polls on election day, televise our political debates and teach our kids. Each of these product offerings comes with grandiose pretensions to total dominance - it’s not enough for Amazon Pharmacy to be popular, it will be the most popular, leveraging Amazon’s existing business to cut off your corner druggist’s market oxygen (Uber’s IPO included a plan to replace all the world’s public transit and taxi vehicles with rideshares). 

If the tech companies deliver on their promises to their shareholders, then being locked out of your account might mean being locked out of whole swathes of essential services, from buying medicine to getting to work.

Well, How Did We Get Here?

How did the vibrant electronic frontier become a monoculture of “five websites, each consisting of screenshots of text from the other four?” 

It wasn’t an accident. Tech, copyright, contract and competition policy helped engineer this outcome, as did VCs and entrepreneurs who decided that online businesses were only worth backing if they could grow to world-dominating scale.

Take laws like Section 1201 of the Digital Millennium Copyright Act, a broadly worded prohibition on tampering with or removing DRM, even for lawful purposes. When Congress passed the DMCA in 1998, they were warned that protecting DRM - even when no copyright infringement took place - would leave technology users at the mercy of corporations. You may have bought your textbooks or the music you practice piano to, but if it’s got DRM and the company that sold it to you cuts you off, the DMCA does not let you remove that DRM (say goodbye to your media). 

Companies immediately capitalized upon this dangerously broad law: they sold you media that would only play back on the devices they authorized. That locked you into their platform and kept you from defecting to a rival, because you couldn’t take your media with you. 

But even as DRM formats proliferated, the companies that relied on them continued to act like kicking you off their platforms was like the corner store telling you to buy your magazines somewhere else - not like a vast corporate empire of corner stores sending goons to your house to take back every newspaper, magazine and paperback you ever bought there, with no appeal.

It’s easy to see how the DMCA and DRM give big companies far-reaching control over your purchases, but other laws have had a similar effect. The Computer Fraud and Abuse Act (CFAA), another broadly worded mess of a law, is so badly drafted that tech companies were able to claim for decades that simply violating their terms of service could be a crime - a chilling claim that was only put to rest by the Supreme Court this summer.

From the start, tech lawyers and the companies they worked for set things up so that most of the time, our digital activities are bound by contractual arrangements, not ownership. These are usually mass contracts, with one-sided terms of service. They’re end user license agreements that ensure that the company has a simple process for termination without any actual due process, much less strong remedies if you lose your data or the use of your devices.  

CFAA, DMCA, and other rules allowing easy termination and limiting how users and competitors could reconfigure existing technology created a world where doing things that displeased a company’s shareholders could literally be turned into a crime - a kind of “felony contempt of business-model.” 

These kinds of shady business practices wouldn’t have been quite so bad if there were a wide variety of small firms that allowed us to shop around for a better deal. 

Unfortunately, the modern tech industry was born at the same moment as American antitrust law was being dismantled - literally. The Apple ][+ appeared on shelves the same year Ronald Reagan hit the campaign trail. After winning office, Reagan inaugurated a 40-year, bipartisan project to neuter antitrust law: allowing incumbents to buy and crush small companies before they could grow to be threats, letting giant companies merge with their direct competitors, and looking the other way while companies established “vertical monopolies” that controlled their whole supply chains.

Without any brakes, the runaway merger train went barrelling along, picking up speed. Today’s tech giants buy companies more often than you buy groceries, and it has turned the tech industry into a “kill-zone” where innovative ideas go to die.

How is it that you can wake up one day and discover you’ve lost your Amazon account, and get no explanation? How is it that this can cost you the server you run your small business on, a decade of family photos, the use of your ebook reader and mobile phone, and access to your entire library of ebooks, movies and audiobooks?


Amazon is in so many parts of your life because it was allowed to merge with small competitors, create vertical monopolies, wrap its media with DRM - and never take on any obligations to be fair or decent to customers it suspected of some unspecified wrongdoing. 

Not just Amazon, either - every tech giant has an arc that looks like Amazon’s, from the concerted effort to make you dependent on its products, to the indifferent, opaque system of corporate “justice” governing account termination and content removal.

Fix the Tech Companies

Companies should be better. Moderation decisions should be transparent, rules-based, and follow basic due process principles. All of this - and more - has been articulated in detail by an international group of experts from industry, the academy, and human rights activism, in an extraordinary document called The Santa Clara Principles. Tech companies should follow these rules when moderating content, because even if they are free to set their own house rules, the public has the right to tell them when those rules suck and to suggest better ones.

If a company does kick you off its platform - or if you decide to leave - they shouldn’t be allowed to hang onto your data (or just delete it). It’s your data, not theirs. The concept of a “fiduciary” - someone with a duty to “act in good faith” towards you - is well-established. If you fire your lawyer (or if they fire you as a client), they have to give you your files. Ditto your doctor or your mental health professional. 

Many legal scholars have proposed creating “information fiduciary” rules that create similar duties for firms that hold your data. This would impose a “duty of loyalty” (to act in the best interests of their customers, without regard to the interests of the business), and a “duty of care” (to act in the manner expected by a reasonable customer under the circumstances). 

Not only would this go a long way to resolving the privacy abuses that plague our online interactions - it would also guarantee you the right to take your data with you when you left a service, whether that departure was your idea or not. 

The information fiduciary model isn’t the only way to get companies to behave responsibly. Direct consumer protection laws - such as requiring companies to make your content readily available to you in the event of termination - could work too (there are other approaches as well). How these rules apply would depend on the content hosted as well as the size of the business you’re dealing with - small companies would struggle to meet the standards we’d expect of giant companies. But every online service should have some duties to you: if the company that just kicked you off its servers and took your wedding photos hostage is a two-person operation, you still want your pictures back!

Fix the Internet

Improving corporate behavior is always a laudable goal, but the real problem with giant companies that are entwined in your life in ways you can’t avoid isn’t that those companies wield their incredible power unwisely. It’s that they have that power in the first place.

To give power to internet users, we have to take it away from giant internet companies. The FTC - under new leadership - has pledged that it will end decades of waving through anticompetitive mergers. That’s just for openers, though. Competition scholars and activists have made the case for the harder task of breaking up the giants, literally cutting them down to size.

But there’s more.  Congress is considering the ACCESS Act, landmark legislation that would force the largest companies to interoperate with privacy-respecting new rivals, who’d be banned from exploiting user data. If the ACCESS Act passes, it will dramatically lower the high switching costs that keep us locked into big platforms even though we don’t like the way they operate. It also protects folks who want to develop tools to make it easier for you to take your data when you leave, whether voluntarily or because your account is terminated. 

That’s how we’ll turn the internet back into an ecosystem of companies, co-ops and nonprofits of every size that can take receipt of your data, and offer you an online base of operations from which you can communicate with friends, communities and customers regardless of whether they’re on the indieweb or inside a Big Tech silo.

That still won’t be enough, though. The fact that terms of service, DRM, and other technologies and laws can prevent third parties from supplying software for your phone, playing back the media you’ve bought, and running the games you own still gives big companies too much leverage over your digital life.

That’s why we need to restore the right to interoperate, in all its guises: competitive compatibility (the right to plug new products and services into existing ones, with or without permission from their manufacturers), bypassing DRM (we’re suing to make this happen!), the right to repair (a fight we’re winning!) and an end to abusive terms of service (the Supreme Court got this one right).

Digital Rights are Human Rights

When we joined this fight 30 long years ago, very few people got it. Our critics jeered at the very idea of “digital rights” - as if the nerdfights over Star Trek forums could somehow be compared to history’s great struggles for self-determination and justice! Even a decade ago, the idea of digital rights was still met with skepticism.

But we didn’t get into this to fight for “digital rights” - we’re here to defend human rights. The merger of the “real world” and the “virtual world” could be argued over in the 1990s, but not today, not after a lockdown where the internet became the nervous system for the planet, a single wire we depended on for free speech, a free press, freedom of assembly, romance, family, parenting, faith, education, employment, civics and politics.

Today, everything we do involves the internet. Tomorrow, everything will require it. We can’t afford to let our digital citizenship be reduced to a heavy-handed mess of unreadable terms of service and broken appeals processes.

We have the right to a better digital future - a future where the ambitions of would-be monopolists and their shareholders take a back-seat to fairness, equity, and your right to self-determination.

Cory Doctorow

Flex Your Power. Own Your Tech.

1 month 2 weeks ago

Before advanced computer graphics, a collection of clumsy pixels would represent ideas far more complex than technology could capture on its own. With a little imagination, crude blocks on a screen could transform into steel titans and unknown worlds. It’s that spirit of creativity and vision that we celebrate each year at the Las Vegas hacker conferences—BSidesLV, Black Hat, and DEF CON—and beyond.

The Electronic Frontier Foundation has advised tinkerers and security researchers at conferences like these for decades because human ingenuity is faster and hungrier than the contours of the law. Copyright, patent, and hacking statutes often conflict with legitimate activities for ordinary folks, driving EFF to help fill the all-too-common gap in people's understanding of technology and the law. Thankfully, support from the public has allowed EFF to continue leading efforts to even the playing field for everyone. It brings us all closer to EFF's ambitious, and increasingly urgent, view of the future: one where creators keep civil liberties and human rights at the center of technology. You can help us build that future as an EFF member.

Stand with EFF

Join EFF and Protect Online Freedom

In honor of this week's hacker conferences, EFF’s annual mystery-filled DEF CON t-shirt is available to everyone, but it won’t last long! Our DC29 Pixel Mech design is a reminder that simple ideas can have colossal potential. Like previous years' designs, this one has more to it than meets the eye.

EFF members' commitment to tech users and creators is more necessary each day. Together we've been able to develop privacy-enhancing tools like Certbot to encrypt more of the web; work with policymakers to support fiber broadband infrastructure; beat back dangerous and invasive public-private surveillance partnerships; propose user-focused solutions to big tech strangleholds on your free expression and consumer choice; and rein in oppressive tech laws like the CFAA, which we just fought - and won - in the U.S. Supreme Court.

Just as a good hacker sees worlds of possibility in plastic, metal, and pixels, we must all envision and work for a future that’s better than what we’re given. It doesn't matter whether you're an OG cyberpunk phreaker or you just enjoy checking out the latest viral dance moves: we all benefit from a web that empowers users and supports curiosity and creativity online. Support EFF's vital work this year!

Viva Las Vegas, wherever you are.


EFF is a U.S. 501(c)(3) nonprofit with a top rating from Charity Navigator. Your gift is tax-deductible as allowed by law. You can even support EFF all year with a convenient monthly donation!

Aaron Jue

The Cryptocurrency Surveillance Provision Buried in the Infrastructure Bill is a Disaster for Digital Privacy

1 month 2 weeks ago

The forthcoming Senate draft of Biden's infrastructure bill—a 2,000+ page bill designed to update the United States’ roads, highways, and digital infrastructure—contains a poorly crafted provision that could create new surveillance requirements for many within the blockchain ecosystem. This could include developers and others who do not control digital assets on behalf of users.

While the language is still evolving, the proposal would seek to expand the definition of “broker” under section 6045(c)(1) of the Internal Revenue Code of 1986 to include anyone who is “responsible for and regularly providing any service effectuating transfers of digital assets” on behalf of another person. These newly defined brokers would be required to comply with IRS reporting requirements for brokers, including filing form 1099s with the IRS. That means they would have to collect user data, including users’ names and addresses.

The broad, confusing language leaves open a door for almost any entity within the cryptocurrency ecosystem to be considered a “broker”—including software developers and cryptocurrency startups that aren’t custodying or controlling assets on behalf of their users. It could even potentially implicate miners, those who confirm and verify blockchain transactions. The mandate to collect names, addresses, and transactions of customers means almost every company even tangentially related to cryptocurrency may suddenly be forced to surveil their users. 

How this would work in practice is still very much an open question. Indeed, perhaps this extremely broad interpretation was not even the intent of the drafters of this language. But given the rapid timeline for the bill’s likely passage, those answers may not be resolved before it hits the Senate floor for a vote.

Some may wonder why an infrastructure bill primarily focused on topics like highways is even attempting to address as complex and evolving a topic as digital privacy and cryptocurrency. This provision is actually buried in the section of the bill relevant to covering the costs of the other proposals. In general, bills that seek to offer new government services must explain how the government will pay for those services. This can be done through increasing taxes or by somehow improving tax compliance. The cryptocurrency provision in this bill is attempting to do the latter. The argument is that by engaging in more rigorous surveillance of the cryptocurrency community, the Biden administration will see more tax revenue flow in from this community without actually increasing taxes, and thus be able to cover $28 billion of its $2 trillion infrastructure plan. Basically, it’s presuming that huge swaths of cryptocurrency users are engaged in mass tax avoidance, without providing any evidence of that.

Make no mistake: there is a clear and substantial harm in ratcheting up financial surveillance and forcing more actors within the blockchain ecosystem to gather data on users. Including this provision in the infrastructure bill will: 

  • Require new surveillance of everyday users of cryptocurrency;
  • Force software creators and others who do not custody cryptocurrency for their users to implement cumbersome surveillance systems or stop offering services in the United States;
  • Create more honeypots of private information about cryptocurrency users that could attract malicious actors; and
  • Create more legal complexity to developing blockchain projects or verifying transactions in the United States—likely leading to more innovation moving overseas.

Furthermore, it is impossible for miners and developers to comply with these reporting requirements; these parties have no way to gather that type of information. 

The bill could also create uncertainty about the ability to conduct cryptocurrency transactions directly with others, via open source code (e.g. smart contracts and decentralized exchanges), while remaining anonymous. The ability to transact directly with others anonymously is fundamental to civil liberties, as financial records provide an intimate window into a person's life.

This poor drafting appears to be yet another example of lawmakers failing to understand the underlying technology used by cryptocurrencies. EFF has long advocated for Congress to protect consumers by focusing on malicious actors engaged in fraudulent practices within the cryptocurrency space. However, overbroad and technologically disconnected cryptocurrency regulation could do more harm than good. Blockchain projects should serve the interests and needs of users, and we hope to see a diverse and competitive ecosystem where values such as individual privacy, censorship-resistance, and interoperability are designed into blockchain projects from the ground up. Smart cryptocurrency regulation will foster this innovation and uphold consumer privacy, not surveil users while failing to do anything meaningful to combat fraud.

EFF has a few key concepts we’ve urged Congress to adopt when developing cryptocurrency regulation, specifically that any regulation:

  • Should be technologically neutral;
  • Should not apply to those who merely write and publish code;
  • Should provide protections for individual miners, merchants who accept cryptocurrencies, and individuals who trade in cryptocurrency as consumers;
  • Should focus on custodial services that hold and trade assets on behalf of users;
  • Should provide an adequate on-ramp for new services to comply;
  • Should recognize the human right to privacy;
  • Should recognize the important role of decentralized technologies in empowering consumers;
  • Should not chill future innovation that will benefit consumers.

The poorly drafted provision in Biden’s infrastructure bill fails our criteria across the board.

The Senate should act swiftly to modify or remove this dangerous provision. Getting cryptocurrency regulation right means ensuring an opportunity for public engagement and nuance—and the breakneck timeline of the infrastructure bill leaves no chance for either.

rainey Reitman

DHS’s Flawed Plan for Mobile Driver’s Licenses

1 month 3 weeks ago

Digital identification can invade our privacy and aggravate existing social inequities. Designed wrong, it might be a big step towards national identification, in which every time we walk through a door or buy coffee, a record of the event is collected and aggregated. Also, any system that privileges digital identification over traditional forms will disadvantage people already at society’s margins.

So, we’re troubled by proposed rules on “mobile driver’s licenses” (or “mDLs”) from the U.S. Department of Homeland Security. And we’ve joined with the ACLU and EPIC to file comments that raise privacy and equity concerns about these rules. The stakes are high, as the comments explain:

By making it more convenient to show ID and thus easier to ask for it, digital IDs would inevitably make demands for ID more frequent in American life. They may also lead to the routine use of automated or “robot” ID checks carried out not by humans but by machines, causing such demands to proliferate even more. Depending on how a digital ID is designed, it could also allow centralized tracking of all ID checks, and raise other privacy issues. And we would be likely to see demands for driver’s license checks become widespread online, which would enormously expand the tracking information such ID checks could create. In the worst case, this would make it nearly impossible to engage in online activities that aren’t tied to our verified, real-world identities, thus hampering the ability to engage in constitutionally protected anonymous speech and facilitating privacy-destroying persistent tracking of our activities and associations.

Longer-term, if digital IDs replace physical documents entirely, or if physical-only document holders are placed at a disadvantage, that could have significant implications for equity and fairness in American life. Many people do not have smartphones, including many from our most vulnerable communities. Studies have found that 15 percent of the population does not own a smartphone, including almost 40 percent of people over 65 and 24 percent of people who make less than $30,000 a year.

Finally, we are concerned that the DHS proposal layers REAL ID with mDL. REAL ID has many privacy problems, which should not be carried over into mDLs. Moreover, if a person had an mDL issued by a state DMV, that would address forgery and cloning concerns, without the need for REAL ID and its privacy problems.

Adam Schwartz

Texas AG Paxton's Retaliatory Investigation of Twitter Goes to Ninth Circuit

1 month 3 weeks ago

Governments around the world are pressuring websites and social media platforms to publish or censor particular speech with investigations, subpoenas, raids, and bans. Burdensome investigations are enough to coerce targets of this retaliation into following a government’s editorial line. In the US, longstanding First Amendment law recognizes and protects people from this chilling effect. In an amicus brief filed July 23, EFF, along with the Center for Democracy and Technology and other partner organizations, urged the Ninth Circuit Court of Appeals to apply this law and protect Twitter from a retaliatory investigation by Texas Attorney General Ken Paxton.

After Twitter banned then-President Trump following the January 6 riots at the U.S. Capitol, Paxton issued a Civil Investigative Demand (CID) to Twitter (and other major online platforms) for, among other things, any documents relating to its terms of use and content moderation practices. The CID alleged "possible violations" of Texas's deceptive practices law.

The district court allowed Paxton's investigation to proceed because, even if it was retaliatory, Paxton would have to go to court to enforce the CID: Twitter had to let the investigation play out before it could sue. But as our brief says, courts have recognized that “even pre-enforcement, threatened punishment of speech has a chilling effect.” You don't have to wait to go to court when your free expression is inhibited.

Access to online platforms with different rules and environments generally benefits users, though the brief also points out that EFF and partner organizations have criticized platforms for removing benign posts, censoring human rights activists and journalists, and other bad content moderation practices. We have to address those mistakes, but not through chilling government investigations.

Mukund Rathi

The Bipartisan Broadband Bill: Good, But It Won’t End the Digital Divide

1 month 3 weeks ago

The U.S. Senate is on the cusp of approving an infrastructure package, which passed a critical first vote last night by 67-32. Negotiations on the final bill are ongoing, but late yesterday NBC News had the draft broadband provisions. There is a lot to like in it, some of which will depend on decisions by the state governments and the Federal Communications Commission (FCC), and some drawbacks. Assuming that what was released makes it into the final bill, here is what to expect.

Not Enough Money to Close the Digital Divide Across the U.S.

We have long advocated, backed by evidence, for a plan to connect every American to fiber. It is a vital part of any nationwide communications policy that intends to actually function in the 21st century. The future is clearly heading towards more symmetrical uses that will require more bandwidth at very low latency. Falling short of that will inevitably create a new digital divide, this one between those with 21st-century access and those without. Fiber-connected people will head towards the cheaper, symmetrical multi-gigabit era while others are stuck on capacity-constrained, expensive legacy wires. This “speed chasm” will create a divide between those who can participate in an increasingly remote, telecommuting world and those who cannot.

Most estimates put the price tag of universal fiber at $80 to $100 billion, but this bipartisan package proposes only $40 billion in total for construction. It’s pretty obvious that this shortfall will deprive many areas of the funding they need to deliver fiber—or really any broadband access—to the millions of Americans who lack it.

Congress can rectify this shortfall in the future with additional infusions of funding, as well as a stronger emphasis on treating fiber as infrastructure rather than purely as a broadband service. But it should be clear what it means not to do so now. Some states will do very well under this proposal, with the federal efforts complementing already existing state efforts. For example, California already has a state universal fiber effort underway that recruits all local actors to work with the state to deliver fiber infrastructure. More federal dollars will just augment an already very good thing there. But other states may, unfortunately, get duped into building out or subsidizing slow networks that will inevitably need to be replaced. That will cost the state and federal governments more money in the end. This isn’t fated to happen, but it’s a risk invited by the legislation’s adoption of 100/20 Mbps as the build-out metric instead of 100/100 Mbps.

Protecting the Cable Monopolies Instead of Giving Us What We Need

Lobbyists for the slow legacy internet access companies descended on Capitol Hill with a range of arguments trying to dissuade Congress from creating competition in neglected markets, which in turn would force existing carriers to provide better service. Everyone will eventually need access to fiber-optic infrastructure. Our technical analysis has made clear that fiber is the superior medium for 21st-century broadband, which is why government infrastructure policy needs to be oriented around pushing fiber into every community.

Even major wireless industry players agree now that fiber is “inextricably linked” with future high-speed wireless connectivity. But all of this was very inconvenient for existing legacy monopolies. Most notably, cable stood to lose if too many people got very fast, cheaper internet from someone else. The legislation includes provisions that effectively insulate the underinvested cable monopoly markets from federal dollars. That, arguably, is the worst outcome here.

By defining internet access as the ability to get 100/20 Mbps service, the draft language allows cable monopolies to argue that anyone with access to ancient, insufficient internet access does not need federal money for new infrastructure. That means communities stuck with nearly decade-old DOCSIS 3.0 broadband are shielded from having federal dollars used to build fiber. Copper-DSL-only areas, and areas entirely without broadband, will likely take the lion’s share of the $40 billion made available. In addition to rural areas, pockets of urban markets where people still lack broadband will qualify. This will lead to an absurd result: people on inferior, too-expensive cable services will be treated as equally served as their neighbors who get federally funded fiber.

The Future-Proofing Criteria Is Essential to Help Avoid Wasting These Investments

The proposal establishes a priority (not a mandate) for future-proof infrastructure, which is essential to keep the 100/20 Mbps speed, or something close to it, from becoming the standard. Legacy industry was fond of telling Congress to be “technology neutral” in its policy, when really it was asking Congress to create a program that subsidized obsolete connections by lowering the bar. The future-proofing provision helps avoid that outcome by establishing federal priorities for the broadband projects being funded (see below).

This is where things will be challenging in the years to come. The Biden Administration has been crystal clear about the link between fiber infrastructure and future-proofing, per the Treasury guidelines that implemented the broadband provisions of the American Rescue Plan. But the bipartisan bill gives a lot of discretion to the states to distribute the funds. Without a doubt, the same lobby that descended on Congress to argue against 100/100 Mbps will attempt to con state governments into believing that any infrastructure will deliver these goals. That is just not true as a matter of physics. States that understand this will push fiber, and they are given the flexibility to do so here.

Digital Discrimination Rules

Under the section titled “digital discrimination,” the bill requires the FCC to establish what it means to have equal access to broadband and, more importantly, what a carrier would have to do to violate such a requirement. This provision carries major possibilities but depends on whom the president nominates to run the FCC, as it will be their responsibility to set the rules. If done right, it can set the stage for addressing digital redlining in urban communities and push fiber on equitable terms.

If the FCC gets the regulation right, the most direct beneficiaries are likely to be city broadband users who have been left behind. Even in big cities with profitable markets, people have been left behind. For example, per the city’s own internal analysis, San Francisco has approximately 100,000 people who lack broadband (most of whom are low-income and predominantly people of color), yet they are surrounded by Comcast and AT&T fiber deployments in that same city. The same is true in various other major cities per numerous studies, which is why EFF has called for a ban on digital redlining at both the state and federal levels.

Ernesto Falcon

It’s Time for Police to Stop Using ShotSpotter

1 month 3 weeks ago

Court documents recently reviewed by VICE have revealed that ShotSpotter, a company that makes and sells audio gunshot detection to cities and police departments, may not be as accurate or reliable as the company claims. In fact, the documents reveal that employees at ShotSpotter may be altering alerts generated by the technology in order to justify arrests and buttress prosecutors’ cases. For many reasons, including the concerns raised by these recent reports, police must stop using technologies like ShotSpotter.

Acoustic gunshot detection relies on a series of sensors, often placed on lamp posts or buildings. If a gunshot is fired, the sensors detect the specific acoustic signature of a gunshot and send the time and location to the police. Location is gauged by measuring the amount of time it takes for the sound to reach sensors in different locations.
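
ShotSpotter’s actual implementation is not public, but the general technique described above, locating a sound source from time differences of arrival (TDOA) at several sensors, can be sketched in a few lines. The sensor layout, speed of sound, and brute-force grid search below are illustrative assumptions, not ShotSpotter’s method:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 °C

def arrival_time(source, sensor, t0=0.0):
    """Time at which a sound emitted at t0 from `source` reaches `sensor`."""
    return t0 + math.dist(source, sensor) / SPEED_OF_SOUND

def locate(sensors, times, step=10.0, extent=500.0):
    """Brute-force grid search for the point whose time differences of
    arrival across sensors best match the observed ones. Because only
    differences are compared, the unknown emission time cancels out."""
    observed = [t - times[0] for t in times]  # differences vs. sensor 0
    best, best_err = None, float("inf")
    n = int(2 * extent / step)
    for xi in range(n + 1):
        for yi in range(n + 1):
            p = (-extent + xi * step, -extent + yi * step)
            pred0 = math.dist(p, sensors[0]) / SPEED_OF_SOUND
            err = sum(
                (math.dist(p, s) / SPEED_OF_SOUND - pred0 - d) ** 2
                for s, d in zip(sensors, observed)
            )
            if err < best_err:
                best, best_err = p, err
    return best
```

Real deployments use many more sensors, correct for echoes, wind, and temperature, and solve the hyperbolic TDOA equations directly rather than scanning a grid; the sketch only shows why timing differences alone pin down a location.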

According to ShotSpotter, the largest vendor of acoustic gunshot detection technology, this information is then verified by human acoustic experts to confirm the sound is gunfire, and not a car backfire, firecracker, or other sounds that could be mistaken for gunshots. The sensors themselves can only determine whether there is a loud noise that somewhat resembles a gunshot. It’s still up to people listening on headphones to say whether or not shots were fired.

In a recent statement, ShotSpotter denied the VICE report and claimed that the technology is “100% reliable.” Absolute claims like these are always dubious. And according to the testimony of a ShotSpotter employee and expert witness in court documents reviewed by VICE, claims about the accuracy of the classification come from the marketing department of the company—not from engineers.

Moreover, ShotSpotter presents a real and disturbing threat to people who live in cities covered in these AI-augmented listening devices—which all too often are over-deployed in majority Black and Latine neighborhoods. A recent study of Chicago showed that, over the span of 21 months, ShotSpotter sent police to dead-end reports of shots fired over 40,000 times—though some experts and studies have disputed this figure. This shows—again—that the technology is not as accurate as the company’s marketing department claims. It also means that police officers are routinely deployed to neighborhoods expecting to encounter an armed shooter, and instead encounter innocent pedestrians and neighborhood residents. This creates a real risk that police officers will interpret anyone they encounter near the projected site of the loud noises as a threat—a scenario that could easily result in civilian casualties, especially in over-policed communities.

In addition to its history of false positives, the danger it poses to pedestrians and residents, and the company's dubious record of altering data at the behest of police departments, there is also a civil liberties concern posed by the fact that these microphones intended to detect gunshots can also record human voices.

Yet people in public places—for example, having a quiet conversation on a deserted street—are often entitled to a reasonable expectation of privacy, without overhead microphones unexpectedly recording their conversations. Federal and state eavesdropping statutes (sometimes called wiretapping or interception laws) typically prohibit the recording of private conversations absent consent from at least one person in that conversation.

In at least two criminal trials, prosecutors sought to introduce as evidence audio of voices recorded on acoustic gunshot detection systems. In the California case People v. Johnson, the court admitted it into evidence. In the Massachusetts case Commonwealth v. Denison,  the court did not, ruling that a recording of “oral communication” is prohibited “interception” under the Massachusetts Wiretap Act.

It’s only a matter of time before police and prosecutors’ reliance on ShotSpotter leads to tragic consequences. It’s time for cities to stop using ShotSpotter.

Matthew Guariglia

Disentangling Disinformation: Not as Easy as it Looks

1 month 3 weeks ago

Body bags claiming that “disinformation kills” line the streets today in front of Facebook’s Washington, D.C. headquarters. A group of protesters, affiliated with “The Real Facebook Oversight Board” (an organization that is, confusingly, not affiliated with Facebook or its Oversight Board), is urging Facebook’s shareholders to ban so-called misinformation “superspreaders”—that is, a specific number of accounts that have been deemed responsible for the majority of disinformation about the COVID-19 vaccines.

Disinformation about the vaccines is certainly contributing to their slow uptake in various parts of the U.S., as well as in other countries. This disinformation spreads through a variety of channels: local communities, family WhatsApp groups, FOX television hosts, and, yes, Facebook. The activists pushing for Facebook to remove these “superspreaders” are not wrong: while Facebook does currently ban some COVID-19 mis- and disinformation, urging the company to enforce its own rules more evenly is a tried-and-true tactic.

But while disinformation “superspreaders” are easy to identify based on the sheer amount of information they disseminate, tackling disinformation at a systemic level is not an easy task, and some of the policy proposals we’re seeing have us concerned. Here’s why.

1. Disinformation is not always simple to identify.

In the United States, it was only a few decades ago that the medical community deemed homosexuality a mental illness. It took serious activism and societal debate for the medical community to come to an understanding that it was not. Had Facebook been around—and had we allowed it to be the arbiter of truth—that debate might not have flourished.

Here’s a more recent example: There is much debate amongst the contemporary medical community as to the causes of ME/CFS, a chronic illness for which a definitive cause has not been determined—and which, just a few years ago, was thought by many not to be real. The Centers for Disease Control notes this and acknowledges that some healthcare providers may not take the illness seriously. Many sufferers of ME/CFS use platforms like Facebook and Twitter to discuss their illness and find community. If those platforms were to crack down on that discussion, relying on the views of the providers that deny the gravity of the illness, those who suffer from it would suffer more greatly.

2. Tasking an authority with determining disinfo has serious downsides.

As we’ve seen from the first example, there isn’t always agreement between authorities and society as to what is truthful—nor are authorities inherently correct.

In January, German newspaper Handelsblatt published a report stating that the Oxford-AstraZeneca vaccine was not efficacious for older adults, citing an anonymous government source and claiming that the German government’s vaccination scheme was risky.

AstraZeneca denied the claims, and no evidence that the vaccine was ineffective for older adults was procured, but it didn’t matter: Handelsblatt’s reporting set off a series of events that led to AstraZeneca’s reputation in Germany suffering considerably. 

Finally, it’s worth pointing out that even the CDC itself—the authority tasked with providing information about COVID-19—has gotten a few things wrong, most recently in May when it lifted its recommendation that people wear masks indoors, an event that was followed by a surge in COVID-19 cases. That shift was met with rigorous debate on social media, including from epidemiologists and sociologists—debate that was important for many individuals seeking to understand what was best for their health. Had Facebook relied on the CDC to guide its misinformation policy, that debate may well have been stifled.

3. Enforcing rules around disinformation is not an easy task.

We know that enforcing terms of service and community standards is a difficult task even for the most resourced, even for those with the best of intentions—like, say, a well-respected, well-funded German newspaper. But if a newspaper, with layers of editors, doesn’t always get it right, how can content moderators—who by all accounts are low-wage workers who must moderate a certain amount of content per hour—be expected to do so? And more to the point, how can we expect automated technologies—which already make a staggering amount of errors in moderation—to get it right?

The fact is, moderation is hard at any level and impossible at scale. Certainly, companies could do better when it comes to repeat offenders like the disinformation “superspreaders,” but the majority of content, spread across hundreds of languages and jurisdictions, will be much more difficult to moderate—and as with nearly every category of expression, plenty of good content will get caught in the net.

Jillian C. York

Should Congress Close the FBI’s Backdoor for Spying on American Communications? Yes.

1 month 3 weeks ago

All of us deserve the basic protection against government searches and seizures that the Constitution provides, including the requirement that law enforcement get a warrant before it can access our communications. But currently the FBI has a backdoor into our communications, a loophole that Congress can and should close.

This week, Congress will vote on the Commerce, Justice, Science, and Related Agencies Appropriations bill (H.R. 4505). Among many other things, this bill contains all the funding for the Department of Justice for Fiscal Year 2022, along with certain restrictions on how the DOJ is allowed to spend taxpayer funds. Reps. Lofgren, Massie, Jayapal, and Davidson have offered an amendment to the bill that would prohibit the use of taxpayer funds for warrantless searches of U.S. persons’ communications collected under Section 702 of the FISA Amendments Act. We strongly support this amendment.

Section 702 of the Foreign Intelligence Surveillance Act (FISA) requires tech and telecommunications companies to provide the U.S. government with access to emails and other communications to aid in national security investigations—ostensibly when U.S. persons are in communication with foreign surveillance targets abroad, or when wholly foreign communications transit the U.S. But in this wide-sweeping, dragnet approach to intelligence collection, the government also sweeps up a large amount of “incidental” communications—that is, millions of untargeted communications of U.S. persons collected alongside the intended data. Once the data is collected, the FBI bypasses the Fourth Amendment’s warrant requirement and sifts through these “incidental,” non-targeted communications of Americans, effectively using Section 702 as a backdoor around the Constitution. The FISA Court has told the FBI that this violates Americans’ Fourth Amendment rights, but that has not seemed to stop the practice, and, frustratingly, the court has failed to take steps to ensure that it stops.

This amendment would not only forbid the DOJ from engaging in this activity; it would also send a powerful signal to the intelligence agencies that Congress is serious about reform.

Take action

Tell your member of Congress to support this amendment today.

The DOJ is opposing this amendment, saying that it would inhibit their investigations and make them less successful in rooting out kidnappings and child trafficking. We’ve heard this argument before, and it’s just not convincing.

The FBI has a wide range of investigatory tools. The DOJ gives a scary list of potential investigations that it says might be impacted by removing this backdoor, but for every single one of them, the FBI can get a warrant or use other investigatory tools like National Security Letters. What the DOJ elides in protesting this narrow amendment is that the FBI has gotten used to searching through already-collected communications of Americans—overbroadly collected for foreign intelligence purposes—for domestic law enforcement purposes. But it is not the purpose of Section 702 to save the FBI the trouble of getting a warrant (FISA or otherwise) for domestic investigations, as the law and the Constitution require before it collects needed information from telecommunications and internet service providers. The FBI is in no way prohibited from using its long-standing, powerful investigatory tools under this amendment; it just can no longer piggyback on admittedly overbroad foreign intelligence collections.

The government also elides that what it wants is to take advantage of Section 702’s massive, well-documented over-collection to have a kind of time machine. There is a possibility that information collected by the NSA will be deleted before the FBI can get a warrant, but the FBI has not submitted any public (or, as far as we can tell, classified) evidence that this is a major problem in practice or has resulted in thwarted prosecutions, as opposed to just requiring a bit more effort by the FBI. But protecting Americans’ privacy is worth making the FBI follow the Constitution, even if it takes a bit more effort.

The U.S. Supreme Court has denied domestic law enforcement the power of a general warrant: collecting first a broad swath of Americans’ communications and then sorting through later what it may need. That is what the FBI is defending here; it is what the FISC raised concerns about, and it is what this amendment will rightfully stop.

Take action

Tell your member of Congress to support this amendment today.

India McKinney

EFF at 30: Freeing the Internet, with Net Neutrality Pioneer Gigi Sohn

1 month 3 weeks ago

To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.

To celebrate 30 years of defending online freedom, EFF held a candid live discussion with net neutrality pioneer and EFF board member Gigi Sohn, who served as Counselor to the Chairman of the Federal Communications Commission and co-founded the leading advocacy organization Public Knowledge. Joining the chat were EFF Senior Legislative Counsel Ernesto Falcon and Associate Director of Policy and Activism Katharine Trendacosta. You can watch the full conversation here or read the transcript.

In my perfect world, everyone’s connected to a future proof, fast, affordable—and open—internet.

On July 28, we’ll be holding our final EFF30 Fireside Chat—a "Founders Edition." EFF's Executive Director, Cindy Cohn will be joined by some of our founders and early board members, Esther Dyson, Mitch Kapor, and John Gilmore, to discuss everything from EFF's origin story and its role in digital rights to where we are today.

EFF30 Fireside Chat: Founders Edition
Wednesday, July 28, 2021 at 5 pm Pacific Time
Streaming Discussion with Q&A

RSVP to the Next EFF30 Fireside Chat


The conversation began with a comparison between the policy battles of the 1990s, the 2000s, and today: “What was happening was that the copyright industry—Hollywood, the recording industry, the book publishers—saw this technology that gave people power to control their own experience and what they wanted to see, and what they wanted to listen to, and it flipped them out...we really need[ed] an organization in Washington that’s dedicated to a free and open internet, that’s free of copyright gatekeepers, and free of ISP gatekeepers.” This was the founding of Public Knowledge, an organization that fights, alongside EFF, to protect the open internet.

Many think of net neutrality—the idea that internet service providers (ISPs) should treat all data that travels over their networks fairly, without improper discrimination in favor of particular apps, sites, or services—as a fairly recent issue. But it actually started in the late 1990s, Sohn explained. The battle, in many ways, began in earnest in Portland, when the city’s consumer protection agency told AT&T that its cable modem service was going to be regulated under Title VI of the Communications Act. This led to a court case in which the Ninth Circuit determined that cable modem service was actually a telecommunications service, fell under Title II of the Communications Act, and should be regulated similarly to telephone service. Watch the full clip below for a deep dive into net neutrality’s history:

Moving to the topic of broadband access, Katharine Trendacosta described it in terms similar to net neutrality: “It’s not a partisan issue. Most Americans support net neutrality. Most Americans need internet access.” And the increased need for access during the pandemic wasn’t a blip: “This is always what the future was going to look like.”

But crises like the pandemic do show the dangerous cracks that exist due to the current lack of broadband regulation. For example, Sohn explained, the Santa Clara fire department was throttled during the Mendocino Complex fire, and had nowhere to go to fix the problem. And over the last year, “the former FCC chairman had to beg the companies not to cut off people’s service during the pandemic. The FCC couldn’t say ‘you must,’ they had to say ‘Mother, may I?’” To put it bluntly, said Ernesto Falcon, as access is more critical than ever, the lack of authority leaves many people without recourse: “Three-quarters of Americans now think of broadband as the same as electricity and water in terms of its importance in everyday life--and the idea that you would have an unregulated monopoly selling you water, who wants that? No one wants that.”

In the regulatory vacuum, Sohn said, the states are the new battleground for getting net neutrality and broadband access to everyone--and they are well poised to fight that fight. Several states have passed net neutrality laws, including California (the ISPs, of course, are fighting back with lawsuits). And though the federal government has failed to properly expand broadband access, states can do, and some have done, much better:

The FCC and other agencies have spent about 50 billion dollars trying to build broadband everywhere and they’ve failed miserably. They invested in slow technologies, they weren’t careful with where they built, we have slow networks, and by one count we have 42 million Americans that don’t have access to any network at all. We need to be much much smarter. It’s not only about who gets the money, or how much, or for what, but it’s also how it’s given out. And that’s one of the reasons why I’m favorable towards giving a big chunk to the states. They’ll have a better idea of where the need is.

This chat was recorded just weeks before California Governor Newsom signed a massive and welcome multi-billion-dollar public fiber package into law in late July.

The conversation then went to questions from the audience, which tackled ways to kickstart competition in the ISP market, how to convince politicians to make an expensive fiber optic investment, and ultimately, what the role of government should be in an area in which they’ve (so far) failed. You can, and should, watch the entire Fireside Chat here. Whatever you take away from this wide-ranging discussion of open internet issues, we hope you’ll help us work towards Sohn’s vision of a world where “everyone’s connected to a future proof, fast and affordable—and open—internet.” This is a vision that EFF shares, and one that we believe can exist—if we fight for it.

Check out additional recaps of EFF's 30th anniversary conversation series, and don't miss our final program where we'll delve into the dawn of digital activism with EFF’s early leaders on July 28, 2021: EFF30 Fireside Chat: Founders Edition.

Jason Kelley

EFF Sues U.S. Postal Service For Records About Covert Social Media Spying Program

1 month 3 weeks ago
Service Looked Through People’s Posts Prior to Street Protests

Washington D.C.—The Electronic Frontier Foundation (EFF) filed a Freedom of Information Act (FOIA) lawsuit against the U.S. Postal Service and its inspection agency seeking records about a covert program to secretly comb through online posts of social media users before street protests, raising concerns about chilling the privacy and expressive activity of internet users.

Under an initiative called the Internet Covert Operations Program, analysts at the U.S. Postal Inspection Service (USPIS), the Postal Service’s law enforcement arm, sorted through massive amounts of data created by social media users to surveil what they were saying and sharing, according to media reports. Internet users’ posts on Facebook, Twitter, Parler, and Telegram were likely swept up in the surveillance program.

USPIS has not disclosed details about the program or any records responding to EFF’s FOIA request asking for information about the creation and operation of the surveillance initiative. In addition to those records, EFF is also seeking records on the program’s policies and analysis of the information collected, and communications with other federal agencies, including the Department of Homeland Security (DHS), about the use of social media content gathered under the program.

“We’re filing this FOIA lawsuit to shine a light on why and how the Postal Service is monitoring online speech. This lawsuit aims to protect the right to protest,” said Houston Davidson, EFF public interest legal fellow. “The government has never explained the legal justifications for this surveillance. We’re asking a court to order the USPIS to disclose details about this speech-monitoring program, which threatens constitutional guarantees of free expression and privacy.”

Media reports revealed that a government bulletin dated March 16 was distributed across DHS’s state-run security threat centers, alerting law enforcement agencies that USPIS analysts monitored “significant activity regarding planned protests occurring internationally and domestically on March 20, 2021.” Protests around the country were planned for that day, and locations and times were being shared on Parler, Telegram, Twitter, and Facebook, the bulletin said.

“Monitoring and gathering people’s social media activity chills and suppresses free expression,” said Aaron Mackey, EFF senior staff attorney. “People self-censor when they think their speech is being monitored and could be used to target them. A government effort to scour people’s social media accounts is a threat to our civil liberties.”

For the complaint:

For more on this case:

For more on social media surveillance:

Contact: Houston Davidson, Public Interest Legal Fellow; Aaron Mackey, Senior Staff Attorney
Karen Gullo

EFF, ACLU Urge Appeals Court to Revive Challenge to Los Angeles’ Collection of Scooter Location Data

1 month 4 weeks ago
Lower Court Improperly Dismissed Lawsuit Against Privacy-Invasive Data Collection Practice

San Francisco—The Electronic Frontier Foundation and the ACLU of Northern and Southern California today asked a federal appeals court to reinstate a lawsuit they filed on behalf of electric scooter riders challenging the constitutionality of Los Angeles’ highly privacy-invasive collection of detailed trip data and real-time locations and routes of scooters used by thousands of residents each day.

The Los Angeles Department of Transportation (LADOT) collects from operators of dockless vehicles like Lyft, Bird, and Lime information about every single scooter trip taken within city limits. It uses software it developed to gather location data through Global Positioning System (GPS) trackers on scooters. The system doesn’t capture riders’ identities directly, but it records their locations, routes, and destinations to within a few feet, data that can easily be used to reveal who they are.
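
How little it takes to re-identify someone from “anonymous” trip records can be shown with a minimal, purely hypothetical sketch: for any one scooter account, the most frequent trip origin is very likely near the rider’s home. The `likely_home` function, the coordinate rounding, and the sample data below are illustrative assumptions, not LADOT’s actual data format:

```python
from collections import Counter

def likely_home(trips, precision=3):
    """Given (origin, destination) coordinate pairs for one rider's trips,
    guess the rider's home: the most frequent origin, rounded to about
    three decimal places of latitude/longitude (~100 m at mid-latitudes)."""
    origins = Counter(
        (round(lat, precision), round(lon, precision))
        for (lat, lon), _destination in trips
    )
    # most_common(1) returns the single most frequent rounded origin
    return origins.most_common(1)[0][0]
```

Pair that inferred home block with a destination that recurs on weekday mornings and you have a home/work pair, which research on location privacy has repeatedly shown is enough to single out an individual.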

A lower court erred in dismissing the case, EFF and the ACLU said in a brief filed today in the U.S. Court of Appeals for the Ninth Circuit. The court incorrectly determined that the practice, unprecedented in both its invasiveness and scope, didn’t violate the Fourth Amendment. The court also abused its discretion, and failed in its duty to credit the plaintiff’s allegations as true, by dismissing the case without allowing the riders to amend the lawsuit to fix defects in the original complaint, as federal rules require.

“Location data can reveal detailed, sensitive, and private information about riders, such as where they live, who they work for, who their friends are, and when they visit a doctor or attend political demonstrations,” said EFF Surveillance Litigation Director Jennifer Lynch. “The lower court turned a blind eye to Fourth Amendment principles. And it ignored Supreme Court rulings establishing that, even when location data like scooter riders’ GPS coordinates are automatically transmitted to operators, riders are still entitled to privacy over the information because of the sensitivity of location data.”

The city has never presented a justification for this dragnet collection of location data, in this case or elsewhere, saying only that it is an “experiment” to develop policies for motorized scooter use. Yet the lower court decided on its own that the city needs the data, disregarding plaintiff Justin Sanchez’s statements that none of Los Angeles’ potential uses for the data necessitates collecting all riders’ granular and precise location information en masse.

“LADOT’s approach to regulating scooters is to collect as much location data as possible, and to ask questions later,” said Mohammad Tajsar, senior staff attorney at the ACLU of Southern California. “Instead of risking the civil rights of riders with this data grab, LADOT should get back to the basics: smart city planning, expanding poor and working people’s access to affordable transit, and tough regulation on the private sector.”

The lower court also incorrectly dismissed Sanchez’s claims that the data collection violates the California Electronic Communications Privacy Act (CalECPA), which prohibits the government from accessing electronic communications information without a warrant or other legal process. The court’s mangled and erroneous interpretation of CalECPA—that only courts that have issued or are in the process of issuing a warrant can decide whether the law is being violated—would, if allowed to stand, severely limit the ability of people subjected to warrantless collection of their data to ever sue the government.

“The Ninth Circuit should overturn dismissal of this case because the lower court made numerous errors in its handling of the lawsuit,” said Lynch. “The plaintiffs should be allowed to file an amended complaint and have a jury decide whether the city is violating riders’ privacy rights.”

For the brief:

Contact: Jennifer Lynch, Surveillance Litigation Director
Karen Gullo

Data Brokers are the Problem

1 month 4 weeks ago

Why should you care about data brokers? Reporting this week about a Substack publication outing a priest with location data from Grindr shows once again how easy it is for anyone to take advantage of data brokers’ stores to cause real harm.

This is not the first time Grindr has been in the spotlight for sharing user information with third-party data brokers. The Norwegian Consumer Council singled it out in its 2020 "Out of Control" report, specifically warning that the app’s data-mining practices could put users at serious risk in places where homosexuality is illegal, and the Norwegian Data Protection Authority fined Grindr earlier this year.

But Grindr is just one of countless apps engaging in this exact kind of data sharing. The real problem is the many data brokers and ad tech companies that amass and sell this sensitive data without anything resembling real consent from users.

Apps and data brokers claim they are only sharing so-called “anonymized” data. But that’s simply not possible. Data brokers sell rich profiles with more than enough information to link sensitive data to real people, even if the brokers don’t include a legal name. In particular, there’s no such thing as “anonymous” location data. Data points like one’s home or workplace are identifiers themselves, and a malicious observer can connect movements to these and other destinations. In this case, that includes gay bars and private residences.
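To see why home and workplace coordinates act as identifiers, consider this minimal sketch. All device IDs and coordinates here are hypothetical; it simply shows how an observer who already knows where a target lives and works can pick that person’s pseudonymous ID out of an “anonymized” trip dataset.

```python
# Illustrative sketch: "anonymized" trip records still identify people.
# All data below is hypothetical.
from math import hypot

# Pseudonymous trip records: (device_id, start, end), coordinates in km on a local grid.
trips = [
    ("device-7f3a", (0.00, 0.00), (4.10, 2.20)),  # home -> workplace
    ("device-7f3a", (4.10, 2.20), (0.02, 0.01)),  # workplace -> home
    ("device-b091", (9.50, 1.10), (3.30, 7.70)),  # someone else's trip
]

def near(a, b, radius_km=0.05):
    """True if two points are within ~50 meters of each other."""
    return hypot(a[0] - b[0], a[1] - b[1]) <= radius_km

def reidentify(trips, home, work):
    """Return device IDs whose trips link a known home to a known workplace."""
    return sorted({
        device for device, start, end in trips
        if (near(start, home) and near(end, work))
        or (near(start, work) and near(end, home))
    })

# Knowing only two locations is enough to single out one "anonymous" device:
print(reidentify(trips, home=(0.0, 0.0), work=(4.1, 2.2)))
```

The point is that no name needs to appear anywhere in the dataset; the trip endpoints themselves do the identifying.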

Another piece of the puzzle is the ad ID, another so-called “anonymous” label that identifies a device. Apps share ad IDs with third parties, and an entire industry of “identity resolution” companies can readily link ad IDs to real people at scale.
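Because the same ad ID appears in many datasets, it works as a join key. The sketch below, using entirely hypothetical records and names, shows how an identity-resolution table turns “anonymous” app events into named ones with a single lookup:

```python
# Illustrative sketch: the ad ID as a join key across datasets.
# All records, IDs, and names below are hypothetical.

# Events an app shares with ad tech partners, keyed by the device's ad ID:
app_events = [
    {"ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d",
     "app": "dating-app", "lat": 38.90, "lon": -77.04},
]

# A broker's "identity resolution" table linking ad IDs to real-world profiles:
identity_graph = {
    "38400000-8cf0-11bd-b23e-10b96e40000d":
        {"name": "J. Doe", "email": "jdoe@example.com"},
}

def resolve(events, graph):
    """Attach a real-world identity to each 'anonymous' event via the shared ad ID."""
    return [
        {**event, **graph[event["ad_id"]]}
        for event in events
        if event["ad_id"] in graph
    ]

for record in resolve(app_events, identity_graph):
    print(record["name"], "used", record["app"], "at", (record["lat"], record["lon"]))
```

Each party along the chain can claim it never handled a name, yet the joined result is fully identified.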

All of this underlines just how harmful a collection of mundane-seeming data points can become in the wrong hands. We’ve said it before and we’ll say it again: metadata matters.

That’s why the U.S. needs comprehensive data privacy regulation more than ever. This kind of abuse is not inevitable, and it must not become the norm.

Gennie Gebhart

Council of Europe’s Actions Belie its Pledges to Involve Civil Society in Development of Cross Border Police Powers Treaty

2 months ago

As the Council of Europe’s flawed cross border surveillance treaty moves through its final phases of approval, time is running out to ensure cross-border investigations occur with robust privacy and human rights safeguards in place. The innocuously named “Second Additional Protocol” to the Council of Europe’s (CoE) Cybercrime Convention seeks to set a new standard for law enforcement investigations—including those seeking access to user data—that cross international boundaries, and would grant a range of new international police powers. 

But the treaty’s drafting process has been deeply flawed, with civil society groups, defense attorneys, and even data protection regulators largely sidelined. We hope that the CoE's Parliamentary Assembly (PACE), which is next in line to review the draft Protocol, will give us the opportunity to present our privacy and human rights concerns, and will take them seriously as it formulates its opinion and recommendations before the CoE’s final body of approval, the Council of Ministers, decides the Protocol’s fate. According to the Terms of Reference for the preparation of the Draft Protocol, the Council of Ministers may consider inviting parties “other than member States of the Council of Europe to participate in this examination.”

The CoE relies on committees to generate the core draft of treaty texts. In this instance, the CoE’s Cybercrime Committee (T-CY) Plenary negotiated and drafted the Protocol’s text with the assistance of a drafting group consisting of representatives of State Parties. The process, however, has been fraught with problems. To begin with, T-CY’s Terms of Reference for the drafting process drove a lengthy, non-inclusive procedure that relied on closed sessions (Article 4.3, T-CY Rules of Procedure). While the Terms of Reference allow the T-CY to invite individual subject matter experts on an ad hoc basis, key voices such as data protection authorities, civil society experts, and criminal defense lawyers were mostly sidelined. Instead, the process has been largely commandeered by law enforcement, prosecutors, and public safety officials (see here, and here).

Earlier in the process, in April 2018, EFF, CIPPIC, EDRi, and 90 civil society organizations from across the globe requested that the CoE Secretariat General provide more transparency and meaningful civil society participation as the treaty was being negotiated and drafted, and not just during the CoE’s annual and somewhat exclusive Octopus Conferences. However, since T-CY began its consultation process in July 2018, input from external stakeholders has been limited to Octopus Conference participation and some written comments. Civil society organizations were not included in the plenary groups and subgroups where text development actually occurs, nor was our input meaningfully incorporated.

Compounding matters, the T-CY’s final online consultation, where the near-final draft text of the Protocol was first presented to external stakeholders, provided only a 2.5-week window for input. The draft text included many new and complex provisions, including the Protocol’s core privacy safeguards, but excluded key elements such as the explanatory text that would normally accompany those safeguards. As civil society, privacy regulators, and even the CoE’s own data protection committee flagged, two and a half weeks is not enough time to provide meaningful feedback on such a complex international treaty. More than anything, this short consultation window gave the impression that T-CY’s external consultations were merely performative.

Despite these myriad shortcomings, the Council of Ministers (the CoE’s final statutory decision-making body, comprising member States’ Foreign Affairs Ministers) responded to our process concerns by arguing that external stakeholders had been consulted during the Protocol’s drafting process. Even more oddly, the Council of Ministers justified the demonstrably curtailed final consultation period by invoking its desire to complete the Protocol by the 20th anniversary of the CoE’s Budapest Cybercrime Convention (that is, by November 2021).

With respect, we disagree with the Ministers’ response. If T-CY wished to meet its November 2021 deadline, it had many options open to it. For instance, it could have included external stakeholders from civil society and from privacy regulators in its drafting process, as it had been urged to do on multiple occasions.

More importantly, this is a complex treaty with wide ranging implications for privacy and human rights in countries across the world. It is important to get it right, and ensure that concerns from civil society and privacy regulators are taken seriously and directly incorporated into the text. Unfortunately, as the text stands, it raises many substantive problems, including the lack of systematic judicial oversight in cross-border investigations and the adoption of intrusive identification powers that pose a direct threat to online anonymity. The Protocol also undermines key data protection safeguards relating to data transfers housed in central instruments like the European Union’s Law Enforcement Directive and the General Data Protection Regulation. 

The Protocol now stands with the CoE’s PACE, which will issue an opinion on the Protocol and might recommend some additional changes to its substantive elements. It will then fall to the CoE’s Council of Ministers to decide whether to accept any of PACE’s recommendations and adopt the Protocol, a step which we still anticipate will occur in November. Together with CIPPIC, EDRi, Derechos Digitales, and NGOs around the world, we hope that PACE takes our concerns seriously, and that the Council produces a treaty that puts privacy and human rights first.

Karen Gullo
EFF's Deeplinks Blog: Noteworthy news from around the internet