Visa Wants to Buy Plaid, and With It, Transaction Data for Millions of People

4 days 20 hours ago

Visa, the credit card network, is trying to buy financial technology company Plaid for $5.3 billion. The merger is bad for a number of reasons. First and foremost, it would allow a giant company with a controlling market share and a history of anticompetitive practices to snap up its fast-growing competition in the market for payment apps. But Plaid is more than a potential disruptor: it’s also sitting on a massive amount of financial data acquired through questionable means. By buying Plaid, Visa is buying all of its data. And Plaid’s users—even those protected by California’s new privacy law—can’t do anything about it.

Since mergers and acquisitions often fall outside the purview of privacy laws, only a pointed intervention by government authorities can stop the sale. Thankfully, this month, the US Department of Justice filed a lawsuit to do just that. This merger is about more than just competition in the financial technology (fintech) space; it’s about the exploitation of sensitive data from hundreds of millions of people. Courts should stop the merger to protect both competition and privacy.

Visa's Monopolistic Hedge

The Department of Justice lawsuit outlines a very simple motive for the acquisition. Visa, it says, already controls around 70% of the digital debit card payment market, from which it earned approximately $2 billion last year. (Mastercard, at 25% market share, is Visa’s only significant competitor.) Thanks to network effects with merchants and consumers, plus exclusivity clauses in its agreements with banks, Visa is comfortably insulated from threats by traditional competitors. But apps like Venmo have started—just barely—to eat away at the digital transaction market. And Plaid sits at the center of that new wave, providing the infrastructure that Venmo and hundreds of other apps use to send money around the world.

According to the DoJ, a Visa executive predicted that Plaid would undercut its debit card processing business eventually, and that buying Plaid would be an “insurance policy” to protect Visa’s dominant market share. The lawsuit alleges that Plaid already had plans to leverage its relationships with banks and consumers to launch a new debit service. Seen through this lens, the acquisition is a simple preemptive strike against an emerging threat in one of Visa’s core markets. Challenging the purchase of a smaller company by a giant one, under the theory that the purchase eliminates future competition rather than creating a monopoly in the short term, is a strong step for the DoJ, and one we hope to see repeated in technology markets.

But users’ interest in the Visa-Plaid merger should extend beyond fears of market concentration. Both companies are deeply involved in the collection and monetization of personal data. And as the DoJ’s lawsuit underscores, “Acquiring Plaid would also give Visa access to Plaid’s enormous trove of consumer data, including real-time sensitive information about merchants and Visa’s rivals.”

Plaid, Yodlee, and the sorry state of fintech privacy

Plaid is what’s known as a “data aggregator” in the fintech space. It provides the infrastructure that connects banks to financial apps like Venmo and Coinbase, and its customers are usually apps that need programmatic access to a bank account.

It works like this: first, an app developer installs code from Plaid. When a user downloads the app, Plaid asks the user for their bank credentials, then logs in on their behalf. Plaid then has access to all the information the bank would normally share with the user, including balances, assets, transaction history, and debt. It collects data from the bank and passes it along to the app developer. From then on, the app can use Plaid’s services to initiate electronic transfers to and from the bank account, or to collect new information about the user’s activity.
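To make that flow concrete, here is a minimal sketch of what the app developer's server-side half of the exchange can look like, based on Plaid's public API documentation. It is illustrative only: the endpoint paths follow Plaid's documented sandbox API, the credentials are placeholders, and the exact request and response fields vary across API versions.

```python
# Illustrative sketch of a Plaid-style integration (not production code).
# Endpoints follow Plaid's public docs; credentials are placeholders and
# field names may differ between API versions.
import requests

PLAID_HOST = "https://sandbox.plaid.com"  # production apps use a different host
CLIENT_ID = "your-plaid-client-id"        # placeholder
SECRET = "your-plaid-secret"              # placeholder


def exchange_public_token(public_token: str) -> str:
    """Trade the short-lived token returned by the Link sign-in widget for a
    long-lived access token tied to the user's bank account."""
    resp = requests.post(f"{PLAID_HOST}/item/public_token/exchange", json={
        "client_id": CLIENT_ID,
        "secret": SECRET,
        "public_token": public_token,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]


def fetch_transactions(access_token: str, start_date: str, end_date: str) -> list:
    """Pull the user's transaction history for a date range. Both Plaid and
    the app developer see everything returned here."""
    resp = requests.post(f"{PLAID_HOST}/transactions/get", json={
        "client_id": CLIENT_ID,
        "secret": SECRET,
        "access_token": access_token,
        "start_date": start_date,  # e.g. "2020-01-01"
        "end_date": end_date,      # e.g. "2020-11-30"
    })
    resp.raise_for_status()
    return resp.json()["transactions"]
```

The key point is that once the app holds that long-lived access token, both it and Plaid can keep pulling fresh account data without the user entering anything again.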

In a shadowy industry, Plaid has tried to cultivate a reputation as the “trustworthy” data aggregator. Envestnet/Yodlee, a direct competitor, has long sold consumer behavior data to marketers and hedge funds. The company claims the data are “anonymous,” but reporters have discovered that that’s not always the case. And Finicity, another financial data aggregator, uses its access to moonlight as a credit reporting agency. A glance at data broker listings shows a thriving marketplace for individually-identified transactions data, with dozens of sellers and untold numbers of buyers. But Plaid is adamant that it doesn’t sell or monetize user data beyond its core business proposition. Until recently, Plaid has often been mentioned alongside Yodlee in order to contrast the two companies’ approaches, when it’s been mentioned at all.

Now, in the wake of the Visa announcement, two new lawsuits (Cottle et al v. Plaid Inc and Evans v. Plaid Inc) claim that Plaid has exploited users all along. Chief among the accusations is that Plaid’s interface misleads users into sharing their bank passwords with the company, a practice that plaintiffs allege runs afoul of California’s anti-phishing law. The lawsuits also claim that Plaid collected much more data than was necessary, deceived users about what it was doing, and made money by selling that data back to the apps which used it.

EFF is not involved in either lawsuit against Visa/Plaid, nor are we taking any position on the validity of the legal claims. We’re not privy to any information that hasn’t been reported publicly. But many of the facts presented by the lawsuits are relatively straightforward, and can be verified with Plaid’s own documentation. For example, at the time of writing, https://plaid.com/demo/ still hosts an example sign-in flow with Plaid. Plaid does not dispute that it collects users’ real bank credentials in order to log in on their behalf. You can see for yourself what that looks like: the interface puts the bank’s logo front and center, and looks for all the world like a secure OAuth page. Ask yourself whether, seeing this for the first time, you’d really understand who’s getting what information.

Who’s getting your credentials? Not just Citi.

Many users might not realize the scope of the data that Plaid receives. Plaid’s Transactions API gives both Plaid and app developers access to a user’s entire transaction and balance history, including a geolocation and category for each purchase made. Plaid’s other APIs grant access to users’ liabilities, including credit card debt and student loans; their investments, including individual stocks and bonds; and identity information, including name, address, email, and phone number.
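For a sense of what that means in practice, here is an abridged sketch of a single transaction record of the kind the Transactions API returns. The field names follow Plaid's public documentation, the values are invented, and the exact schema varies by API version.

```python
# Abridged, invented example of one transaction record as exposed by a
# Plaid-style Transactions API (field names per Plaid's public docs; the
# exact schema varies by version).
example_transaction = {
    "transaction_id": "gE4rg9...",
    "account_id": "vZqB3k...",
    "amount": 4.33,
    "iso_currency_code": "USD",
    "date": "2020-11-20",
    "name": "STARBUCKS STORE 0123",
    "merchant_name": "Starbucks",
    "category": ["Food and Drink", "Restaurants", "Coffee Shop"],
    "location": {               # per-purchase geolocation
        "city": "San Francisco",
        "region": "CA",
        "lat": 37.77,
        "lon": -122.42,
    },
    "pending": False,
}
```

Multiply one record like this by every purchase in a multi-year history, across every linked account, and the sensitivity of the aggregate becomes clear.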

A screenshot from Plaid’s demo. What, exactly, does “link” mean?

For some products, Plaid’s demo will throw up a dialog box asking users to “Allow” the app to access certain kinds of data. (It doesn’t explain that Plaid will have access as well.) When we tested it, access to the “transactions,” “auth,” “identity,” and “investments” products didn’t trigger any prompts beyond the default “X uses Plaid to link to your bank” screen. It’s unclear how users are supposed to know what information an app will actually get, much less what they’ll do with it. And once a user enters their password, the data starts flowing.

Users can view the data they’re sharing through Plaid, and revoke access, after creating an account at my.plaid.com. This tool, which was apparently introduced in mid-2018 (after GDPR went into effect in Europe), is useful—for users who know where to look. But nothing in the standard “sign in with Plaid” flow directs users to the tool, or even lets them know it exists.

On the whole, it’s clear that Plaid was using questionable design practices to “nudge” people into sharing sensitive information.

What’s in it for Visa?

Whatever Plaid has been doing with its data until now, things are about to change.

Plaid is a hot fintech startup, but Visa thinks it can squeeze more out of Plaid than the company is making on its own. Visa is paying approximately 50 times Plaid’s annual revenue to acquire the company—a “very steep” sum by traditional metrics.

A huge part of Plaid’s value is its data. Like a canal on a major trade route, it sits at a key point between users and their banks, observing and directing flows of personal information both into and out of the financial system. Plaid currently makes money by charging apps for access to its system, like levying tariffs on those who pass through its port. But Visa is positioned to do much more.

For one, Visa already runs a targeted-advertising wing using customer transaction data, and thus has a straightforward way to monetize Plaid’s data stream. Visa aggregates transaction data from its own customers to create “audiences” based on their behavior, which it sells to marketers. It offers over two hundred pre-configured categories of users, including “recently engaged,” “international traveler - Mexico,” and “likely to have recently shifted spend from gasoline to public transportation services.” It also lets clients create custom audiences based on what people bought, where they bought it, and how much they spent.

Source: https://web.archive.org/web/20201125173340/https://usa.visa.com/dam/VCOM/global/run-your-business/documents/vsa218-08-visa-catalog-digital-may-2019-v2.pdf

Plaid’s wealth of transaction, liability, and identity information is good for more than selling ads. It can also be used to build financial profiles for credit underwriting, an obviously attractive application for credit-card magnate Visa, and to perform “identity matching” and other useful services for advertisers and lenders. Documents uncovered by the DoJ show that Visa is well aware of the value in Plaid’s data.

Illustration by a Visa executive of Plaid’s untapped potential, included in Department of Justice filings. The executive “analogized Plaid to an island ‘volcano’ whose current capabilities are just ‘the tip showing above the water’ and warned that ‘what lies beneath, though, is a massive opportunity – one that threatens Visa.’” Note “identity matching,” “credit decisioning,” and “advertising and marketing”—all data-based businesses.

Through Plaid, Visa is about to acquire transaction data from millions of users of its competitors: banks, other credit and debit cards, and fintech apps. As TechCrunch has reported, “Buying Plaid is insurance against disruption for Visa, and also a way to know who to buy.” The DoJ went deeper into the data grab’s anticompetitive effects: “With this insight into which fintechs are more likely to develop competitive alternative payments methods, Visa could take steps to partner with, buy out, or otherwise disadvantage these up-and-coming competitors,” positioning Visa to “insulate itself from competition.”

The Data-Sale Loophole

The California Privacy Rights Act, which amends the California Consumer Privacy Act (CCPA), was passed by California voters in early November. It’s the strongest law of its kind in the U.S., and it gives people a general right to opt out of the sale of their data. In addition, the Gramm-Leach-Bliley Act (GLBA), a federal law regulating financial institutions, allows Americans to tell financial institutions not to share their personal financial information. Since the CPRA exempts businesses which are already subject to GLBA, it’s not clear which of the two governs Plaid. But neither law restricts the transfer of data during a merger or acquisition. Plaid’s own privacy policy claims, loudly and clearly, that “We do not sell or rent personal information that we collect.” But elsewhere in the same section, Plaid admits it may share data “in connection with a change in ownership or control of all or a part of our business (such as a merger, acquisition, reorganization, or bankruptcy).” In other words, the data was always for sale under one condition: you had to buy everything.

That’s what Visa is doing. It’s acquiring everything Plaid has ever collected and—more importantly—access to data flows from everyone who uses a Plaid-connected app. It can monetize the data in ways Plaid never could. And the move completely side-steps restrictions on old-fashioned data sales.

Stop the Merger

It’s easy to draw parallels from the Visa/Plaid deal to other recent mergers. Some, like Facebook buying Instagram or Google buying YouTube, gave large companies footholds in new or emerging markets. Others, like Facebook’s purchase of Onavo, gave them data they could use to surveil both users and competitors. Still others, like Google’s acquisitions of DoubleClick and Fitbit, gave them abundant new inflows of personal information that they could fold into their existing databases. Visa’s acquisition of Plaid does all three.

The DoJ’s lawsuit argues that the acquisition would “unlawfully maintain Visa’s monopoly” and “unlawfully extend [Visa’s] advantage” in the U.S. online debit market, violating both the Clayton and Sherman antitrust acts. The courts should block Visa from buying up a nascent competitor and torrents of questionably-acquired data in one move.

Beyond this specific case, Congress should take a hard look at the trend of data-grab mergers taking place across the industry. New privacy laws often regulate the sharing or sale of data across company boundaries. That’s great as far as it goes—but it’s completely sidestepped by mergers and acquisitions. Visa, Google, and Facebook don’t need to buy water by the bucket; they can just buy the well. Moreover, analysts predict that this deal, if allowed to go through, could set off a spree of other fintech acquisitions. It may have already begun: just months after Visa announced its intention to buy Plaid, Mastercard (Visa’s rival in the debit duopoly) began the process of acquiring Plaid competitor Finicity. It’s long past time for better merger review and meaningful, enforceable restrictions on how companies can use our personal information.

Bennett Cyphers

Let’s Stand Up for Home Hacking and Repair

5 days 14 hours ago

Let’s tell the Copyright Office that it’s not a crime to modify or repair your own devices.

Every three years, the Copyright Office holds a rulemaking process where it grants the public permission to bypass digital locks for lawful purposes. In 2018, the Office expanded existing protections for jailbreaking and modifying your own devices to include voice-activated home assistants like Amazon Echo and Google Home, but fell far short of the broad allowance for all computerized devices that we’d asked for. So we’re asking for a similar exemption, but we need your input to make the best case possible: if you use a device with onboard software and DRM keeps you from repairing that device or modifying the software to suit your purposes, see below for information about how to tell us your story.

DMCA 1201: The Law That Launched a Thousand Crappy Products

Why is it illegal to modify or repair your own devices in the first place? It’s a long story. Congress passed the Digital Millennium Copyright Act in 1998. That’s the law that created the infamous “notice-and-takedown” process for allegations of copyright infringement on websites and social media platforms. The DMCA also included the less-known Section 1201, which created a new legal protection for DRM—in short, any technical mechanism that makes it harder for people to access or modify a copyrighted work. The DMCA makes it unlawful to bypass certain types of DRM unless you’re working within one of the exceptions granted by the Copyright Office.

The technology landscape was very different in 1998. At the time, when most people thought of DRM, they were thinking of things like copy protection on DVDs or other traditional media. Some of the most dangerous abuses of DRM today come in manufacturers’ use of it to limit how customers use their products—farmers being unable to repair their own tractors, or printer manufacturers trying to restrict users from buying third-party ink.

When the DMCA passed, manufacturers suddenly had a powerful tool for restricting how their customers used their products: build your product with DRM, and you can argue that it’s illegal for others to modify or repair it.

Section 1201 caught headlines recently when the RIAA attempted to use it to stop the distribution of youtube-dl, a tool that lets people download videos from YouTube and other user-uploaded video platforms. Fortunately, GitHub put the youtube-dl repository back up after EFF explained on behalf of youtube-dl’s developers that the tool doesn’t circumvent DRM.

[Embedded video: https://www.youtube.com/embed/ck7utXYcZng (served from youtube.com)]

Abuse of legal protections for DRM isn’t just a United States problem, either. Thanks to the way in which copyright law has been globalized through a series of trade agreements, much of the world has laws similar to DMCA 1201 on the books. That creates a worst-of-both-worlds scenario for countries that don’t have the safety valve of fair use to protect people’s free expression rights, or processes like the Copyright Office rulemaking to remove the legal doubt around bypassing DRM for lawful purposes. The rulemaking process is deeply flawed, but it’s better than nothing.

Let’s Tell the Copyright Office: Home Hacking Is Not a Crime

Which brings us back to this year’s Copyright Office rulemaking. We’re asking the Copyright Office to grant a broad exemption that people can take advantage of in modifying and repairing all software-enabled devices for their own use.

If you have a story about how:

  • someone in the United States;
  • attempted or planned to modify, repair, or diagnose a product with a software component; and
  • encountered a technological protection measure (including DRM or digital rights management—any form of software security measure that restricts access to the underlying software code, such as encryption, password protection, or authentication requirements) that prevented completing the modification, repair, or diagnosis (or had to be circumvented to do so)

—we want to hear from you! Please email us at RightToMod-2021@lists.eff.org with the information listed below, and we’ll curate the stories we receive so we can present the most relevant ones alongside our arguments to the Copyright Office. The comments we submit to the Copyright Office will become a matter of public record, but we will not include your name if you do not wish to be identified by us. Submissions should include the following information:

  1. The product you (or someone else) wanted to modify, repair, or diagnose, including brand and model name/number if available.
  2. What you wanted to do and why.
  3. How a TPM interfered with your project, including a description of the TPM.
    • What did the TPM restrict access to?
    • What did the TPM block you from doing? How?
    • If you know, what would be required to get around the TPM? Is there another way you could accomplish your goal without doing this?
  4. Optional: Links to relevant articles, blog posts, etc.
  5. Whether we may identify you in our public comments, and your name and town of residence if so. We will treat all submissions as anonymous unless you expressly give us permission to identify you.
Elliot Harmon

Victory! Court Protects Anonymity of Security Researchers Who Reported Apparent Communications Between Russian Bank and Trump Organization

5 days 15 hours ago

Security researchers who reported observing Internet communications between the Russian financial firm Alfa Bank and the Trump Organization in 2016 can remain anonymous, an Indiana trial court ruled last week.

The ruling protects the First Amendment anonymous speech rights of the researchers, whose analysis prompted significant media attention and debate in 2016 about the meaning of digital records that reportedly showed computer servers linked to the Moscow-based bank and the Trump Organization in communication.

In response to these reports, Alfa Bank filed a lawsuit in Florida state court alleging that unidentified individuals illegally fabricated the connections between the servers. Importantly, Alfa Bank’s lawsuit asserts that the alleged bad actors who fabricated the servers’ communications are different people than the anonymous security researchers who discovered the servers’ communications and reported their observations to journalists and academics.

Yet that distinction did not stop Alfa Bank from seeking the security researchers’ identities through a subpoena issued to Indiana University Professor L. Jean Camp, who had contacts with at least one of the security researchers and helped make their findings public. 

Prof. Camp filed a motion to quash the subpoena. EFF filed a friend-of-the-court brief in support of the motion to ensure the court understood that the security researchers had the right to speak anonymously under both the First Amendment and Indiana’s state constitution.

The brief argues: 

By sharing their observations anonymously, the researchers were able to contribute to the electorate’s understanding of a matter of extraordinary public concern, while protecting their reputations, families, and livelihoods from potential retaliation. That is exactly the freedom that the First Amendment seeks to safeguard by protecting the right to anonymous speech.

It’s not unusual for companies embarrassed by security researchers’ findings to attempt to retaliate against them, which is what Alfa Bank tried to do. That’s why EFF’s brief also asked the court to recognize that Alfa Bank’s subpoena was a pretext:

[T]he true motive of the litigation and the instant subpoena is to retaliate against the anonymous computer security researchers for speaking out. In seeking to impose consequences on these speakers, Alfa Bank is violating their First Amendment rights to speak anonymously.

In rejecting Alfa Bank’s subpoena, the Indiana court ruled that the information Alfa Bank sought to identify the security researchers “is protected speech under Indiana law” and that the bank had failed to meet the high bar required to justify the disclosure of the individuals’ identities.

EFF is grateful that the court protected the identities of the anonymous researchers and rejected Alfa Bank’s subpoena. We would also like to thank our co-counsel Colleen M. Newbill, Joseph A. Tomain, and D. Michael Allen of Mallor Grodner LLP for their help on the brief.

Aaron Mackey

Podcast Episode: Control Over Users, Competitors, and Critics

5 days 22 hours ago
Episode 004 of EFF’s How to Fix the Internet

Cory Doctorow joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss how large, established tech companies like Apple, Google, and Facebook can block interoperability in order to squelch competition and control their users, and how we can fix this by taking away big companies' legal right to block new tools that connect to their platforms – tools that would let users control their digital lives.

In this episode you’ll learn about:

  • How the power to leave a platform is one of the most fundamental checks users have on abusive practices by tech companies—and how tech companies have made it harder for their users to leave their services while still participating in our increasingly digital society;
  • How the lack of interoperability in modern tech platforms is often a set of technical choices that are backed by a legal infrastructure for enforcement, including the Digital Millennium Copyright Act (DMCA) and the Computer Fraud and Abuse Act (CFAA). This means that attempting to overcome interoperability barriers can come with legal risks as well as financial risks, making it especially unlikely for new entrants to attempt interoperating with existing technology;
  • How online platforms block interoperability in order to silence their critics, which can have real free speech implications;
  • The “kill zone” that exists around existing tech products, where investors will not back tech startups challenging existing tech monopolies, and even startups that can get a foothold may find themselves bought out by companies like Facebook and Google;
  • How we can fix it: The role of “competitive compatibility,” also known as “adversarial interoperability,” in reviving stagnant tech marketplaces;
  • How we can fix it by amending or interpreting the DMCA, CFAA, and contract law to support interoperability rather than threaten it; and
  • How we can fix it by supporting the role of free and open source communities as champions of interoperability and offering alternatives to existing technical giants.

Cory Doctorow (craphound.com) is a science fiction author, activist and journalist. He is the author of many books, most recently ATTACK SURFACE, RADICALIZED, and WALKAWAY, science fiction for adults; IN REAL LIFE, a graphic novel; INFORMATION DOESN’T WANT TO BE FREE, a book about earning a living in the Internet age; and HOMELAND, a YA sequel to LITTLE BROTHER. His latest book is POESY THE MONSTER SLAYER, a picture book for young readers.

Cory maintains a daily blog at Pluralistic.net. He works for the Electronic Frontier Foundation, is an MIT Media Lab Research Affiliate, is a Visiting Professor of Computer Science at the Open University, a Visiting Professor of Practice at the University of North Carolina’s School of Library and Information Science, and co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles. You can find Cory on Twitter at @doctorow.

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify, or your podcast player of choice. You can also find the MP3 of this episode on the Internet Archive. If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

Resources

Anti-Competitive Laws

Anti-Competitive Practices 

Lawsuits Against Anti-Competitive Practices

Competitive Compatibility/Adversarial Interoperability & The Path Forward

State Abuses of Lack of Interoperability

Other

Transcript of Episode 004: Control Over Users, Competitors, and Critics

Danny O'Brien:
Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now, problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of internet law.

Cindy Cohn:
Hello, everyone. I'm Cindy Cohn, I'm the executive director of the Electronic Frontier Foundation. And for our purposes today, I'm also a lawyer.

Danny O'Brien:
And I'm Danny O'Brien, and I work at the EFF too, and I could only dream of going to law school. So, this episode has its roots in a long and ongoing discussion that we have at EFF about competition in tech, or rather, the complete lack of it these days. I think there's a growing consensus that big tech--Facebook, Google, Amazon, you can make your own list at home--have come to dominate the net and tech more widely and really not in a good way. They stand these days as potentially impregnable monopolies and there doesn't seem much consensus on how to best fix that.

Cindy Cohn:
Yeah. This problem affects innovation, which is a core EFF value, but it also impacts free speech and privacy. The lack of competition has policymakers pushing companies to censor us more and more, which, as we know, despite a few high-profile exceptions, disproportionately impacts marginalized voices, especially around the world.

Cindy Cohn:
And critically, way too many of these companies have privacy-invasive business models. At this point, I like to say that Facebook doesn't have users, it has hostages. So, addressing competition empowers users, and today we're going to focus on one of the ways that we can reintroduce competition into our world. And that's interoperability. Now this is largely a technical approach, but as you'll hear, it can work in tandem with legal strategies, and it needs some legal support right now to bring it back to life.

Danny O'Brien:
Interoperability is going to be useful because it accelerates innovation, and right now, the cycle of innovation just seems to be completely stuck. I mean, this may make me sound old, but I do remember when the pre-Facebook and the pre-Google quasi-monopolies just popped up, but grew, lived gloriously, and then died and shriveled like dragonflies.

Danny O'Brien:
We had Friendster, then Myspace, we had Yahoo and Alta Vista, and then they moved away. Nothing seems to be shifting this new generation of oligopolies, in the marketplace at least. I know lawsuits and antitrust investigations take a long time. We think at EFF that there's a way of speeding things up so we can break these down as quickly as their predecessors.

Cindy Cohn:
Yep. And that's what's so good about talking this through with our friend, Cory Doctorow. He comes at this from a deeply technological, economic, and historical perspective, and especially a historical perspective on how we got here in terms of our technology and law.

Cindy Cohn:
Now, I tend to think of it as a legal perspective, because I'm a litigator--I think, what doctrines are getting in the way? How can we address them? And how can we get the legal doctrines out of the way? But Danny, if I may, I had some personal experience here too. I bought an HP printer a while back, and because I wouldn't sign up for their ink delivery service, the darn thing just bricked. It wouldn't let me use anybody else's ink, and ultimately, it just stopped working entirely.

Danny O'Brien:
Interoperability is the ability for other parties to connect and build upon existing hardware and software without asking for permission, or begging for authorization, or being thrown out if they don't follow all the rules. So, in your printer's case, Cindy--and I love how when your printer doesn't work, you recognize it as an indictment of our zaibatsu control prison, rather than me who just thinks I failed to install the right driver. But in your case, your printer maker, Hewlett-Packard, was building an ecosystem that only allowed other Hewlett-Packard products to connect with it.

Danny O'Brien:
There's no reason why third-party ink couldn't work in HP, except that the printer has code in it that specifically rejects cartridges, not based on whether they work or not, but whether they come from the parent company or not. And there's a legal infrastructure around that too. It's much harder for third-party companies to interoperate with Hewlett-Packard printers, simply because there's so much legal risk about doing so.

Danny O'Brien:
This is the sort of thing that Cory excels at explaining, and I'm so glad we managed to grab him between, oh my god, all the million things he does. For those of you who don't know him, Cory works as a special advisor to EFF, but he's also a best-selling science fiction author. He has his own daily newsletter, at pluralistic.net, and a podcast of his own at craphound.com/podcast.

Danny O'Brien:
We caught him between publicizing his new kid's book Poesy the Monster Slayer, and promoting his new sequel to his classic "Little Brother" called "Attack Surface". And also curing world hunger, I'm pretty sure.

Cindy Cohn:
Hey, Cory.

Cory Doctorow:
It's always a pleasure to talk to you, and it's an honor to be on the EFF podcast.

Cindy Cohn:
So, let's get to it. What is interoperability? And why do we need to fix it?

Cory Doctorow:
Well, I like to start with an interoperability view that's pretty broad, right? Let's start with the fact that the company that sells you your shoes doesn't get to tell you whose socks you can wear, or that the company that makes your breakfast cereal doesn't get to tell you which dairy you have to go to. And that stuff is ... We just take it for granted, but it's a really important bedrock principle, and we see what happens when people lose interoperability: they also lose all agency and self-determination.

Cory Doctorow:
If you've ever heard those old stories about company mining towns where you were paid in company scrip that you could only spend at the company store, that was like non-interoperable money, right? The only way you could convert your company scrip into dollars would be to buy corn at the company store and take it down to the local moonshiner and hope he'd give you greenbacks, right?

Cory Doctorow:
And so, to the extent that you can be stuck in someone else's walled garden, it can turn from a walled garden into a feedlot, where you become the fodder. And the tech industry has always had a weird relationship with interoperability. On the one hand, computers have this amazing interoperable characteristic just kind of built into them. The underlying idea of things like von Neumann architectures, and Turing completeness really says that all computers can run all programs, and that you can't really make a computer that just, like, only uses one app store.

Cory Doctorow:
Instead, what you have to do is make a computer that refuses to use other app stores. You know, that tablet or that console you have, it's perfectly capable of using any app store that you tell it to. It just won't let you, and there's a really important difference, right? Like, I can't use a kitchen mixer to apply mascara, because the kitchen mixer is totally unsuited to applying mascara and if I tried, I would maim myself. But you can install any app on any device, provided that the manufacturer doesn't take steps to stop you.

Cory Doctorow:
And while manufacturers--tech manufacturers especially--have for a long time tried to take measures to stop use so they could increase their profits, what really changed the world was the passage of a series of laws, laws that we're very familiar with at the EFF: the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and so on, that started to allow companies to actually make it illegal--both civilly and criminally--for you to take steps to add interoperability to the products that you use, and especially for rivals to take steps.

Cory Doctorow:
I often say that the goal of companies who want to block interoperability is to control their critics, their customers, and their competitors, so that you have to arrange your affairs to benefit their shareholders. And if you don't, you end up committing an offense that our friend Saurik from the Cydia Project calls a felony contempt of business model.

Cindy Cohn:
This is something we care about in general at the EFF, because we worry a lot about the pattern of innovation, but I think it also has spillover effects on censorship and on surveillance. And I know you've thought about that a little bit, Cory, and I'd love to kind of just bring those out, because I think that it's important ... I mean, we all care, I think, about having functioning tools that really work. But there are effects on our rights as well, and that kind of old-school definition of rights, like what's in the constitution.

Cory Doctorow:
Yeah. Well, a lot of people are trusting of the firms that handle their communications. And that's okay, right? You might really think that Tim Cook is always going to exercise his judgment wisely, or that Mark Zuckerberg is a truly benevolent dictator and so on. But one of the things that keeps firms honest when they regulate your communications is the possibility that you might take your business elsewhere. And when firms don't face that possibility, they have less of an incentive to put your needs ahead of the needs of their shareholders. Or sometimes there's a kind of bank shot shareholder interest where, say, a state comes in and says, "We demand that you do something that is harmful to your users." And you weigh in the balance how many users you'll lose if you do it, versus how much it's going to cost you to resist the state.

Cory Doctorow:
And the more users you lose in those circumstances, the more you're apt to decide that the profitable thing to do is to resist state incursions. And there's another really important dimension, which is a kind of invitation to mischief that arises when you lock your users up, which is that states observe the fact that you can control the conduct of your users. And they show up and they say, "Great, we have some things your users aren't allowed to do." And you are now deputized to ensure that they don't do it, because you gave yourself that capability.

Cory Doctorow:
So, the best example of this ... I don't mean to pick on Apple, but the best example of this is Apple in China, where Apple is very dependent on the Chinese market, not just to manufacture its devices, but to buy and use its devices. Certainly, with President Trump's TikTok order, a lot of people have noted that some of the real fallout is going to be for Apple if they can't do business with Chinese firms and have Chinese apps and so on. And the Chinese government showed up at Apple's door and said, "You have to block working VPNs from your app store. We need to be able to spy on everyone who uses an iPhone. And so, the easiest way for us to accomplish that is to just tell you to evict any VPN that doesn't have a backdoor for us."

Danny O'Brien:
Just to connect those two things together, Cory, so what you're saying here is that because Apple phones don't have ... Apple has sort of exclusive control over them, and you can't just install your own choice of program on the iPhone. That means that Apple is this sort of choke point that bad actors can use, because they've got all this control for themselves, and then they can be pressured to impose that control on their customers.

Cory Doctorow:
They installed it so they could extract a 30% vig from Epic and other independent software vendors. But the day at which a government would knock on their door and demand that they use the facility that they developed to lock-in users to a store, to also lock-in users to authoritarian software, that day was completely predictable. You don't have to be a science fiction writer to say, "Oh, well, if you have a capability and it will be useful to a totalitarian state, and you put yourself in reach of that totalitarian states authority, they will deputize you to be part of their authoritarian project."

Cindy Cohn:
Yeah. And that's local as well as international. I mean, the pressure for the big platforms to be censors, to decide to be the omnipotent and always-correct deciders of what people get to say, is very strong right now. And that's a double-edged sword. Sometimes that can work well when there are bad actors on that, but really, we know how power works. And once you empower somebody to be the censor, they're going to be beholden to everybody who comes along who's got power over them to censor the people they don't like.

Cindy Cohn:
And it also then, I think, feeds this surveillance business model, this business model where tracking everything you do and pay for, and trying to monetize that, gets fed by the fact that you can't leave.

Danny O'Brien:
I want to try and channel the ghost of Steve Jobs here and present the other argument that lots of companies give for locking down their systems, which is that it prevents other smaller bad actors, it prevents malware, it means that Apple can control... But by controlling all of these avenues, it can build a securer, more consumer-friendly tool.

Cory Doctorow:
Yeah. I hear that argument, and I think there's some merit to it. Certainly, like, I don't have either the technical chops or the patience and attention to do a full security audit of every app I install. So, I like the idea of deputizing someone to figure out whether or not I should install an app, I just want to choose that person. I had a call recently with one of our colleagues from EFF, Mitch, who said that argument is a bit like the argument about the Berlin Wall, where the former East German government claimed that the Berlin Wall wasn't there to keep people in who wanted out, it was to stop people from breaking into the worker's paradise.

Cory Doctorow:
And if Apple was demonstrably only blocking things that harmed users, one would expect that those users would just never tick the box that says, "Let me try something else." And indeed, if that box was there, it would be much less likely that the Chinese state would show up and say, "Give us a means to spy on all your users," because Apple could say, "I will give you that means, but you have to understand that as soon as that's well understood, everyone who wants to evade your surveillance just ticks the box that says, 'Let me get a VPN somewhere else.'"

Cory Doctorow:
And so, it actually gives Apple some power to resist it. In that way, it's a bit like the warrant canaries that we're very fond of, where you have these national security letters that firms cannot disclose when they get them. And so, firms as they launch a new product say, "The number of national security letters we have received in respect to this product is zero," and they reissue that on a regular basis. And then they remove that line if they get a national security letter.

Cory Doctorow:
Jessamyn West, the librarian, after the Patriot Act was passed, put a sign up in her library that said, "The FBI has not been here this week, watch for this sign to disappear," because she wasn't allowed to disclose that the FBI had been there, but she could take down the sign, and in the same way... And so, the idea here is that states are disincentivized to get up to this kind of mischief, where it relies on them keeping the existence of the mischief a secret, if that secrecy vanishes the instant they embark upon the mischief.

Cory Doctorow:
In the same way, if you have a lock-in model that disappears the instant you cease to act as a good proxy for your users' interests, then people who might want to force you to stop being a good proxy for your users' interest, have a different calculus that they make.

Cindy Cohn:
I just want to, sorry, put my lawyer hat on here. Warrant canaries are a really cute hack that are not likely to be something the FBI is just going to shrug its shoulders and say, "Oh, gosh, I guess you got us there, folks." So, I just, sorry...

Cory Doctorow:
Fair enough.

Cindy Cohn:
Sometimes I have to come in and actually make sure people aren't taking legal advice from Cory.

Danny O'Brien:
When we were kicking around ideas for the name of this podcast, one of them was, "This is not legal advice."

Cory Doctorow:
Well, okay, so instead, let's say binary transparency, where you just... automatically built into the app is a thing that just checks to see whether you got the same update as everyone else. And so that way, you can tell if you've been pushed to a different update from everyone else, and that's in the app when the app ships. And so, the only way to turn it off is to ship an update that turns it off, and if they only ship that to one user, it happens automatically. It's this idea of a Ulysses pact, where you take some step before you're under coercion, or before you're in a position of weakness, to protect you from a future moment. It's the equivalent of throwing away the Oreos when you go on a diet.

Cindy Cohn:
So, let's talk just a little bit more specifically about what are the things that we think are getting in the way of interoperability? And then, let's pivot to what we really want to do, which is fix it. So, what I've heard from you so far, Cory, is that we see law getting in the way, whether that's Section 1201 of the Digital Millennium Copyright Act, or the CFAA, or contract law--these kinds of tools that get used by companies to stop interoperability. What are some of the other things that get in the way of us having our dream future where everything plugs into everything else?

Cory Doctorow:
I'd say that there's two different mechanisms that are used to block interop and that they interact with each other. There's law and there's tech, right? So, we have these technical countermeasures: the sealed vaults, chips, the TPMs inside of our computers, and our phones, and our other devices, which are dual-use technologies that have some positive uses, but they can be used to block interop.

Cory Doctorow:
As we move to more of a software-as-a-service model where some key element of the process happens in the cloud, it gives firms that control that cloud a gateway where they can surveil how users are using them and try and head off people who are adding interoperability--what we call competitive compatibility--to a service; that's when you add a new interoperable feature without permission from the original manufacturer, and so on.

Cory Doctorow:
And those amount to a kind of cold war between different technologists working for different firms. So, on the one hand, you have companies trying to stop you from writing little scrapers that go into their website, and scrape their users' waiting inboxes, and put them in a rival service on behalf of those users. And on the other hand, you have the people who are writing the scrapers, and we haven't seen a lot of evidence about who would win that fight, at least if it were a fair fight, because of the law--because between the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a lot of other laws that kind of pop up as they are repurposed by firms with a lot of money to spend on legal entrepreneurship.

Danny O'Brien:
I want to drill down just a little bit with this because I loved your series that you wrote on competitive compatibility, which talked about the old age of the Internet, where we did have a far faster pace of innovation and the life and death of tech giants was far shorter, because they were kind of in this tooth and claw competitive mode, where ... I mean, just to plug an example, right? You would have, sort of, Facebook building on the contact lists that telephones and Google had by adversarially interoperating with them, right?

Danny O'Brien:
You would go to Facebook, and it would say, "Hey, tell us your friends." And it would be able to do that by connecting to their systems. Now, you can't do that with Facebook now, and you can't write an app that competes with Apple's software business, because neither of them will let you. And they're able to do that, I think what we're both saying, because... Not so much because of technical restrictions, but because of the laws that prevent you from doing that. You will get sued rather than out-innovated.

Cory Doctorow:
Well, yes. So, I think that's true. We don't know, right? I'm being careful here, because I have people who I trust as technologists who say, "No, it's really hard, they've got great engineers." I'm skeptical of that claim because we've had about a decade or more of companies being very afraid to try their hand at adversarial interoperability. And one of the things that we know is that well-capitalized firms can do a lot that firms that lack capital can't, and our investor friends tell us that what big tech has more than anything else is a kill zone--that even though Facebook, Apple, Google, and the other big firms have double-digit year-on-year growth with billions of dollars in clear profit every year, no one will invest in competitors of theirs.

Cory Doctorow:
So, I think that when technologists say, "Well, look, we beat our brains out on trying to write a bot that Facebook couldn't detect, or make an ad blocker that ... I don't know, the Washington Post couldn't stop or whatever, or write an app store and install it on iPhones and we couldn't do it." The part that they haven't tested is, well, what if an investor said, "Oh, I'm happy to get 10% of Facebook's total global profit, and I will capitalize you to reflect that expected return and let you spend that money on whatever it takes to do that"?

Cory Doctorow:
What if they didn't have the law on their side? What if they just had engineers versus engineers? But I want to get to this last piece, which is where all this law and these new legal interpretations come from, which is this legal entrepreneurship piece. So as I say, Facebook and its rivals, they have double-digit growth, billions of dollars in revenue every year, in profit, clear profit every year.

Cory Doctorow:
And some of that money is diverted to legal entrepreneurship. Instead of being sent to the shareholders, or being spent on engineering, or product design, it's spent on law. And that spend is only possible because there's just so much money sloshing around in those firms, and that spend is particularly effective, because they're all gunning for the same thing. They're a small number of firms that dominate the sector, and they have all used competitive compatibility to ascend to the top, and they are all committed to kicking away the ladder. And the thing that makes Oracle/Google so exceptional is that it's an instance in which the two major firms actually have divergent interests.

Cory Doctorow:
Far more often, we see their industry associations and the executives from the firms asking for the same things. And so, one of the things that we know about competition is when you lose competition, the firms that remain find it easier for collusion to emerge. They don't have to actually all sit down and say, "This is what we all want." It's just easy for them to end up in the same place. Think about the kinds of offers you get for mobile phone plans, right? It's not that the executives all sat down and cooked up what those plans would be, it's just that they copy each other, and they all end up in the same place. Or publishing contracts, or record contracts.

Cory Doctorow:
Any super-concentrated industry is going to have a unified vision for what it wants in its lobbying efforts, and it's going to have a lot of money to spend on it.

Cindy Cohn:
So, let's shift because our focus here is fixing it. And my journey in this podcast is to have a vision of what a better future would look like, what does the world look like if we get this right? Because at the EFF, we spend a lot of time articulating all the ways in which things are broken, and that's a fine and awesome thing to do, but we need to fix them.

Cindy Cohn:
So, Cory, what would the world look like if we fixed interoperability? Give us the vision of this world.

Cory Doctorow:
I had my big kind of "road to Damascus" moment about this, when I gave a talk for the 15th anniversary of the computer science program at the University of Waterloo. They call themselves the MIT of Canada. I'm a proud University of Waterloo dropout. And I went back to give this speech and all of these super bright computer scientists were in the audience, grad students, undergrads, professors, and after I talked about compatibility, and so on, someone said, "How do we convince everyone to stop using Facebook and start using something else?"

Cory Doctorow:
And I had this, just this moment where I was like, "Why would you think that that was how you will get rid of Facebook?" Like, "When was it ever the case that if you decided you wanted to get a new pair of shoes, you throw away all your socks?" Why wouldn't we just give people the tool to use Facebook at the same time as something else, until enough of their friends have moved to the something else, that they're ready to quit Facebook?

Cory Doctorow:
And that, to me is the core of the vision, right? That rather than having this model that's a bit like the model that my grandmother lived through--my grandmother was a Soviet refugee. So, she left the Soviet Union, cut off all contact, didn't speak to her mother for 15 years, and was completely separated from her, it was a big momentous decision to leave the Soviet Union. We leave that, right? Where we tell people, "You either use Twitter or use Mastodon, but you don't read Twitter through Mastodon and have a different experience, and have different moderation rules," and so on. You're just either on Mastodon or you're on Twitter, they're on either sides of the iron curtain.

Cory Doctorow:
And instead, we have an experienced a lot more like the one I had when I moved to Los Angeles from London five years ago, where we not only got to take along the appliances that we liked and just fit them with adapters, we also get to hang out with our family back home by video call and visit them when we want to, and so on--that you let people take the parts of the offer that they like and stick with them, and leave behind the parts they don't like and go to a competitor. And that competitor might be another firm, it might be a co-op, it might be a thing just started by a tinkerer in their garage, it might be a thing started by a bright kid in a Harvard dorm room the way that Zuck did with Facebook.

Cory Doctorow:
And when those companies do stuff that makes you angry or sad, you take the parts of their service that you like, and you go somewhere else where people will treat you better. And you remain in contact with the people, and the hardware, and the services that you still enjoy, and you block the parts that you don't. So, you have technological self-determination, you have agency, and companies have to fight to keep your business because you are not a hostage, you're a customer.

Cindy Cohn:
Yeah. I think that that's exactly it, and well put. I think that we've gotten used to this idea of, what we called back in the days about the Apple app store--I don't know why Apple keeps coming up, because they're only one of the actors we're concerned about--but we used to call it the crystal prison, right? You buy an Apple device, and then it's really hard to get out of the Apple universe. It used to be that it was hard to use Microsoft Word unless you used a Windows machine. But we managed to pressure, and some of that was antitrust litigation, but we managed to make pressure so that that didn't work.

Cindy Cohn:
We want browsers that can take you to anywhere on the web, not just the ones that have made deals with the browsers. We want ISPs that offer you the entire web, not just the parts that pay for it. It really is an extension of network neutrality, this idea that we as users get to go where we want and get to dictate the terms of how we go there, at least to the extent of being able to interoperate.

Cory Doctorow:
I mean apropos of Apple, I don't want to pick on them either, because Apple are fantastic champions of interoperability when it suits them, right? As you say, the document wars were won in part by the iWork suite, where Apple took some really talented engineers, reverse-engineered the gnarly, weird hairball that is Microsoft Office formats, and made backwards-compatible new office suites that were super innovative but could also save out to Word and Excel, even though you're writing in Numbers or Pages, and that part's great. And just like Amazon broke the DRM-on-music monopoly that Apple had when it launched the MP3 store, but now will not release its audiobooks from DRM through its Audible program.

Cory Doctorow:
Apple was really in favor of interoperability without permission when it came to document formats--it benefited--but doesn't like it when it comes to, say, rival app stores. Google is 100% all over interoperability when it comes to APIs, but not so much when it comes to the other areas where they enjoy lock-in. And I think that the lesson here is that we as users want interoperability irrespective of the effect that it has on a company's shareholders.

Cory Doctorow:
The companies have a much more instrumental view of interoperability: that they want interoperability when it benefits them, and they don't want it when it harms them. And I'm always reminded, Cindy, of the thing you used to say, when we were in the early days of the copyright wars around Napster. And we would talk to these lobbyists from the entertainment industry, and they would say, "Well, we are free speech organizations, that's where we cut our teeth." And you would say, "We know you love the First Amendment, we just wish you'd share."

Cindy Cohn:
Yeah, absolutely. Absolutely, and one of the things that we've done recently is, we started off talking about this as interoperability, then we called it adversarial interoperability to make it clear that you don't need to go on bended knee for permission. And recently, we started rebranding it to competitive compatibility. And Cory, you've used both of those terms in this conversation, and I just want to make sure our listeners get what we're doing. We're trying to really think about this. I mean, all of them are correct, but I think competitive compatibility, the reason we ended up there is not only is it fewer syllables, and we can call it ComCom, which we all like, but it's the idea that it's compatibility, it's competitive compatibility, it's being compatible with a competitive market-based environment, a place where users get to decide because people are competing for their interest and for their time.

Cindy Cohn:
And I really love the vision of ... This was the product that Power Ventures tried to put out: a service where it didn't matter whether you had a friend on LinkedIn, or on Orkut--an old tool--or on Facebook. You just knew you had a friend, and you just typed in, "Send the message to Cory," and the software figured out where you were connected with them and sent the message through it.

Cindy Cohn:
I mean, that's just the kind of thing that should be easy, but that isn't easy anymore, because everybody is stuck in this platform mentality, where you're in one crystal prison or another, and you might be able to switch from one to the other, but you can't take anything with you when you go.

Cindy Cohn:
The other tool that Power had that I thought was awesome was being able to look at all your social feeds in one screen. So, rather than switching in between them all the time, and trying to remember--which I spend a lot of time on right now, trying to remember whether I learned something on Twitter, or I learned it here on Facebook, or I learned it somewhere else--you have one interface, you have your own cockpit for your social media feeds, and you get to decide how it looks, instead of having to log into each of them separately and switch between them.

Cindy Cohn:
And those are just two ideas of how this world could serve you better. I think there are probably a dozen more. And I'd like for us to ... If there's other ones that you could think of like, how would my life be different if we fix this in ways that we can think about right now? And then of course, I think as with all tech and all innovation, the really cool things are going to be the things that we don't think about that show up anyway, because that's how it works.

Cory Doctorow:
Yeah, sure. I mean, a really good example right now is you'd be able to install Fortnite on your iPhone or your Android device, which is the thing you can't do as of the day that we record this. And again, that's what the app store lock-in is for, it's to take a bite out of Fortnite and other independent software vendors. But if you decided that you wanted to keep using the security system that your cable provider Comcast gave to you but then decided it wouldn't support anymore, which is the thing Comcast did last year, you could plug its cameras into a different control system.

Cory Doctorow:
If you decided that you liked the camera that Canary sent you and that you paid for, but you didn't like the fact that the video isn't end-to-end encrypted to your phone--Canary decrypts it in its data center so it can look at the video (and it does so for non-nefarious reasons: it wants to make sure it doesn't send you a motion alert just because your cat walked by the motion sensor)--you may decide that having a camera in your house that's always on, and that's sending video that third parties can look at, is not a thing you like. But you like the hardware, so you just plug that into something else too.

Danny O'Brien:
I love the Catalog of Missing Devices. This is the thing that Cory co-wrote, which is a list of devices we cannot see right now because of laws that prevent people from confidently innovating in this space. And I'll concede EFF's role in this, because we talk about it all the time, right? We're continuing to lobby, and also working in the courts on ways to challenge and redefine the legal environment here. But what's the message if you're someone who's an open-source developer, or an entrepreneur, or a user? What's going to move the needle? What's going to take us into this future? And what can individuals do?

Cory Doctorow:
So, this is an iterative process; there isn't a path from A to Z. There is a heuristic for how we climb the hill towards a better future. And the way to understand this, as I tried to get at with this sort of technology-and-law-and-monopoly framing, is that our current situation is the result of companies having these monopoly rents; laws being bad because companies got to spend those rents on them; companies being able to collude because their sectors are concentrated; and technology that works against users being in the marketplace.

Cory Doctorow:
And each one of those affects the other. So for example, say we had merger scrutiny, right? Say we said that firms were no longer allowed to buy nascent competitors, either to crush them or to acquire something they couldn't build internally, the way, as you say, Google has with most of its successful products. Really it's got search--and to a lesser extent Android, though that was mostly an acquisition--and ... what's the other one? Oh, and Gmail: those are the really successful in-house products. Maybe Google Photos, although that's probably just successful because every Android device ships with it. But if we just say Google can't buy Fitbit--Google, a company that has tried repeatedly and failed to make a wearable, isn't allowed to buy Fitbit in order to acquire that capability--then Google starts to lose some of its stranglehold on data, especially if you stop its rivals from buying Fitbit too. And that makes it weaker, so that it's harder for it to spend on legal entrepreneurship.

Cory Doctorow:
If we make devices that compete with Google, or tools that compete with Google--ad blockers, tracker blockers, and so on--then that also weakens them. And if they are weaker, they have fewer legal resources to deploy against these competitors as well. If we convince people that they can want more, right? If we can have a normative intervention, to say, "No one came down off a mount with two stone tablets saying, 'Only the company that made your car can fix it,' or 'Only the company that made your phone can fix it,'" and we get them to understand that the right to repair has been stolen from them, then, when laws are contemplated that either improve our right to repair or take away our right to repair, there's a constituency to fight those laws.

Cory Doctorow:
So the norms, and the markets, and the technology, and the law all work together with each other. And I'm not one of nature's drivers--I have very bad spatial sense--and when I moved to Los Angeles and became perforce a driver, I found myself spending a lot of time trying to parallel park. And the way that I parallel park is, I turn the wheel as far as I can and get a quarter of an inch of space, and then I turn it in the other direction and get another quarter of an inch of space. And I think as we try to climb this hill towards a more competitive market, we're going to have to see which direction we can pull in from moment to moment, to get a little bit more policy space that we can leverage to get a little bit more policy space.

Cory Doctorow:
And the four directions we can go in are: norms, conversations about what's right and wrong; laws, that tell you what's legal and not legal; markets, things that are available for sale; and tech, things that are technologically possible. And our listeners, our constituents, the people in Washington, the people in Brussels, they have different skill sets that they can use here, but everyone can do one of these things, right? If you're helping with the norm conversation, you are creating the market for the people who want to start the businesses, and you are creating the constituency for the lawmakers who want to make the laws.

Cory Doctorow:
So, everybody has a role to play in this fight, but there isn't a map from A to Z. That kind of plotting is for novelists, not political struggles.

Cindy Cohn:
I think this is so important. And I want to close with this, because I think it's true for almost all of the things that we're talking about fixing: the answer to "Should we do X or Y?" is "yes," right? We are, in some ways, the scrappy, underfunded side of almost every fight we're in around these kinds of things. And so, anybody who's going to force you to choose between strategies is undermining the whole cause.

Cindy Cohn:
These are multi-strategic questions. Should we break up big tech? Or should we create interoperability? The answer is, yes, we need to aim towards doing a bit of all of these things. There might be times when they conflict, but most of the time, they don't. And it's a false choice if somebody is telling you that you have to pick one strategy, and that's the only thing you can do.

Cindy Cohn:
Every big change that we have made has been a result of a whole bunch of different strategies, and you don't know which one is going to give way, which is going to pave the way faster. You just keep pushing on all of them. So, we're finally moving on the Fourth Amendment on privacy, and we're moving in the courts. We could also have passed a privacy law, but the legislation got stuck. We've got to do all of these things. They feed each other; they don't take away from each other if we do it right.

Cory Doctorow:
Yeah, yeah. And I want to close by just saying, EFF's 30 years old, which is amazing. I've been with the organization nearly 20 years, which is baffling, and the thing that I've learned on the way is that these are all questions of movements and not individuals. Like as an individual, the best thing you can do is join a movement, right? If you're worried about climate change, it doesn't really ... How well you recycle is way less important than what you do with your neighbors to change the way that we think about our relationship to the climate.

Cory Doctorow:
And if you're worried about our technological environment, then your individual tech choices do matter. But they don't matter nearly so much as the choices that you make when you get together with other people to make this part of a bigger, wider struggle.

Cindy Cohn:
I think that's so right. And even those who are out there in their garages, innovating right now, they need all the rest of the conversation to work. Nobody just put something out there in the world and it magically caught fire and changed the world. I mean, we like that narrative, but that's not how it works. And so, even if you're one of those people--and there are many of them who are EFF fans and we love them--who are out there thinking about the next big idea, this whole movement has to move forward, so that that big idea finds the fertile ground it needs, takes seed and grows, and then gives all the rest of us the really cool stuff in our fixed future.

Cindy Cohn:
So, thank you so much, Cory, for taking time with us. You never fail to bring exciting ideas. And you're also really willing to talk to a sophisticated audience, to not talk down to people, to bring in complicated ideas, and to expect--and get--the audience to come up to the level of the conversation. I certainly always learn from talking with you.

Cory Doctorow:
I was going to say, I learned it all from you guys, so thank you very much. And I miss you guys, I can't wait to see you in person again.

Danny O'Brien:
Cory is this little ball of pure idea concentrate, and I was madly scribbling notes through all of that discussion. But one of the phrases that stuck with me was when he said the companies are blocking interoperability to control critics, customers, and competitors.

Cindy Cohn:
Yeah. I thought that was really good too, and obviously, the most important part of all of this is control. I mean, that's what the companies have. Of course, the part about critics is what especially triggers the First Amendment concerns, but control is the thing and I think that the ultimate power that we should have, the ultimate amount of control we should have is the ability to leave.

Cindy Cohn:
The ultimate power is the power to leave. That's the core thing that is needed to get companies to concentrate on their users. The conflict here is really between companies' desire to control users and users having the right to choose where they want to be.

Danny O'Brien:
One of the other things that comes out of this discussion is the realization that when companies, by blocking interoperability, gain exclusive power of censorship or control over their users, there's always someone else more powerful who has influence over the companies and is ultimately going to take and use that power, right? And that's, generally speaking, governments.

Danny O'Brien:
We notice that when you have this capability to influence, or to censor, or to manipulate your users, governments and states ultimately would like access to that power also.

Cindy Cohn:
Yeah. We're seeing this all over the place, there's always a bigger fish. Right now, we see politicians in the United States, in very different directions, jockeying to force companies like Facebook to obey their preferences or agendas. And again, we have high-profile counter-examples, but where we live, EFF, in the trenches, we see that this power of censorship is most often used against those with the least voice in the political arena.

Cindy Cohn:
That kind of branches out to why we care about censorship and the First Amendment. I think that sometimes people forget this. We don't care about the First Amendment and free speech because we think it's okay for anybody to be able to say whatever they want, no matter how awful it is. The First Amendment isn't in our Constitution because we think it's really great to be an asshole. It's because the power to censor is so strong, and so easily misused.

Cindy Cohn:
As we've seen, once somebody has that power, everybody wants to control them. The other thing I think Cory really has a good grasp on is how we got here. We talked a little bit about the kill zone, that venture capitalists won't fund startups that attempt to compete. I think that's really right, and it's a piece that we're going to have to fix.

Danny O'Brien:
Yeah. I think one of the subtleties about the current VC environment that powers so much of tech investment is the nature of the exit strategy. These days, a venture capitalist expects to get their return not from a company IPOing or successfully overturning one of these monopolies, but from being bought out by those monopolies. And that really constrains and influences what new innovators or entrepreneurs plan on doing in the next few years. I think that's one of the things that keeps us stuck in this current, less-than-useful cycle.

Danny O'Brien:
And usually, in these situations, I think that the community that I most expect to provide adversarial interoperability is, at least in theory, free of those financial incentives. And that's the free and open-source software community. So much of the history of open source has been using interoperability to build and escape from existing proprietary systems, from the early days of Unix, to LibreOffice being a competitor to Microsoft's word processing monopoly, and so on.

Danny O'Brien:
And I think where these two things interact is that, these days, a lot of open-source and free software gets its funding from the big companies themselves, and they don't necessarily want to fund interoperability. That means the stuff that doesn't cater to interoperability gets a lot of rewards, while other communities, fighting to shake off the shackles of proprietary software and dominant monopolies, struggle without financial support.

Danny O'Brien:
And of course, there's legal liability there too. We just watched the youtube-dl case, in which GitHub threw the project off its service because it's an attempt to interoperate with one of these big tech giants.

Cindy Cohn:
Yeah. The free and open-source world is vital. They have those muscles, and it's always been how they work: they've always had to make sure that they can play on whatever hardware you have, as well as with other software. So, I think this is a key to getting us to a place where we can make interoperability the norm, not the exception.

Cindy Cohn:
I'm also really pleased about the Internet Archive's work supporting the idea of a more distributed web. I think they really get the censorship possibilities, and they're supporting a lot of little companies, developers, and innovators who are trying to build a community to get this done. And yes, the youtube-dl case: this is a situation where the lack of protection for interoperability meant that the first thing that happened was that a tool so many people rely on went away, rather than any other step. The first thing that happens is we lose the tool. That's because the legal system isn't set up to be even-handed or to handle these kinds of situations; it just moves to censorship first.

Cindy Cohn:
So in this episode, we've gone over what Cory talked about as the four levers of change, an idea that Larry Lessig originated in the '90s. Those four levers are: law, like the DMCA (which is being used in the youtube-dl case), the Computer Fraud and Abuse Act, and antitrust; norms; technology; and markets.

Cindy Cohn:
They all work together, and you can't just pick one. And there's a lot of efforts to try to say, "Well, you just have to pick one and let the others go." But in my experience, you really can't tell which one will create change, they all reinforce each other. And so, to really fix the Internet, we have to push on all four together.

Danny O'Brien:
But at least now we have four levers rather than no levers at all. On that note, I think we'll wrap up for today. Join us next time.

Danny O'Brien:
Thanks again for joining us. If you'd like to support the Electronic Frontier Foundation, here are three things you can do today. One: you can hit subscribe in your podcast player of choice. And if you have time, please leave a review. It helps more people find us. Two: please share on social media and with your friends and family. Three: please visit eff.org/podcasts, where you can find more episodes, learn about these issues, donate to become a member, and lots more.

Danny O'Brien:
Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat, or an EFF hoodie, or even a camera cover for your laptop. Thanks once again for joining us, and if you have any feedback on this episode, please email podcast@eff.org. We do read every email. This podcast was produced by the Electronic Frontier Foundation with help from Stuga Studios. Music by Nat Keefe of BeatMower.

rainey Reitman

The FCC’s Independence and Mission Are at Stake with Trump Nominee

6 days 18 hours ago

When there are only five people in charge of a major federal agency, the personal agenda of even one of them can have a profound impact. That’s why EFF is closely watching the nomination of Nathan Simington to the Federal Communications Commission (FCC).

Simington’s nomination appears to be the culmination of a several-month project to transform the FCC and expand its purview in ways that threaten our civil liberties online. The Senate should not confirm him without asking some crucial questions about whether and how he will help ensure that the FCC does the public interest job Congress gave it, which is to expand broadband access, manage the public’s wireless spectrum to their benefit, and protect consumers when they use telecommunications services.

There’s good reason to worry: Simington was reportedly one of the legal architects behind the president’s recent executive order seeking to have the FCC issue “clarifying” regulations for social media platforms. The executive order purports to give the FCC authority to create rules to which social media platforms must adhere in order to enjoy liability protections under Section 230, the most important law protecting our free speech online. Section 230 protects online platforms from liability for the speech of their users, while protecting their flexibility to develop their own speech moderation policies. The Trump executive order would upend that flexibility. 

As we’ve explained at length, this executive order was based on a legal fiction. The FCC’s role is not to enforce or interpret Section 230; its job is to regulate the United States’ telecommunications infrastructure: broadband, telephone, cable television, satellite, and all the various infrastructural means of delivering information to and from homes and businesses in the U.S. Throughout the Trump administration, the FCC has often shirked that duty—most dramatically, by abandoning any meaningful defense of net neutrality. Simington’s nomination seems to be an at-the-buzzer shot by an administration that’s been focused on undermining our protections for free speech online, instead of upholding the FCC’s traditional role of ensuring affordable access to the Internet and other communications technologies, and ensuring that those technologies don’t unfairly discriminate against specific users or uses.

The FCC Is Not the Speech Police—And Shouldn’t Be

Let’s take a look at the events leading up to Simington’s nomination. Twitter first applied a fact-check label to a tweet of President Trump’s in May, in response to his claims that mail-in ballots were part of a campaign of systemic voter fraud. As a private company, Twitter has the First Amendment right to implement such fact-checks, or even to choose not to carry someone’s speech for any reason.

The White House responded with its executive order that, among other things, directed the FCC to draft regulations that would narrow the Section 230 liability shield. In doing so, it perverted the FCC’s role: the agency is supposed to be a telecom regulator, not the social media police.

The White House executive order reflects a long-running (and unproven) claim in conservative circles that social media platforms are biased against conservative users. Some lawmakers and commentators have even claimed that their biased moderation practices somehow strip social media platforms of their liability protections under Section 230. As early as 2018, Sen. Ted Cruz incorrectly told Facebook CEO Mark Zuckerberg that in order to be shielded by 230, a platform had to be a “neutral public forum.” In the years since then, members of Congress have introduced multiple bills purporting to condition platforms’ 230 immunity on “neutral” moderation policies. As we’ve explained to Congress, a law demanding that platforms moderate speech in a certain way would be unconstitutional. The misguided executive order has the same inherent flaw as the bills: the government cannot dictate online platforms’ speech policies.

It’s not the FCC’s job to police social media, and it’s also not the president’s job to tell it to. By design, the FCC is an independent agency and not subject to the president’s demands. But when Republican FCC commissioner Michael O’Rielly correctly pointed out that government efforts to control private actor speech were unconstitutional, he was quickly punished. O’Rielly wrote [pdf], “the First Amendment protects us from limits on speech imposed by the government – not private actors – and we should all reject demands, in the name of the First Amendment, for private actors to curate or publish speech in a certain way.” The White House responded by withdrawing O’Rielly’s nomination and nominating Simington, one of the drafters of the executive order.

During a transition of power, it’s customary for independent agencies like the FCC to pause on controversial actions. The current FCC has so far adhered to that tradition, only moving forward items that have unanimous support. Every item the FCC has voted on since the election had the support of the Chair, the other four commissioners, and industry and consumer groups. For example, the FCC has moved forward on freeing up 5.9 GHz spectrum for unlicensed uses, a move applauded by EFF and most experts. But we worry that in nominating Simington, the administration is attempting to pave the way for a future FCC to go far beyond its traditional mandate and move into policing social media platforms’ policies. We’re glad to see Fight for the Future, Demand Progress, and several other groups rightfully calling on the Senate to not move forward on Nate Simington’s nomination.

The FCC’s Real Job Is More Important Than Ever 

There’s no shortage of work to do within the FCC’s traditional role and statutory mandate. The FCC must begin to address the pressure test that the COVID-19 pandemic has posed to the U.S. telecommunications infrastructure. Much of the U.S. population must now rely on home Internet subscriptions for work, education, and socializing. Millions of families either have no home Internet access at all or lack sufficient access to meet this new demand. The new FCC has a monumental task in front of it.

During his Senate confirmation hearing, Simington gave no real indication of how he plans to work on the real issues facing the agency: broadband access, remote school challenges, spectrum management, improving competition, and public safety rules, for example. The only things we learned from the hearing are that he plans to continue the Trump-era policy of refusing to regulate large ISPs and that he refuses to recuse himself from decisions on the misguided executive order that he helped write. Before the Simington confirmation hearing started, Trump again urged Republicans to quickly confirm his nominee on a partisan basis.

In response, Senator Richard Blumenthal called for a hold on Simington’s nomination, indicating real concern for the FCC’s independence from the White House. That means the Senate would need to bypass his filibuster if it truly wanted to confirm Trump’s nominee.

Sen. Blumenthal’s concerns are real and important. President Trump effectively fired his own commissioner (O’Rielly) for expressing basic First Amendment principles. Before it confirms Simington, the Senate ought to consider what the nomination means for the future of the FCC. As the pandemic continues to worsen, there are too many mission critical issues for the FCC to tackle for it to continue with Trump’s misguided war on Section 230.

Ernesto Falcon

ICANN Can Stand Against Censorship (And Avoid Another .ORG Debacle) by Keeping Content Regulation and Other Dangerous Policies Out of Its Registry Contracts

1 week ago

The Internet’s domain name system is not the place to police speech. ICANN, the organization that regulates that system, is legally bound not to act as the Internet’s speech police, but its legal commitments are riddled with exceptions, and aspiring censors have already used those exceptions in harmful ways. This was one factor that made the failed takeover of the .ORG registry such a dangerous situation. But now, ICANN has an opportunity to curb this abuse and recommit to its narrow mission of keeping the DNS running, by placing firm limits on so-called “voluntary public interest commitments” (PICs, recently renamed Registry Voluntary Commitments, or RVCs).

For many years, ICANN and the domain name registries it oversees have given mixed messages about their commitments to free speech and to staying within their mission. ICANN’s bylaws declare that “ICANN shall not regulate (i.e., impose rules and restrictions on) services that use the Internet’s unique identifiers or the content that such services carry or provide.” ICANN’s mission, according to its bylaws, “is to ensure the stable and secure operation of the Internet's unique identifier systems.” And ICANN, by its own commitment, “shall not act outside its Mission.”

But…there’s always a but. The bylaws go on to say that ICANN’s agreements with registries (the managing entities of each top-level domain like .com, .org, and .horse) and registrars (the companies you pay to register a domain name for your website) automatically fall within ICANN’s legal authority, and are immune from challenge, if they were in place in 2016, or if they “do not vary materially” from the 2016 versions.

Therein lies the mischief. Since 2013, registries have been allowed to make any commitments they like and write them into their contracts with ICANN. Once they’re written into the contract, they become enforceable by ICANN. These “voluntary public interest commitments”  have included many promises made to powerful business interests that work against the rights of domain name users. For example, one registry operator puts the interests of major brands over those of its actual customers by allowing trademark holders to stop anyone else from registering domains that contain common words they claim as brands.

Further, at least one registry has granted itself “sole discretion and at any time and without limitation, to deny, suspend, cancel, or transfer any registration or transaction, or place any domain name(s) on registry lock, hold, or similar status” for vague and undefined reasons, without notice to the registrant and without any opportunity to respond.  This rule applies across potentially millions of domain names. How can anyone feel secure that the domain name they use for their website or app won’t suddenly be shut down? With such arbitrary policies in place, why would anyone trust the domain name system with their valued speech, expression, education, research, and commerce?

Voluntary PICs even played a role in the failed takeover of the .ORG registry earlier this year by the private equity firm Ethos Capital, which is run by former ICANN insiders. When EFF and thousands of other organizations sounded the alarm over private investors’ bid for control over the speech of nonprofit organizations, Ethos Capital proposed to write PICs that, according to them, would prevent censorship. Of course, because the clauses Ethos proposed to add to its contract were written by the firm alone, without any meaningful community input, they had more holes than Swiss cheese. If the sale had succeeded, ICANN would have been bound to enforce Ethos’s weak and self-serving version of anti-censorship.

A Fresh Look by the ICANN Board?

The issue of PICs is now up for review by an ICANN working group known as “Subsequent Procedures.” Last month, the ICANN Board wrote an open letter to that group expressing concern about PICs that might entangle ICANN in issues that fall “outside of ICANN’s technical mission.” It bears repeating that the one thing explicitly called out in ICANN’s bylaws as being outside of ICANN’s mission is to “regulate” Internet services “or the content that such services carry or provide.” The Board asked the working group [pdf] for “guidance on how to utilize PICs and RVCs without the need for ICANN to assess and pass judgment on content.”

A Solution: No Contractual Terms About Content Regulation

EFF supports this request, and so do many other organizations and stakeholders who don’t want to see ICANN become another content moderation battleground. There’s a simple, three-part solution that the Subsequent Procedures working group can propose:

  • PICs/RVCs can only address issues with domain names themselves—not the contents of websites or apps that use domain names;
  • PICs/RVCs should not give registries unbounded discretion to suspend domain names;
  • and PICs/RVCs should not be used to create new domain name policies that didn’t come through ICANN processes.

In short, while registries can run their businesses as they see fit, ICANN’s contracts and enforcement systems should have no role in content regulation, or any other rules and policies beyond the ones the ICANN Community has made together.

A guardrail on the PIC/RVC process will keep ICANN true to its promise not to regulate Internet services and content.  It will help avoid another situation like the failed .ORG takeover, by sending a message that censorship-for-profit is against ICANN’s principles. It will also help registry operators to resist calls for censorship by governments (for example, calls to suppress truthful information about the importation of prescription medicines). This will preserve Internet users’ trust in the domain name system.

Mitch Stoltz

Once Again, Facebook Is Using Privacy As A Sword To Kill Independent Innovation

1 week 2 days ago

Facebook claims that their role as guardian of users’ privacy gives them the power to shut down apps that give users more control over their own social media experience. Facebook is wrong. The latest example is their legal bullying of Friendly Social Browser.

Friendly is a web browser with plugins geared towards Facebook, Instagram, and other social media sites. It’s been around since 2010 and has a passionate following. Friendly offers ad and tracker blocking and simplifies downloading of photos and videos. It lets users search their news feeds by keyword, or reorder their feeds chronologically, and it displays Facebook pages with alternative “skins.”

To Facebook’s servers, Friendly is just a browser like any other. Users run Friendly much as they would Google Chrome, Mozilla Firefox, or any other standard web browser. According to Friendly, its software doesn’t call any developer interfaces (APIs) into Facebook or Instagram. Friendly has also stated that they don’t collect any personal information about users, including posts or uploads. Friendly does collect some anonymous usage data, and sends the ads that people view to a third-party analytics firm.

Over the summer, Facebook’s outside counsel demanded that Friendly stop offering its browser. Facebook’s lawyer claimed that Friendly violated Facebook’s terms of service by “chang[ing] the way Facebook and Instagram look and function” and “impairing [their] intended operation.” She claimed, incorrectly, that violating Facebook’s terms of service was also a violation of the federal Computer Fraud and Abuse Act (CFAA) and its California counterpart.

Although Friendly explained to Facebook’s lawyers that their browser didn’t access any Facebook developer APIs, Facebook hasn’t budged from its demand that Friendly drop dead. 

Today, EFF sent Facebook a letter challenging Facebook’s legal claims. We explained that the CFAA and its California counterpart are concerned with “access” to a protected computer:

California law defines “access” as “to gain entry to, instruct, cause input to, cause output from, cause data processing with, or communicate with” a computer. Friendly is a web browser, so it is our understanding that Friendly does not itself “gain entry to” or “communicate with” Facebook in any way. Like other popular browsers such as Google Chrome or Mozilla Firefox, therefore, Friendly does not “access” Facebook; Facebook users do. But presumably Facebook knows better than to directly accuse its users of being malicious hackers if they change the colors of websites they view.

While EFF is not representing Friendly at this time, we weighed in because Facebook’s claims are dangerous. Facebook is claiming the power to decide which browsers its users can use to access its social media sites, an extremely broad claim. According to the reasoning of Facebook’s demand, accessibility software like screen readers, magnifiers, and tools that change fonts or colors to make pages more readable for visually impaired people all exist by Facebook’s good will, and could be shut down anytime if Facebook decides they “change the way Facebook and Instagram look and function.”

Friendly is far from the only victim of the company’s strong-arming. Just last month, Facebook threatened the NYU Ad Observatory, a research project that recruits Facebook users to install a plugin to collect the ads they’re shown. And in 2016, Facebook convinced a federal court of appeals that the CFAA barred a third-party social media aggregator from interacting with user accounts, even when those users chose to sign up for the aggregator’s service. In sum, Facebook’s playbook—using the CFAA to enforce spurious privacy claims—has made it harder for innovators, security experts, and researchers of all stripes to use Facebook in their work. 

Facebook has claimed that it must bring its legal guns to bear on any software that interoperates with Facebook or Instagram without permission, citing to the commitments that Facebook made to the Federal Trade Commission after the Cambridge Analytica scandal. But there are different kinds of privacy threats. Facebook’s understandable desire to protect users (and its own reputation) against privacy abuses by third parties like Cambridge Analytica doesn’t take away users’ right to guard themselves against Facebook’s own collection and mishandling of their personal data by employing ad- and tracker-blocking software like Friendly (or EFF’s Privacy Badger, for that matter). 

Nor do Facebook’s privacy responsibilities justify stopping users from changing the way they experience Facebook, and choosing tools to help them do that. Attempts to lock out third-party innovators are not a good look for a company facing antitrust investigations, including a pending lawsuit from the Federal Trade Commission.

The web isn’t television. Website owners might want to control every detail about how their sites look and function, but since the very beginning, users have always been in control of their own experience—it’s one of the defining features of the Web. Users can choose to re-arrange the content they receive from websites, save it, send it along to others, or ignore some of it by blocking advertisements and tracking devices. The law can’t stop users from choosing how to receive Facebook content, and Facebook shouldn’t be trying to lock out competition under a guise of protecting privacy.

Related Cases: Facebook v. Power Ventures
Mitch Stoltz

Video Analytics User Manuals Are a Guide to Dystopia

1 week 3 days ago

A few years ago, when you saw a security camera, you may have thought that the video feed went to a VCR somewhere in a back office that could only be accessed when a crime occurs. Or maybe you imagined a sleepy guard who only paid half-attention, and only when they discovered a crime in progress. In the age of internet-connectivity, now it’s easy to imagine footage sitting on a server somewhere, with any image inaccessible except to someone willing to fast forward through hundreds of hours of footage.

That may be how it worked in 1990s heist movies, and it may be how a homeowner still sorts through their own home security camera footage. But that's not how cameras operate in today's security environment. Instead, advanced algorithms are watching every frame on every camera and documenting every person, animal, vehicle, and backpack as they move through physical space, and thus camera to camera, over an extended period of time. 

The term "video analytics" seems boring, but don't confuse it with how many views you got on your YouTube “how to poach an egg” tutorial. In a law enforcement or private security context, video analytics refers to using machine learning, artificial intelligence, and computer vision to automate ubiquitous surveillance. 

Through the Atlas of Surveillance project, EFF has found more than 35 law enforcement agencies that use advanced video analytics technology. That number is steadily growing as we discover new vendors, contracts, and capabilities. To better understand how this software works, who uses it, and what it’s capable of, EFF has acquired a number of user manuals. And yes, they are even scarier than we thought. 

Briefcam, which is often packaged with Genetec video technology, is frequently used at real-time crime centers. These are police surveillance facilities that aggregate camera footage and other surveillance information from across a jurisdiction. Dozens of police departments use Briefcam to search through hours of footage from multiple cameras in order to, for instance, narrow in on a particular face or a specific colored backpack. This power of video analytics software would be particularly scary if used to identify people out practicing their First Amendment right to protest. 

Avigilon systems are a bit more opaque, since they are often sold to businesses, which aren't subject to the same transparency laws. In San Francisco, for instance, Avigilon provides the cameras and software for at least six business improvement districts (BIDs) and Community Benefit Districts (CBDs). These districts blanket neighborhoods in surveillance cameras and relay the footage back to a central control room. Avigilon’s video analytics can undertake object identification (such as whether things are cars or people), license plate reading, and potentially face recognition. 

You can read the Avigilon user manual here, and the Briefcam manual here. The latter was obtained through the California Public Records Act by Dylan Kubeny, a student journalist at the University of Nevada, Reno Reynolds School of Journalism. 

But what exactly are these software systems' capabilities? Here’s what we learned: 

Pick a Face, Track a Face, Rate a Face

If you're watching video footage on Briefcam, you can select any face, then add it to a "watchlist." Then, with a few more clicks, you can retrieve every piece of video you have with that person's face in it. 

Briefcam assigns all face images 1-3 stars. One star: the AI can't even recognize it as a person. Two stars: medium confidence. Three stars: high confidence.  

Detection of Unusual Events

Avigilon has a pair of algorithms that it uses to predict what it calls "unusual events." 

The first can detect "unusual motions," essentially patterns of pixels that don't match what you'd normally expect in the scene. It takes two weeks to train this self-learning algorithm.  The second can detect "unusual activity" involving cars and people. It only takes a week to train. 

Also, there's "Tampering Detection" which, depending on how you set it, can be triggered by a moving shadow:

Enter a value between 1-10 to select how sensitive a camera is to tampering Events. Tampering is a sudden change in the camera field of view, usually caused by someone unexpectedly moving the camera. Lower the setting if small changes in the scene, like moving shadows, cause tampering events. If the camera is installed indoors and the scene is unlikely to change, you can increase the setting to capture more unusual events.

Pink Hair and Short Sleeves 

With Briefcam’s shade filter, a person searching a crowd could filter by the color and length of items of clothing, accessories, or even hair. Briefcam’s manual even states the program can search a crowd or a large collection of footage for someone with pink hair. 

In addition, users of BriefCam can search specifically by what a person is wearing and other “personal attributes.” Law enforcement attempting to sift through crowd footage or hours of video could search for someone by specifying blue jeans or a yellow short-sleeved shirt.

Man, Woman, Child, Animal

BriefCam sorts people and objects into specific categories to make them easier for the system to search for. BriefCam breaks people into the three categories of “man,” “woman,” and “child.” Scientific studies show that this type of categorization can misidentify gender nonconforming, nonbinary, trans, and disabled people whose bodies may not conform to the rigid criteria the software looks for when sorting people. Such misidentification can have real-world harms, like triggering misguided investigations or denying access.

The software also breaks down other categories, including distinguishing between different types of vehicles and recognizing animals.

Proximity Alert

In addition to monitoring the total number of objects in a frame or the relative size of objects, BriefCam can detect proximity between people and the duration of their contact. This might make BriefCam a prime candidate for “COVID-19 washing,” or rebranding invasive surveillance technology as a potential solution to the current public health crisis. 

Avigilon also claims it can detect skin temperature, raising another possible assertion of public health benefit. But, as we’ve argued before, remote thermal imaging can often be very inaccurate, and fail to detect virus carriers that are asymptomatic. 

Public health is a collective effort. Deploying invasive surveillance technologies that could easily be used to monitor protestors and track political figures is likely to breed more distrust of the government. This will make public health collaboration less likely, not more. 

Watchlists 

One feature available with both Briefcam and Avigilon is watchlists, and we don't mean a notebook full of names. Instead, the systems allow you to upload folders of faces and spreadsheets of license plates, and then the algorithm will find matches and track the targets’ movement. The underlying watchlists can be extremely problematic. For example, EFF has looked at hundreds of policy documents for automated license plate readers (ALPRs), and it is very rare for an agency to describe the rules for adding someone to a watchlist. 

Vehicles Worldwide 

Often, ALPRs are associated with England, the birthplace of the technology, and the United States, where it has metastasized. But Avigilon already has its sights set on new markets and has programmed its technology to identify license plates across six continents.

It's worth noting that Avigilon is owned by Motorola Solutions, the same company that operates the infamous ALPR provider Vigilant Solutions.

Conclusion

We’re heading into a dangerous time. The lack of oversight of police acquisition and use of surveillance technology has dangerous consequences for those misidentified or caught up in the self-fulfilling prophecies of AI policing.

In fact, at a recent panel, Dr. Rashall Brackney, the Charlottesville Police Chief, described these video analytics as perpetuating racial bias. Video analytics "are often incorrect," she said. "Over and over they create false positives in identifying suspects."

This new era of video analytics capabilities causes at least two problems. First, police could rely more and more on this secretive technology to dictate who to investigate and arrest by, for instance, identifying the wrong hooded and backpacked suspect. Second, people who attend political or religious gatherings will justifiably fear being identified, tracked, and punished. 

Over a dozen cities across the United States have banned government use of face recognition, and that’s a great start. But this only goes so far. Surveillance companies are already planning ways to get around these bans by using other types of video analytic tools to identify people. Now is the time to push for more comprehensive legislation to defend our civil liberties and hold police accountable. 

To learn more about Real-Time Crime Centers, read our latest report here.

Banner image source: Mesquite Police Department pricing proposal.

Dave Maass

Introducing Cover Your Tracks!

1 week 3 days ago

Today, we’re pleased to announce Cover Your Tracks, the newest edition and rebranding of our historic browser fingerprinting and tracker awareness tool Panopticlick. Cover Your Tracks picks up where Panopticlick left off. Panopticlick was about letting users know that browser fingerprinting was possible; Cover Your Tracks is about giving users the tools to fight back against the trackers, and improve the web ecosystem to provide privacy for everyone.

A screen capture of the front page of coveryourtracks.eff.org. The mouse clicks on “Test your browser” button, which loads a results page with a summary of protections the browser has in place against fingerprinting and tracking. The mouse scrolls down to toggle to “detailed view”, which shows more information about each metric, such as further information on System Fonts, Language, and AudioContext fingerprint, among many other metrics.

Over a decade ago, we launched Panopticlick as an experiment to see whether the different characteristics that a browser communicates to a website, when viewed in combination, could be used as a unique identifier that tracks a user as they browse the web. We asked users to participate in an experiment to test their browsers, and found that overwhelmingly the answer was yes—browsers were leaking information that allowed web trackers to follow their movements.

The old Panopticlick website.

In this new iteration, Cover Your Tracks aims to make browser fingerprinting and tracking more understandable to the average user. With helpful explainers accompanying each browser characteristic, describing how it contributes to their fingerprint, users get an in-depth look into just how trackers can use their browser against them.

Our browsers leave traces of identifiable information just like an animal might leave tracks in the wild. These traces can be combined into a unique identifier which follows users as they browse the web, like wildlife that has been tagged by an animal tracker. And, on the web as in the wild, one of the best ways to confuse trackers is to blend in with the crowd, making it hard for them to identify you individually. Some browsers are able to protect their users by making all instances of their browser look the same, regardless of the computer it’s running on. In this way, there is strength in numbers. Users can also “cover their tracks,” protecting themselves by installing extensions like our own Privacy Badger.
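
To make the tracking metaphor concrete, here is a minimal, hypothetical sketch of how a tracker might boil a handful of browser characteristics down into a single identifier. This is not Cover Your Tracks’ actual code, and the traits and values below are invented for illustration; real fingerprinting scripts harvest dozens of signals, such as canvas output, AudioContext results, and installed fonts.

```python
# Hypothetical illustration of browser fingerprinting: combine reported
# browser traits into one stable hash. Not EFF's or any tracker's real code.
import hashlib
import json

def fingerprint(traits: dict) -> str:
    # Serialize the traits in a canonical order so the same browser
    # produces the same hash on every visit.
    canonical = json.dumps(traits, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:84.0) Firefox/84.0",
    "screen": "1920x1080x24",
    "timezone": "America/Los_Angeles",
    "fonts": ["DejaVu Sans", "Liberation Serif", "Noto Color Emoji"],
}

print(fingerprint(visitor))
```

The defenses described above follow directly from this sketch: if many browsers report identical values, or a browser randomizes its answers on every visit, the resulting identifier no longer singles anyone out.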

A screenshot from Cover Your Tracks’ learning page, https://coveryourtracks.eff.org/learn

For beginners, we’ve created a new learning page detailing the methodology we use to mimic trackers and test browsers, as well as next steps users can take to learn more and protect themselves. Because tracking and fingerprinting are so complex, we wanted to provide users a way to deep-dive into exactly what kind of tracking might be happening, and how it is performed.

We have also worked with browser vendors such as Brave to provide more accurate results for browsers that are employing novel anti-fingerprinting techniques. Add-ons and browsers that randomize the results of fingerprinting metrics have the potential to confuse trackers and mitigate the effects of fingerprinting as a method of tracking. In the coming months, we will provide new infographics that show users how they can become safer by using browsers that fit in with large pools of other browsers.

We invite you to test your own browser and learn more - just head over to Cover Your Tracks!

Bill Budington

Find Out How Ad Trackers Follow You On the Web With EFF’s “Cover Your Tracks” Tool

1 week 3 days ago
Beginner-Friendly Tool Gives Users Options for Avoiding Browser Fingerprinting and Tracking

San Francisco—The Electronic Frontier Foundation (EFF) today launched Cover Your Tracks, an interactive tool that teaches users how advertisers follow them as they shop or browse online, and how to fight back against corporate trackers to protect their privacy, mitigate relentless ad targeting, and improve the web ecosystem for everyone.

With Black Friday and Cyber Monday just days away, when millions of users will be shopping online, Cover Your Tracks provides an in-depth learning experience—aimed at non-technical users—about how they are unwittingly being tracked online through their browsers.

“Our browsers leave traces of identifiable information when we visit websites, like animals might leave tracks in the wild, and that can be combined into a unique identifier that follows us online, like wildlife that’s been tagged,” said EFF Senior Staff Technologist Bill Budington. “We want users to take back control of their Internet experience by giving them a tool that lets them in on the hidden tricks and technical ploys online advertisers use to follow them so they can cover their tracks.”

Cover Your Tracks allows users to test their browsers to see what information about their online activities is visible to, and scooped up by, trackers. It shines a light on tracking mechanisms that utilize cookies, code embedded on websites, and more. Users can also learn how to cover some of their tracks by changing browser settings and using anti-tracking add-ons like EFF’s Privacy Badger.

Cover Your Tracks builds on EFF’s ground-breaking tracker awareness tool Panopticlick, which exposed how advertisers create “fingerprints” of users by capturing little bits of information given off by their browsers and using that to identify and follow them around the web and build profiles for ad targeting.

Panopticlick showed users that browser fingerprinting existed. Cover Your Tracks takes the next step, helping empower users to uncover and combat trackers. The goal is to provide easy-to-understand information about exactly what kind of fingerprint tracking might be happening and how it’s performed.

“Cover Your Tracks shows how Amazon, Facebook, Google, Twitter, and hundreds of lesser known entities work together to exploit browser information in order to track users. They then use that information to bombard users with ads,”  said Budington. “We want users to learn a few tricks of their own to confuse trackers by utilizing browsers and extensions that give off the same information regardless of what computers they’re running on, or randomize certain bits of information so they can’t be used as a reliable tracker.”

Cover Your Tracks offers a learning page about the methodology EFF uses to mimic trackers and test browsers. EFF plans to add new infographics demonstrating how users can employ add-ons and new kinds of anti-fingerprinting browsers to fight tracking.

Visit Cover Your Tracks:
https://coveryourtracks.eff.org/

For more on corporate surveillance:
https://www.eff.org/wp/behind-the-one-way-mirror

For more on Panopticlick:
https://panopticlick.eff.org/

Contact: William Budington, Senior Staff Technologist, bill@eff.org
Karen Gullo

macOS Leaks Application Usage, Forces Apple to Make Hard Decisions

1 week 4 days ago

Last week, users of macOS noticed that attempting to open non-Apple applications while connected to the Internet resulted in long delays, if the applications opened at all. The interruptions were caused by a macOS security service attempting to reach Apple’s Online Certificate Status Protocol (OCSP) server, which had become unreachable due to internal errors. When security researchers looked into the contents of the OCSP requests, they found that these requests contained a hash of the developer’s certificate for the application that was being run, which was used by Apple in security checks.[1] The developer certificate contains a description of the individual, company, or organization which coded the application (e.g. Adobe or Tor Project), and thus leaks to Apple that an application by this developer was opened.

Moreover, OCSP requests are not encrypted. This means that any passive listener also learns which application a macOS user is opening and when.[2] Those with this attack capability include any upstream service provider of the user; Akamai, the ISP hosting Apple’s OCSP service; or any hacker on the same network as you when you connect to, say, your local coffee shop’s WiFi. A detailed explanation can be found in this article.
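
To illustrate why the lack of encryption matters, here is a rough sketch, using Python’s widely available `cryptography` library, of what an on-path observer could read out of a captured OCSP request. This is not Apple’s or trustd’s code; the packet capture that would supply the raw bytes is assumed.

```python
# Hypothetical sketch: parse a captured, plaintext OCSP request and print the
# fields that identify which certificate is being checked. Illustrative only.
from cryptography.x509 import ocsp

def describe_ocsp_request(der_bytes: bytes) -> None:
    req = ocsp.load_der_ocsp_request(der_bytes)
    # An OCSP request names the certificate by issuer hashes and serial
    # number -- enough for an observer to match it against a known developer
    # certificate and infer whose app was just launched.
    print("hash algorithm:  ", req.hash_algorithm.name)
    print("issuer name hash:", req.issuer_name_hash.hex())
    print("issuer key hash: ", req.issuer_key_hash.hex())
    print("serial number:   ", hex(req.serial_number))
```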

Part of the concern that accompanied this privacy leak was the exclusion of userspace applications like Little Snitch from the ability to detect or block this traffic. Even if altering traffic to essential security services on macOS poses a risk, we encourage Apple to allow power users the ability to choose trusted applications to control where their traffic is sent.

Apple quickly announced a new encrypted protocol for checking developer certificates and that they would allow users to opt out of the security checks. However, these changes will not roll out until sometime next year. Developing a new protocol and implementing it in software is not an overnight process, so it would be unfair to hold Apple to an impossible standard.

But why has Apple not simply turned the OCSP requests off for now? To answer this question, we have to discuss what the OCSP developer certificate check actually does. It prevents unwanted or malicious software from being run on macOS machines. If Apple detects that a developer has shipped malware (either through theft of signing keys or malice), Apple can revoke that developer’s certificate. When macOS next opens that application, Apple’s OCSP server will respond to the request (through a system service called `trustd`) that the developer is no longer trusted. So the application doesn’t open, thus preventing the malware from being run.
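
As a rough illustration of that flow (again a sketch, not Apple’s actual trustd implementation), a generic OCSP client’s decision boils down to reading the certificate status out of the response, as in the example below using Python’s `cryptography` library.

```python
# Hypothetical sketch of the generic OCSP revocation decision. Real clients
# also verify the responder's signature, check freshness, and apply a
# soft-fail or hard-fail policy when the responder is unreachable.
from cryptography.x509 import ocsp
from cryptography.x509.ocsp import OCSPCertStatus, OCSPResponseStatus

def developer_cert_still_trusted(der_response: bytes) -> bool:
    resp = ocsp.load_der_ocsp_response(der_response)
    if resp.response_status != OCSPResponseStatus.SUCCESSFUL:
        # The responder couldn't answer (as in last week's outage); the
        # client must choose between allowing or blocking the launch.
        return True
    # REVOKED means the issuer has withdrawn trust, for example because
    # signing keys were stolen or used to sign malware.
    return resp.certificate_status == OCSPCertStatus.GOOD
```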

Fixing this privacy leak, while maintaining the safety of applications by checking for developer certificate revocations through OCSP, is not as simple as fixing an ordinary bug in code. This is a structural bug, so it requires structural fixes. In this case, Apple faces a balancing act between user privacy and safety. A criticism can be made that they haven’t given users the option to weigh the dilemma on their own, and simply made the decision for them. This is a valid critique. But the inevitable response is equally valid: that users shouldn’t be forced to understand a difficult topic and its underlying trade-offs simply to use their machines.

Apple made a difficult choice to preserve user safety, but at the peril of their more privacy-focused users. macOS users who understand the risks and prefer privacy can take steps to block the OCSP requests. We recommend that users who do this set a reminder for themselves to restore these OCSP requests once Apple adds the ability to encrypt them.

[1] Initial reports of the failure claimed Apple was receiving hashes of the application itself, which would have been even worse, if it were true.

[2] Companies such as Adobe develop many different applications, so an attacker would be able to establish that the application being opened is one of the set of all applications that Adobe has signed for macOS. Tor, on the other hand, almost exclusively develops a single application for end-users: the Tor Browser. So an attacker observing the Tor developer certificate will be able to determine that Tor Browser is being opened, even if the user takes steps to obscure their traffic within the app.

Bill Budington

Podcast Episode: Fixing a Digital Loophole in the Fourth Amendment

1 week 5 days ago
Episode 003 of EFF’s How to Fix the Internet

Jumana Musa joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss how the third-party doctrine is undermining our Fourth Amendment right to privacy when we use digital services, and how recent court victories are a hopeful sign that we may reclaim these privacy rights in the future.

In this episode you’ll learn about:

  • How the third-party doctrine is a judge-created legal doctrine that impacts your business records held by companies, including metadata such as what websites you visit, who you talk to, your location information, and much more;
  • The Jones case, a vital Supreme Court case that found that law enforcement can’t use continuous location tracking with a GPS device without a warrant;
  • The Carpenter case, which found that the police must get a warrant before accessing cell site location information from a cell phone company over time;
  • How law enforcement uses geofence warrants to scoop up the location data collected by companies from every device that happens to be in a geographic area during a specific period of time in the past;
  • How getting the Fourth Amendment right is especially important because it is part of combatting racism: communities of color are more frequently surveilled and targeted by law enforcement, and thus slipshod legal standards for accessing data have a disproportionate impact on communities of color;
  • Why even a warrant may not be an adequate legal standard sometimes, and that there are circumstances in which accessing business records should require a “super warrant” – meaning law enforcement could only access the data for investigating a limited number of crimes, and only if the data would be important for the crime. 

Jumana Musa is a human rights attorney and racial justice activist. She is currently the Director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers. As director, Ms. Musa oversees NACDL's initiative to build a new, more durable Fourth Amendment legal doctrine for the digital age. The Fourth Amendment Center educates the defense bar on privacy challenges in the digital age, provides a dynamic toolkit of resources to help lawyers identify opportunities to challenge government surveillance, and establishes a tactical litigation support network to assist in key cases. Ms. Musa previously served as NACDL's Sr. Privacy and National Security Counsel.

Prior to joining NACDL, Ms. Musa served as a policy consultant for the Southern Border Communities Coalition, a coalition of over 60 groups across the southwest that address militarization and brutality by U.S. Customs and Border Protection agents in border communities. Previously, she served as Deputy Director for the Rights Working Group, a national coalition of civil rights, civil liberties, human rights, and immigrant rights advocates where she coordinated the “Face the Truth” campaign against racial profiling. She was also the Advocacy Director for Domestic Human Rights and International Justice at Amnesty International USA, where she addressed the domestic and international impact of U.S. counterterrorism efforts on human rights. She was one of the first human rights attorneys allowed to travel to the naval base at Guantanamo Bay, Cuba, and served as Amnesty International's legal observer at military commission proceedings on the base. You can find Jumana on Twitter at @musajumana.

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify, or your podcast player of choice. You can also find the MP3 of this episode on the Internet Archive. If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

Resources

3rd Party Doctrine & Metadata

Third-Party Doctrine and DNA/Genetic Privacy

SCOTUS Cases and Decisions re. Third Party Doctrine

Cases re. Location Data, Privacy, and Warrant Requirements

Black Lives Matter, the 4th Amendment, and Surveillance

Transcript of Episode 003: Fixing a Digital Loophole in the Fourth Amendment

Danny O'Brien:
Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now, problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of Internet law.

Cindy Cohn:
Hi, everyone. I'm Cindy Cohn. I'm a lawyer, and I'm the Executive Director of the Electronic Frontier Foundation.

Danny O'Brien:
I'm Danny O'Brien. I'm also at EFF, and I guess I'm the opposite of a lawyer, whatever that is. Without giving anything away, I hope, the focus of this week's episode is how to fix the third-party doctrine. While not everyone even knows what the third-party doctrine is, I can absolutely declare that when I learned about it, the very first thing I thought was, "Wow, this really needs to be fixed," and yet here we are.

Cindy Cohn:
Oh, yes. We'll go into this in much more detail with our guest. But briefly, the third-party doctrine is why courts have held that you have no Fourth Amendment protections in your metadata when it's held by a third party, like your phone company or your bank.

Danny O'Brien:
Or a tech company, like Facebook, Google, or, of course, Amazon, which has a lot of metadata about me.

Cindy Cohn:
Yes, exactly. So, again, it's not the content, but it's all the other stuff, which is things like who you talk to, the websites you visit, where you are when you visit them, and how long you were there.

Danny O'Brien:
Okay. Now pretend I know nothing, and all my civic lessons at school were solely about the Magna Carta and the treacherousness of Americans. What are your Fourth Amendment protections of which you speak, Cindy?

Cindy Cohn:
Well, my British friend, I'm tempted to cue King George in Hamilton right now, because that's kind of what you sound like. But the Fourth Amendment governs your privacy relationship with the government and specifically law enforcement's right to grab you, and for us here today, it also governs when they get to dig through your stuff. It requires the cops to go before a judge and get a warrant and show probable cause in order to get permission to do so, and they only get to do so for some very serious crimes. The third party doctrine suspends your Fourth Amendment rights when it comes to your metadata. But clearly the person you need to talk to is our guest, Jumana Musa.

Danny O'Brien:
Jumana is the Director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers. The Fourth Amendment Center provides materials, training, and direct assistance to defense lawyers who are handling cases involving new surveillance tools, technologies, and tactics in order to create a new legal doctrine that protects constitutional rights in the digital age.

Cindy Cohn:
Jumana, thanks so much for joining us. So tell us more about the third-party doctrine and how it relates to the Fourth Amendment and why it's such a priority for you folks at the National Association of Criminal Defense Lawyers.

Jumana Musa:
Well, thank you for having me on. I want to wish EFF a happy 30th birthday. I'm thrilled to be able to do this in the context of this particular milestone for all of you. I think EFF for so long has been at the forefront of this issue, which even before people sort of recognized it as a fundamental issue, the idea of what happens with these advances in technology, how do they impact people's privacy rights, and so congratulations to you all for this milestone.

Jumana Musa:
So why do we care about the third-party doctrine? I guess in a nutshell, I will say it like this. We are now at a place where, because of the way things have been digitized, because of the technology that we rely on in our day-to-day life, law enforcement is able to investigate people, to accumulate information, and to utilize that kind of data and information against people in ways they've never been able to before.

Jumana Musa:
The issue with that is whereas previously if law enforcement decided they wanted to know where John Doe was going on any given day or to follow them to see, were they involved in X, Y, or Z crime. They would actually have to go through the process of thinking about, "Is this serious enough? Do we want to expend the resources? Do we have enough people on the force to put two or three or four officers on this to follow them around constantly 24/7?," whereas now all they need to do sometimes is just requisition a company and say, "Can we have all the records of where John Doe has been?" or "Can we just put something on their card? Can we just find another way of doing this?", where the technology has made it so easy for this information to both be utilized, to be scanned, to be sort of put together all kinds of different ways that it almost makes the Fourth Amendment moot, which is supposed to be not the sort of ...

Jumana Musa:
I know people always think of the Constitution as your affirmative rights, like my right to privacy. But what it really is, it's a restriction on state power, and it's supposed to be the thing that protects you against a government who just says, "I could just decide that I want to know what John Doe or Jane Doe is up to, and I kind of feel like they're up to no good. So I'm just going to fish through everything until I can find something to pin on them." It's what we used to call a general warrant, right? Which was the idea that you're just going to pick somebody and search everything until you can find something to pin on them. That is almost the state of affairs when you look at the amount of data that comes from all the technologies and all the different surveillance tools that are out there.

Cindy Cohn:
So the third-party doctrine is a judge-created idea, created by the Supreme Court, that certain information that you have or that is about you is placed outside of the protection of the Fourth Amendment. The argument is that because you've given this information, or this information is being held not by you, but by someone else, it loses the constitutional protection. But right now, we're living in a time, between cloud computing and our phones and the way we live our lives, that some very, very detailed information about us is held by third parties and is subject to the doctrine, everything from the telephone records to the websites you visit online to what you read online to the books you read if you use a Kindle or Audible.

Cindy Cohn:
Your ISP has metadata, too. So, it's not just when you go to read your Gmail, but it's the ISP that hosts you on the way. It also can include your car, if your car is connected, Internet of things. If you've got a smart refrigerator, what your refrigerator knows about you could be subject to the third party doctrine. It's just a huge amount of information, and it can reveal extremely sensitive information about you and your loved ones and your community, which is why it's on the top of our list.

Danny O'Brien:
So Jumana, just to clarify for me, so all of this data that's stored by third parties is now stripped of its Fourth Amendment protections. Is there any kind of block there? Is there any protection, once that goes away? You don't have to apply for a warrant anymore, but do the companies have ways of saying you only get this data?

Jumana Musa:
In theory, there's some restrictions and guardrails. In reality, they just don't always come through, and even with a warrant, I will say that is true. The reason for that is this. I think there are times, particularly with warrants ... Law enforcement goes before a magistrate, and they say, "This is what I need." They may not always be clear on what they're asking, or they may just get such a broad warrant, because the magistrate may not fully comprehend what it is they're being asked for.

Jumana Musa:
So to give an example, there was a period of time where law enforcement was using devices called ... Well, people commonly call them stingrays. They're cell site simulators. Essentially, they act like a cell tower. So it's a device that could actually get all the cell phones in an area to, instead of going straight to the cell tower to get a signal, route first through this device that would help law enforcement locate you.

Jumana Musa:
They were going to magistrates and saying, "We need a pen register warrant," which is basically ... A pen register is like you go to the phone company and say, "I want all the to and from numbers, every number that this phone has dialed and every number that has been dialed into this phone." That's a very different thing than a stingray, which even has the opportunity to take the content of calls, right? But they were sort of hiding that information.

Jumana Musa:
So they may be hiding the information, or you may have people sign off on warrants where they say, "Of course. You can take all the devices and search everything," and they sign off on that warrant, right? So even though there's a warrant, it is so broad that it should be impermissible. So I think that's one factor, even with a warrant.

Jumana Musa:
When it comes to companies and records, there is broad leeway in terms of the types of records that people can get with a subpoena. There are opportunities for companies to push back and say, "I think this is too broad. I don't want to do this." But there's a lot involved in that, in terms of making that call, how far do you push it, the question of what's the reason they're being asked for it. It puts companies in a very difficult position to be the ones defending the sort of privacy rights of the person who is likely not even aware that this search is happening.

Cindy Cohn:
So just to clarify a little bit and lift this up a little bit, we think warrants are needed for this kind of metadata information, but law enforcement is able to get that information through legal processes, like subpoenas and other things. The problem is that that's just too low a standard and often gets abused. So we think that moving metadata up into the category where a warrant is required, and I think both Jumana and I are concerned that even the warrant standard is too low for some times, but moving it from the subpoenas, which you get by pushing print on a printer, to a warrant, where you actually have to go in front of a judge, is an important step along the way to protecting your privacy.

Cindy Cohn:
So I want to talk a little bit about some of the more recent things we're seeing. I specifically wanted to ask you about these things we're seeing called geofence warrants, Jumana, because I think they're particularly troubling, and they're troubling not just from a Fourth Amendment context, but I think also from a First Amendment free speech context as well.

Jumana Musa:
Absolutely. So we have been involved in geofence cases at the Fourth Amendment Center, and I think people don't fully understand the way in which their information is being utilized. So to give people a sense of what is what we're calling a geofence warrant, it is when there's a crime that law enforcement is investigating. Somebody stole widgets from a factory, and in order to investigate this crime, they're trying. They're looking. They have no leads. They have no suspects, and they have no avenue towards a lead or a suspect.

Jumana Musa:
So what they do in that moment is they say, "Okay, we're going to go to Google, and we're going to say, 'Please tell us all of the phones that have connected in this geographic region, say 150, 250 feet, within this hour or two hour-span of time.'" So that maybe sounds not invasive, but you could actually go on our website. We have a series of documents in the Chatrie case, which is one that we've been working on.

Jumana Musa:
In one of them, Google actually filed an affidavit where they said that in order to go through that process, in order to figure out what phones may have been in this small geographic area in this couple of hour timeframe, they first have to search tens of millions of records. So the first step in this process is to actually search across their entire location history database of all of the people who have connected anywhere to be able to identify who's been connected in that one geographic area.

Danny O'Brien:
Just to heighten this, right, so you talked a bit about general warrants, which I understand King George did, and I'm very sorry about that. But the difference here is that the Fourth Amendment warrant is aimed at a particular ... It's specific to a particular person, and that's to try and stop this fishing expedition idea. But when you talk about geofencing, if someone was to use this geofencing warrant, say, at a protest, right, that would mean that they would be essentially scooping up the identities of everyone who was at that protest, right?

Jumana Musa:
Absolutely. So, I mean, I think there's two different ways that people can get everybody at a protest, right? In this context, I think the first step is already you have to search tens of millions of records to figure out who's connected within that timeframe. But in the context of a protest, you're absolutely correct in the sense that they can say, "This thing happened, and that was a serious crime," whatever the thing is. "In order to charge a serious crime, we need to identify who was there," and to identify who was there, I will also tell you, in the context of this back and forth with Google, they're supposed to hand over information that is anonymized and then go through a back and forth with law enforcement to get to a place where they may de-anonymize a small number of people.

Jumana Musa:
But having seen it up close, the anonymization is not so anonymous, and the idea that you can go and get the information of everybody who's connected in the context of a demonstration because somebody may have burned something or something may have been vandalized is extremely concerning, because that's a hugely powerful tool that can be really dissuasive to people who are feeling like they should be able to go out and exercise their First Amendment rights for whatever it is.

Cindy Cohn:
Yep. I think that's right. Well, our goal today is to talk about fixing the Internet and how we fix things. So let's switch a little bit our focus. I want to talk a little bit about, you mentioned earlier that we're chipping away at the third party doctrine. I actually even started out this by saying that I was quite confident that we were going to chip it down even further in the next few years. So where are we in terms of what the third party doctrine reaches right now and where we've won some victories?

Jumana Musa:
We've seen it come up in a few different ways, and it's sort of evolving. So the three cases that we always talk about are the Jones case, which was 2012, where essentially what they were looking at was, they did get a warrant to put a tracker on someone's car. They had ten days to get the tracker on the person's car. They didn't put it on until the eleventh day, so you're already outside of the window of the warrant, and then they left it on there for 28 days.

Jumana Musa:
Part of the argument is, "Well, the car was out driving around on public roads. That is not private. You don't have a right to privacy on public roads. Anybody can see you." That is certainly true. At the same time, what the court found was doing it outside this window meant you were outside of the warrant, and you did it for 28 days, which is, you do have an interest in your location over time, because that is very revealing, right? That's one of the things the court came to. They were very focused in the majority opinion, which was unanimous. It was a unanimous opinion, but in the majority, they were very focused on the trespass of having put the tracker on the car.

Jumana Musa:
So if we fast-forward a couple of years, there was another case which was not location tracking, but it was a question of the amount of data that is gathered with digital devices, and that's the Riley case in 2014. What that case basically said, at the end of the day, there used to be the idea that if you're arresting someone, maybe you stop the car, you decide you saw contraband, something happened, you're now arresting the person who was driving the car.

Jumana Musa:
What this case was about was the idea that if you arrest someone in that scenario, can you then open their phone and start to go through their phone? This is when smartphones are really starting to be widely used. What the court said is no, that is not the same thing. It is not a container. In fact, it contains all of the privacies of life. It has your emails and your photos and all this other information. As such, it is treated differently. So that was sort of the next step.

Jumana Musa:
The most recent stuff we've seen is the Carpenter case in 2018. So this case was a case where they were trying to tie people to a series of robberies, and they went through and looked for their historical cell site location information. So what that means is everywhere you go with your smartphone, it pings off of towers. It pings off of all kinds of things and creates a little digital trail of where you've been. It's not exactly where you've been. It doesn't say, "You were exactly in the spot, and then you walked ten steps over here," but it can locate you over time.

Jumana Musa:
The argument was, this was third party records, right? I mean, this is the phone company's records. You don't have an interest in that. There's no privacy interest. So what the court found in that was actually, you do, and they did not say there was no longer a third party doctrine. They said there is. It just doesn't apply here. So basically what they're saying is tracking you all over the place gives a lot of information about your very personal things. If you worship, it will say where you've been, what kind of doctor you've been to, if you go to AA meetings. It can locate you at a lot of sensitive places.

Jumana Musa:
But one of the arguments that was being made was, "Well, the technology back at the time this case happened wasn't that precise. It only could generally locate people." But the court said, "We hear that, but it's already better, and it's only going to get better. So the idea that we're going to sort of decide this, looking back at the old technology, is not of use to us."

Cindy Cohn:
I think that's exactly right. So when we think about the third party doctrine, I think we're making great strides in terms of protecting your location, especially your historical location over time. We're taking strides to say that just because you have a phone in your hand doesn't mean everything that's on that phone and everything you can get through that phone, like going to Facebook or any of those kinds of things, is not available to you. Then we've got both the cell phone towers and the car case to indicate this idea that where you travel over time should be protected. So that's what I mean, I think, when we talk about when we're chipping away at it.

Cindy Cohn:
So let's fast forward. We're into, now we're fixing it. So what's the world going to look like if we fix the third party doctrine, Jumana? How is my world going to change? How are your clients' worlds going to change? How does a protestor who wants to go out in the street ... How's our world going to be better if we fix this thing?

Jumana Musa:
So I think we're going to be better because we are going to reclaim some of our anonymity, right? I don't think that's something that people think about consciously, but part of it is if I just go walk down the street and I'm not in my neighborhood where everybody might know me, I might run into someone I know, but I might not see anybody I know, right? I could just be wandering down the street, looking in windows, looking at other people, thinking about life, doing whatever I'm doing. Nobody necessarily knows where I am.

Jumana Musa:
Historically, that's how it's been, right? You just walk off somewhere. Unless you physically run into somebody, there isn't necessarily a thought of where you are, and clearly that's not going to be possible in the digital age, where it's comprehensively like that. But to get some measure of that back, of that sort of anonymity, that control over your location, your movement, your idea of privacy from the government I think is really critical.

Jumana Musa:
So sort of looking forward, what does it look like? It looks like restricting government from being able to access these things writ large. I know sometimes people talk about, "Get a warrant." I've often said, "I know we say that, and it's great when at least they get a warrant, because there is that place where at least there's a judge or a magistrate," because the magistrate honestly doesn't actually have to be a judge in every state. It's not the same, but they may just have to have a college degree, right? So I don't want to make assumptions. But there is at least a person that may stand in the way and say, "Wait a minute, this doesn't look right. This looks too broad. You have to scale this back."

Danny O'Brien:
One of the visions I have for the future that is different from where we are now is that I feel that people have a generalized blanket anxiety about the data that they're giving to companies, and I think part of that anxiety comes from not knowing what's going to happen to it. I think one of the protections that a warrant gives you is you don't feel like data is going to be dug up on you if you're innocent or an innocent passerby, and I would like some clarity in the law that surrounds me that that isn't going to be the case.

Jumana Musa:
Well, coming from where I'm coming from, I'm going to say just because they're digging up the data, that doesn't speak to your innocence or non-innocence at all, right? It just speaks to their desire to investigate it. But I think that's true. I think that's very true, and I think we have sort of competing problems. One is it is hard to know just how much of your data is being gathered, right? I mean, I think some people who are deep in the weeds may have a really good sense. Most people don't really know, and I think when you compound that with the fact that there aren't really laws that restrict or govern that very well and then you add on top of that the fact that there's not a lot of things you can get anymore that aren't gathering data.

Jumana Musa:
For me, I use the example of, I drive a ten-year old Subaru, and it is low-tech. My kids tell me that all the time, right? I can't connect my phone to my car. I can't do this. I can't do that. I can't do anything that their friends' parents' cool cars do. What I know is right now, it's a Subaru. So it's going to last a long time. I appreciate that. It's got 100,000 miles on it. Eventually, I'm going to have to replace it, and by then, it is highly unlikely I'm going to be able to find a car that isn't connected in that way, that doesn't gather more data in that way, and it's true of all the things we're getting, smart appliances. You can't get a home security system that's sort of the old school that tells you if someone has opened the door or broken a window. So all of these things, the way they're developing that have positive aspects, they're developing ways to gather data, and data is really what companies are seeking.

Cindy Cohn:
Well, I think so. I would say, to me, this vision that you're bringing out around especially specifically the third party doctrine is really one of the presumption of innocence and, as you said, the presumption of anonymity, that what I read on what websites, what social media I have, who I'm friends with, who I'm not friends with, who I might spend the night with, who I don't spend the night with, what books I read, who I talk to, which way I talk to them, this is all information that ought to be under my control and that law enforcement needs to have a darn good reason to get access to. By darn good reason, I mean a darn good reason presented to somebody in a black robe who's going to evaluate this.

Cindy Cohn:
So to me, the end of the third party doctrine really resets our relationship with the government first. I think you're right. We still have to talk about companies, and we will do that as well. But this is about reclaiming the right of people to be secure in their papers and their effects against unreasonable searches and seizures. What we do in our lives, who we talk to, where we go, whether we're window shopping or seriously buying or whether we're just talking to a friend or whether we're researching an illness that we've heard a loved one had, we deserve to have a zone of protection against the government rummaging around in that information, because we might've made somebody mad or because we happen to have a friend who made somebody mad. I often say to people that just because you're never going to face ... Maybe law enforcement isn't going to come looking after you doesn't mean that you don't know anybody who is at risk. I think especially for people of color in our society right now, it doesn't need to be said.

Jumana Musa:
So Cindy, actually, I'm glad you said that. I think it needs to be said out loud, and I think the thing that people need to remember is that surveillance isn't new in society. Surveillance has been happening as long as there's been society, and it's been targeted largely at people of color, at people who dissent, at people who don't sort of go with the mainstream power structure. So people of color have been under scrutiny in this country since there've been people of color in this country, and particularly black people, but we can't sort of let that piece off.

Jumana Musa:
As we're in this moment where we're looking at policing in America, where Black Lives Matter is at the forefront, as it should be, we should also recognize when we're talking about these surveillance tools and technologies they are always going to be more heavily implemented in these communities, in communities of color, in low-income communities. They're going to be targeted towards black people. They're going to be targeted towards immigrant communities. That doesn't mean that there is no spillover effect into more affluent communities, into white communities, but the breakdown is no different than it is anywhere else in our criminal justice system.

Jumana Musa:
So I think that's a particularly acute point, even when you're talking about First Amendment rights, right, and the ability to protest. So I think that that needs to be a fundamental part of this conversation. Even if it never touches you or someone you know, if you care about those things, you should still care about this.

Cindy Cohn:
I think this is exactly right. Setting the Fourth Amendment right is part of standing up for Black Lives Matter. It's part of standing up for fairness in our society, because we know that the people who need these protections, the people who end up being overwhelmingly targeted by law enforcement are people of color. So standing up for protecting people's rights to just go around in the world, free of being vulnerable to surveillance is really a piece of the broader part of our efforts to try to make society less racist.

Danny O'Brien:
What I'm hearing from both of you is that there is real progress happening on the court side, that we have this progressive recognition that the third party doctrine has to be reformed, and actual kind of concrete steps to that at the Supreme Court level. It sounds to me that this is a race between the courts coming to terms with new technology and also the advance of that technology itself.

Danny O'Brien:
One of the things that I remember from listening to the lawyers talk about this at EFF was an incident where the companies were getting so tired of getting these requests, the telcos in particular, that they wrote some tools for law enforcement to get this information more easily, right? They automated the process of getting this data. For me, that's one of those terrible kind of downhill progressions, where it's inevitable that if there's no legal speed bumps to getting this data, the take is that geeks like me are just going to grease that path, right? We're going to spiral from these arguments that are sort of like this is a specific warrant, but it's a little non-specific to a world where mass surveillance is just presumed and these companies actively are helping out the governments with it.

Cindy Cohn:
Yeah, I think it's a tremendously important point. It's one of the reasons why the third party doctrine has been on our hit list for a long time, because, again, I completely agree with Jumana that simply requiring a warrant doesn't get us everywhere we need to go. But when you get rid of the idea that a judge needs to be in the middle of it, you do end up with things like this portal where you could upload a recipe and it would open the portal to letting law enforcement have access to people's phone records.

Cindy Cohn:
We know from the Snowden documents on down that telephone records can be tremendously sensitive. They know if you're standing on the Golden Gate Bridge calling the suicide hotline, or whether you're calling the Planned Parenthood, or whether you're calling the local gun shop. Your phone records, even without knowing what you say, your telephone records, the websites you visit, the social media, all of your metadata can be tremendously revealing. Making sure that there's a lot of friction for law enforcement, such that they have to have a good reason and be able to demonstrate it, and demonstrate it to somebody other than themselves, before they get information about you is one of the ways that we keep the balance between us and our government in the right place.

Danny O'Brien:
Jumana, can I just ask, what is the next step? So what comes after Carpenter, what are organizations like you doing in the public litigation space to move this ahead?

Jumana Musa:
Well, I think one of the things we're doing is looking at all the parameters that were put into Carpenter and trying to operationalize them in other circumstances, right? Because it's a question of, do you have to have all of those things? Does it have to be of a deeply revealing nature, and the depth, breadth, and comprehensive reach of it all and the inescapable and automatic nature of the collection? Can it be two of those things? Can it just be one of those things? So we're trying to look at it in every aspect, in terms of whether it's a tower dump, where they say, "Something happened in this area, and we want to get the information on all the cell phones or devices that have connected to this cell tower within this period of time," or is it a geofence warrant, or is it some other way that they're gathering it to try and take it and start to apply these? Of course, one of the high ones on the hit list, they looked at historical cell site location in Carpenter, but how does it apply to real-time tracking?

Jumana Musa:
So, I mean, I think it's really important to think creatively about all the places this may apply. Of course, the end goal is what Cindy said. It is to get rid of the third party doctrine, which really has limited utility in the digital age. So I think in that context, really sort of for us in this space, that is one of the end games, but really, it's about trying to carve out what privacy means in the digital age, right, the question of, do you have privacy in public? It was a very different assessment years ago, when you said, "Of course you don't. You're out, and you're walking around. People can see you." But now if you're out and you're walking around and your phone can track you and you're showing up in surveillance cameras, and maybe they're connected to face recognition and something else, it's sort of gotten to be such a comprehensive surveillance that we really need to fight to claw back what privacy means, what privacy is protected, and how we can go about our lives in a way that is free of government intrusion.

Cindy Cohn:
Yep. Thank you so much, Jumana. Of course, EFF will be with you guys every step of the way. One of the big things that NACDL does is make sure that all of the defense attorneys across the country, who you might need someday, have access to these arguments and these skills. We love working with you, and we're all together in this effort to try to keep chipping away at this doctrine until it is just a tiny little remnant of another time when phone records were not nearly as invasive as they are now. So thank you so, so much for taking the time to talk with us. Third party doctrine, definitely need to fix it. Now we know why.

Jumana Musa:
Well, thank you for having me. I'll say it's a mutual love affair. We are frequently referring people to EFF and utilizing the information that you all put out. So thank you very much.

Danny O'Brien:
Thank you.

Danny O'Brien:
Okay, I found that really fascinating. I think one of the bits that leapt out for me is how, actually, technology, by removing friction, by making particular processes easier, including getting access to this data, actually transforms how invasive it can become, with the government being able to just kind of press a few buttons and then pull out as much metadata as it wants without a warrant.

Cindy Cohn:
Yeah, I think that's right. I mean, one of the reasons why we really want to get rid of the third party doctrine is because we need law enforcement to basically do the work and make the showing before they get access to this information, because it's far more revealing than it was when this doctrine was first created, and there's a lot more of it.

Cindy Cohn:
One of the things that Jumana mentioned that I think is important as well is that she said sometimes we may need to get more than a warrant. A warrant might not be enough. Lawyers like us are talking a lot more, and there are situations already when you have to get a super warrant, which is basically much more limited in the crimes that it can apply to, and the data has to be important to the crime. So I think we're beginning to move a lot of things towards warrants, but I think also in this age, when so much of our information is available and in the hands of third parties, we might need to think beyond warrants as well. I think that was a good point she made.

Danny O'Brien:
I think the other thing that comes out of this conversation is that ... You pointed this out, that pervasive surveillance is not a theoretical threat. It's in particular a threat that is already being felt by disenfranchised groups, right? Groups that don't get to speak up traditionally in the sort of political debate, and that includes, in the United States, communities of color and so forth.

Cindy Cohn:
Yeah. I mean, I think it's really clear that if we care about Black Lives Matter, that means we have to get the Fourth Amendment right, because people of color are disproportionately targeted by this kind of surveillance. Even if they're not targeted, they're disproportionally impacted by it.

Danny O'Brien:
That's a really good point. I think it's even more important when we realize that the presumption of privacy, I think, has been flipped because of the amount of metadata that is collected about us. If I walked down the street in the 1970s, I think it would have been pretty unusual for me to be followed around by someone or data about me to be collected in any way. Now every moment we spend in public is surveilled and recorded in some way. That data is just sitting there, waiting to be accessed by a company, but then indirectly by the government asking that company to hand over the data.

Cindy Cohn:
Yeah, I think that this is one of the situations in which the realities of the world have really changed and a doctrine that used to be kind of annoying and innocuous has become a really, really big problem. I think the fundamental problem at the bottom of the third party doctrine is it confuses secrecy and privacy. It really takes the position that if even one other entity knows this, something about you, in these instances, your ISP in order to make sure that your phone rings where you are, that that somehow waives your Fourth Amendment rights and is equated with you kind of taking out a billboard and putting it on the side of the highway. But secrecy and privacy are not the same things, and there are many situations in which we need to stand up for privacy, even when something isn't completely secret. To me, I think the third party doctrine is one of those situations.

Danny O'Brien:
So are you optimistic or pessimistic about where we'll be with the third party doctrine?

Cindy Cohn:
I think this was a hopeful conversation, and it was a hopeful conversation because, as Jumana laid out, we have three solid Supreme Court decisions moving away from this kind of absolute rule that the third party doctrine had represented, or at least had been argued by the Justice Department. It's a judge-made doctrine. The third party doctrine doesn't exist in statute. So the judges can take it away, can decide that it is no longer applicable. Again, we've got three solid Supreme Court decisions where the third party doctrine was argued by the government on the other side, and the Supreme Court rejected that argument and said, "No, we need to care about privacy more than that." So that's very hopeful to me, and it's why I think that the third party doctrine is one of the things that needs to be fixed on the Internet, but it's the one where I'm quite hopeful that we're going to get it fixed.

Danny O'Brien:
Well, I always like to end on an optimistic note. So I think I'll declare that's all we've got time for. See you next time.

Danny O'Brien:
Thanks again for joining us. If you'd like to support the Electronic Frontier Foundation, here are three things you can do today. One, you can hit subscribe in your podcast player of choice, and if you have time, please leave a review. It helps more people find us. Two, please share on social media and with your friends and family. Three, please visit eff.org/podcasts, where you will find more episodes, learn about these issues, you can donate to become a member, and lots more. Members are the only reason we can do this work. Plus, you can get cool stuff like an EFF hat or an EFF hoodie or even a camera cover for your laptop.

Danny O'Brien:
Thanks once again for joining us, and if you have any feedback on this episode, please email podcast@eff.org. We do read every email. This podcast was produced by the Electronic Frontier Foundation with help from Stuga Studios. Music by Nat Keefe of Beat Mower. 


This work is licensed under a Creative Commons Attribution 4.0 International License

rainey Reitman

GitHub Reinstates youtube-dl After RIAA’s Abuse of the DMCA

1 week 5 days ago

GitHub recently reinstated the repository for youtube-dl, a popular free software tool for downloading videos from YouTube and other user-uploaded video platforms. GitHub had taken down the repository last month after the Recording Industry Association of America (RIAA) abused the Digital Millennium Copyright Act’s notice-and-takedown procedure to pressure GitHub to remove it.

By shoehorning DMCA 1201 into the notice-and-takedown process, RIAA potentially sets a very dangerous precedent.

The removal of youtube-dl’s source code caused an outcry. The tool is used by journalists and activists to save eyewitness videos, by YouTubers to save backup copies of their own uploaded videos, and by people with slow or unreliable network connections to download videos in high resolution and watch them without buffering interruptions, to name just a few of the uses we’ve heard about. youtube-dl is a lot like the videocassette recorders of decades past: a flexible tool for saving personal copies of video that’s already accessible to the public.

Under the DMCA, an online platform like GitHub is not responsible for the allegedly infringing activities of its users so long as that platform follows certain rules, including complying when a copyright holder asks it to take down infringing material. But unlike most DMCA takedowns, youtube-dl contained no material belonging to the RIAA or its member companies. RIAA’s argument hinges on a separate section of the DMCA, Section 1201, which says that it’s illegal to bypass a digital lock in order to access or modify a copyrighted work—or to provide tools to others that bypass digital locks. The RIAA argued that since youtube-dl could be used to infringe on copyrighted music, GitHub must remove it. By shoehorning DMCA 1201 into the notice-and-takedown process, RIAA potentially sets a very dangerous precedent, making it extremely easy for copyright holders to remove software tools from the Internet based only on the argument that those tools could be used for copyright infringement.

youtube-dl’s code did contain the titles and URLs of certain commercial music videos as part of a list of videos used to test the tool’s functionality. Of course, simply mentioning a video’s URL is not an infringement, nor is streaming a few seconds of that video to test a tool’s functionality. As EFF explained in a letter to GitHub on behalf of youtube-dl’s team of maintainers:

First, youtube-dl does not infringe or encourage the infringement of any copyrighted works, and its references to copyrighted songs in its unit tests are a fair use. Nevertheless, youtube-dl’s maintainers are replacing these references. Second, youtube-dl does not violate Section 1201 of the DMCA because it does not “circumvent” any technical protection measures on YouTube videos.

Fortunately, after receiving EFF’s letter, GitHub has reversed course. From GitHub’s announcement:

Although we did initially take the project down, we understand that just because code can be used to access copyrighted works doesn’t mean it can’t also be used to access works in non-infringing ways. We also understood that this project’s code has many legitimate purposes, including changing playback speeds for accessibility, preserving evidence in the fight for human rights, aiding journalists in fact-checking, and downloading Creative Commons-licensed or public domain videos. When we see it is possible to modify a project to remove allegedly infringing content, we give the owners a chance to fix problems before we take content down. If not, they can always respond to the notification disabling the repository and offer to make changes, or file a counter notice.

Again, although our clients chose to remove the references to the specific videos that GitHub requested, including them did not constitute copyright infringement.

RIAA’s letter accused youtube-dl of being a “circumvention device” that bypasses a digital lock protected by section 1201 of the DMCA. Responding on behalf of the developers, EFF explained that the “signature” code used by YouTube (what RIAA calls a “rolling cipher”) isn’t a protected digital lock—and if it were, youtube-dl doesn’t “circumvent” it but simply uses it as intended. For some videos, YouTube embeds a block of JavaScript code in its player pages. That code calculates a number called “sig” and sends this number back to YouTube’s video servers as part of signaling the actual video stream to begin. Any client software that can interpret JavaScript can run YouTube’s “signature” code and produce the right response, whether it’s a standard web browser or youtube-dl. The actual video stream isn’t encrypted with any DRM scheme like the ones used by subscription video sites.
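
To illustrate the point that the “signature” code is ordinary JavaScript that any client can execute, here is a rough, hypothetical sketch in Python using the js2py interpreter. The player snippet and the function name decodeSig are invented for the example, and this is not youtube-dl’s actual implementation (which ships its own JavaScript interpreter); the principle is simply that a non-browser client can run the same code a browser runs and return the same answer.

    # Hypothetical sketch: evaluating a page's "signature" JavaScript the way
    # a browser would. The snippet and function name are invented for
    # illustration; they are not YouTube's or youtube-dl's actual code.
    import js2py

    # Pretend this was extracted from the player JavaScript served with the page.
    player_js = """
    function decodeSig(s) {
        // e.g. reverse the scrambled value and drop the first character
        return s.split('').reverse().join('').slice(1);
    }
    """

    ctx = js2py.EvalJs()
    ctx.execute(player_js)

    scrambled = "abc123xyz"          # placeholder for the scrambled value on the page
    sig = ctx.decodeSig(scrambled)   # run the page's own code, as a browser does

    # The client sends `sig` back when requesting the video stream, which is
    # then served without any DRM encryption.
    print(sig)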

It’s no secret that EFF doesn’t like Section 1201 and the ways it’s used to shut down innovation and competition. In fact, we’re challenging the law in court. But in the case of youtube-dl, Section 1201 doesn’t apply at all. GitHub agreed, and put the repository back online with its full functionality intact.

GitHub recognized that accusations about software projects violating DMCA 1201 don’t make valid takedowns under section 512. GitHub committed to have technical experts review Section 1201-related accusations and allow code repository owners to dispute those accusations before taking down their code. This is a strong commitment to the rights of GitHub’s developer community, and we hope it sets an example.

EFF is proud to have helped the free software community keep this important tool online. Please consider celebrating this victory with us by making a donation to EFF.

Donate to EFF

Defend Innovation and Free Speech

Elliot Harmon

Computer Security Experts Urge White House to Keep Politics Out of Election Security

1 week 5 days ago
Elections Are Partisan Affairs - Election Security Isn't

San Francisco - The Electronic Frontier Foundation (EFF) has joined more than three dozen cybersecurity experts and professional security organizations in calling for the White House to keep politics out of securing this month’s election. Election security officials and computer security experts must be able to tell the truth about the security of Americans’ votes without fear of retribution.

The experts and organizations were moved to action after reports that the White House is pressuring the Cybersecurity and Infrastructure Security Agency (CISA), and its director Chris Krebs, to change CISA’s reports on election security. CISA has pushed back against baseless allegations of voter fraud and security problems—including many promoted by President Trump—through its “Rumor Control” website, and recently published a statement renouncing “unfounded claims and opportunities for misinformation about the process of our elections.”

“Elections are partisan by their very nature, but the workings of the machinery that helps us cast and count votes should be completely independent,” said EFF Deputy Executive Director Kurt Opsahl. “Election security is vital to our right to choose our government, and we can’t let the White House stop experts from telling the truth about where we stand.”

Just yesterday, another group of cybersecurity and election security experts issued an open letter, warning that claims of voter fraud in this month’s election are “unsubstantiated or are technically incoherent.” Some of today’s letter signers also joined yesterday’s effort.

“Voting is the cornerstone of our democracy. Americans must be able to trust the experts when they say there is—or isn’t—a problem,” said Opsahl. “The White House should reverse course and support election security, as well as the processes and people who safeguard our vote.”

For the full open letter:
https://www.eff.org/deeplinks/2020/11/elections-are-partisan-affairs-election-security-isnt

Contact: Kurt Opsahl, Deputy Executive Director and General Counsel, kurt@eff.org
Rebecca Jeschke

Elections Are Partisan Affairs. Election Security Isn't.

1 week 6 days ago

An Open Letter on Election Security

Voting is the cornerstone of our democracy. And since computers are deeply involved in all segments of voting at this point, computer security is vital to the protection of this fundamental right.  Everyone needs to be able to trust that the critical infrastructure systems we rely upon to safeguard our votes are defended, that problems are transparently identified, assessed and addressed, and that misinformation about election security is quickly and effectively refuted.  

While the work is not finished, we have made progress in making our elections more secure, and ensuring that problems are found and corrected. Paper ballots and risk-limiting audits have become more common.  Voting security experts have made great strides in moving elections to a more robust system that relies less on the hope of perfect software and systems.

This requires keeping partisan politics away from cybersecurity issues arising from elections. Obviously elections themselves are partisan. But the machinery of them should not be.  And the transparent assessment of potential problems or the assessment of allegations of security failure—even when they could affect the outcome of an election—must be free of partisan pressures.  Bottom line: election security officials and computer security experts must be able to do their jobs without fear of retribution for finding and publicly stating the truth about the security and integrity of the election. 

We are profoundly disturbed by reports that the White House is pressuring Chris Krebs, director of the Cybersecurity and Infrastructure Security Agency (CISA), to change CISA’s reports on election security. This comes just after Bryan Ware, assistant director for cybersecurity at CISA, resigned at the White House’s request. Director Krebs has said he expects to be fired but has refused to join the effort to cast doubt on the systems in place to support election technology and the election officials who run it. (Update Nov 18: Chris Krebs was fired on November 17.) Instead, CISA published a joint statement renouncing “unfounded claims and opportunities for misinformation about the process of our elections.”  The White House pressure threatens to introduce partisanship, and unfounded allegations, into the expert, nonpartisan, evaluation of election security. 

We urge the White House to reverse course and support election security and the processes and people necessary to safeguard our vote.  

Signed,

(Organizations and companies)

Electronic Frontier Foundation

Bugcrowd

Center for Democracy & Technology

Disclose.io

ICS Village

SCYTHE, Inc.

Verified Voting

(Affiliations are for identification purposes only; listed alphabetically by surname.)

William T. Adler, Senior Technologist, Elections & Democracy, Center for Democracy & Technology
Matt Blaze, McDevitt Chair of Computer Science and Law, Georgetown University
Jeff Bleich, U.S. Ambassador to Australia (ret.)
Jake Braun, Executive Director, University of Chicago Harris Cyber Policy Initiative
Graham Brookie, Director and Managing Editor, Digital Forensic Research Lab, The Atlantic Council
Emerson T. Brooking, Resident Fellow, Digital Forensic Research Lab of the Atlantic Council.
Duncan Buell, NCR Professor of Computer Science and Engineering, University of South Carolina
Jack Cable, Independent Security Researcher.
Joel Cardella, Director, Product & Software Security, Thermo Fisher Scientific
Stephen Checkoway, Assistant Professor of Computer Science, Oberlin College
Larry Diamond, Senior Fellow, Hoover Institution and Principal Investigator, Global Digital Policy Incubator, Stanford University
Renée DiResta, Research Manager, Stanford Internet Observatory
Kimber Dowsett, Director of Security Engineering, Truss
Joan Donovan, Harvard Kennedy’s Shorenstein Center on Media, Politics and Public Policy
Casey Ellis, Chairman/Founder/CTO, Bugcrowd
David J. Farber, Distinguished Career Professor of Computer Science and Public Policy, Carnegie Mellon University
Michael Fischer, Professor of Computer Science, Yale University
Camille François, Chief Innovation Officer, Graphika
The Grugq, Independent Security Researcher
Joseph Lorenzo Hall, Senior Vice President for a Strong Internet at The Internet Society (ISOC)
Candice Hoke, Founding Co-Director, Center for Cybersecurity & Privacy Protection, Cleveland State University
David Jefferson, Computer Scientist, Lawrence Livermore National Laboratory (retired)
Douglas W. Jones, Associate Professor of Computer Science, University of Iowa
Lou Katz, Commissioner, Oakland Privacy Advisory Commission
Joseph Kiniry, Principal Scientist, Galois, CEO and Chief Scientist, Free & Fair
Katie Moussouris, CEO, LutaSecurity
Peter G. Neumann, Chief Scientist, SRI International Computer Science Lab
Brandie M. Nonnecke, Director, CITRIS Policy Lab, CITRIS and the Banatao Institute, UC Berkeley
Sean O’Connor, Threat Intelligence Researcher
Marc Rogers, Director of Cybersecurity, Okta
Aviel D. Rubin, Professor of Computer Science, Johns Hopkins University
John E. Savage, An Wang Emeritus Professor of Computer Science, Brown University
Bruce Schneier, Cyber Project Fellow and Lecturer, Harvard Kennedy School
Barbara Simons, IBM Research (retired)
Alex Stamos, Director, Stanford Internet Observatory
Philip B. Stark, Associate Dean, Mathematical and Physical Sciences, University of California, Berkeley
Camille Stewart, Cyber Fellow, Harvard Belfer Center
Megan Stifel, Executive Director, Americas; and Director, Craig Newmark Philanthropies Trustworthy Internet and Democracy Program, Global Cyber Alliance
Sara-Jayne Terp, CEO Bodacea Light Research
Cris Thomas (Space Rogue), Global Strategy Lead, IBM X-Force Red
Maurice Turner, Election Security Expert
Poorvi L. Vora, Professor of Computer Science, The George Washington University
Dan S. Wallach, Professor, Departments of Computer Science and Electrical & Computer Engineering, Rice Scholar, Baker Institute for Public Policy, Rice University
Nate Warfield, Security Researcher
Elizabeth Wharton, Chief of Staff, SCYTHE, Inc.
Tarah Wheeler, Belfer Center Cyber Fellow, Harvard University Kennedy School, and member EFF Advisory Board
Beau Woods, Founder/CEO of Stratigos Security and Cyber Safety Innovation Fellow at the Atlantic Council.
Daniel M. Zimmerman, Principal Researcher, Galois and Principled Computer Scientist, Free & Fair

(Updated November 18 to add news of Chris Krebs being fired, and to add six signers) 

Kurt Opsahl

EFF Publishes New Research on Real-Time Crime Centers in the U.S.

1 week 6 days ago

EFF has published a new report, "Surveillance Compounded: Real-Time Crime Centers in the United States," which profiles seven surveillance hubs operated by local law enforcement, plus data on dozens of others scattered across the country. 

Researched and written in collaboration with students at the Reynolds School of Journalism at the University of Nevada, Reno, the report focuses on the growth of real-time crime centers (RTCCs). These police facilities serve as central nodes and control rooms for a variety of surveillance technologies, including automated license plate readers, gunshot detection, and predictive policing. Perhaps the most defining characteristic of an RTCC is a network of video cameras installed in the community that analysts watch on a wall of monitors, often in combination with sophisticated, automated analytical software. 

As we write in the report: 

RTCCs are so similar to fusion centers that the terms are sometimes used interchangeably. We distinguish between the two: fusion centers are technology command centers that operate at a larger, regional level, are typically controlled by a state-level organization, and are formally part of the U.S. Department of Homeland Security's fusion center network. They also focus on distributing information about national security "threats," which are often broadly interpreted. RTCCs generally operate at the municipal or county level and focus on a broad spectrum of public safety issues, from car thefts to gun crime to situational awareness at public events. 

The term “real-time” is also somewhat misleading: while there is often a focus on accessing data in real time and relaying it to first responders, many law enforcement agencies use RTCCs to mine historical data to make decisions about the future through "predictive policing," a controversial and largely unproven strategy to identify places where crime could occur or people who might commit crimes.

We identified more than 80 RTCCs across 29 U.S. states, with the largest number concentrated in New York and Florida. The report includes case studies of RTCCs in Albuquerque, NM; Atlanta, GA; Detroit, MI; Miami Gardens, FL; New Orleans, LA; Ogden, UT; and Sacramento, CA. We have also included a profile of the Fresno Real-Time Crime Center, which was suspended prior to publication of our report. These profiles break down the costs, what technology is installed in neighborhoods, and what type of equipment and software is accessible to RTCC staff. We also document controversies that have arisen in response to the creation of these RTCCs. 

"Surveillance Compounded" is part of the Atlas of Surveillance project, an ongoing collaboration with the Reynolds School of Journalism that aims to build a central database and map of police technologies using open source intelligence. This is the second such report, following the 2019 report "Atlas of Surveillance: Southwestern Border Communities," which documented surveillance technology in the 23 U.S. counties that border Mexico. 

As of November 15, 2020, the Atlas contains more than 6,100 data points related to automated license plate readers, drones, body-worn cameras, cell-site simulators, and other law enforcement technologies. 

Visit the Atlas of Surveillance.

Dave Maass

EFF Urges Universities to Commit to Transparency and Privacy Protections For COVID-19 Tracing Apps

1 week 6 days ago
Campus Communities Shouldn’t Be Forced to Use Apps They Can’t Trust

San Francisco—The Electronic Frontier Foundation (EFF) called on universities that have launched or plan to launch COVID-19 tracking technologies—which sometimes collect sensitive data from users’ devices and lack adequate transparency or privacy protections—to make them entirely voluntary for students and disclose details about data collection practices.

Monitoring public health during the pandemic is important to keep communities safe and reduce the risk of transmission. But requiring students, faculty, and staff returning to campus to commit to using unspecified tracking apps that record their every movement, and failing to inform them about what personal data is being collected, how it’s being used, and with whom it’s being shared, is the wrong way to go about it.

EFF is urging university officials to commit to its University App Mandate Pledge, a set of seven transparency- and privacy-enhancing policies that will help ensure a higher standard of protection for the health and personal information of students, faculty, and staff.

In committing to EFF’s pledge, university officials are agreeing to make COVID-19 apps opt-in, disclose app vendor contracts, disclose data collection and security practices, reveal the entities inside and outside the school that have access to the data, tell users if the university or app vendors are giving law enforcement access to data, and stay on top of any vulnerabilities found in the technologies.

“The success of public health efforts depends on community participation, and if students are being forced to download COVID-19 apps they don’t trust to their phones, and are being kept in the dark about who’s collecting their personal information and whether it’s being shared with law enforcement, they’re not going to want to participate,” said EFF Grassroots Advocacy Organizer Rory Mir. “University leaders should support the app mandate pledge and show that they are committed to respecting the privacy, security, and consent of everyone that is returning to campus.”

Universities have rushed to adopt apps and devices to monitor public health, with some mandating that students download apps that track their locations in real time or face suspension. Location data using GPS, for example, can reveal highly personal information about people, such as when they attend a protest or go to a bar, where their friends live, and what groups they associate with. It should be up to users to decide whether to download and use a COVID-19-related app, and up to universities and public health authorities to communicate the technology’s benefits, protections, and risks.

For the pledge:
https://www.eff.org/app-mandate/pledge

For more about COVID-19 and digital rights:
https://www.eff.org/issues/covid-19

Contact: Rory Mir, Grassroots Advocacy Organizer, rory@eff.org
Karen Gullo

InternetLab’s Report Sets Direction for Telecom Privacy in Brazil

1 week 6 days ago

Five years have passed since InternetLab published “Quem Defende Seus Dados?" (“Who defends your data?"), a report that holds ISPs accountable for their privacy and data protection policies in Brazil. Since then, major Brazilian telecom companies have provided more transparency about their data protection and privacy policies, a shift primarily fueled by Brazil’s new data protection law. 

InternetLab’s fifth annual report, launched today, identifies the steps companies should take to protect telecom users’ privacy and personal data in Brazil. This edition, covering eight telecom providers for mobile and broadband services, shows TIM leading the way, with Vivo and Oi close behind. TIM scored high marks for defending privacy in public policy debates and the judiciary, for publishing transparency reports, and for its transparent data protection policies. In contrast, Nextel came in last place, as it did in 2019, far behind the rest of its competitors. Nextel did, however, take a step forward in defending privacy in the judiciary, in contrast to 2019, when it received no stars in any category.

In stark contrast to InternetLab’s first report in 2016, half of the covered providers (Claro, NET, TIM, and Algar) have made significant progress in the data protection category. After being poorly rated in 2019, Algar obtained a full star this year in this category, a positive change as Brazil starts embracing its new GDPR-inspired data protection law. 

This year’s report also assessed which companies stood out in publicly defending privacy against unprecedented government pressure to access telecom data during the COVID-19 pandemic. For context, Brazil’s Supreme Court suspended the government's provisional measure 954/2020, which ordered telecom providers to share their customers' data with the Brazilian Institute of Geography and Statistics (IBGE) during the health emergency. The court ruled that the measure was overbroad and failed to clarify the purpose of the request. Oi called upon IBGE to sign a term of responsibility before disclosing the data.

Unfortunately, telecom providers also signed non-transparent data-sharing agreements with states and municipalities to help public authorities fight the COVID-19 pandemic. Here, Vivo and TIM publicly committed in the media that only anonymous and aggregated data, via heat maps and pivot tables, would be shared with the government. In São Paulo, for example, the deal gives public authorities access to a data visualization tool that includes anonymous and aggregated location data to measure the effectiveness of social distancing orders. After a São Paulo court ruled that the agreement should be public, many telecom providers published the relevant policies on their sites, including TIM, Vivo, Claro, NET, and Oi. The companies' policies, however, did not specify the security practices and techniques adopted to ensure the shared data's anonymity. In the future, companies should publish their policies proactively and immediately, not after public pressure.

Most providers continue to seriously lag on notifying users when the government requests their data. As we’ve explained, no Brazilian law compels either the State or companies to notify targets of surveillance. Judges may require notice, and companies are not prevented from notifying users when secrecy is not legally or judicially required. Prior user notice is essential to restrict improper government data requests of service providers. It is usually impossible for the user to know that the government demanded their data unless it leads to criminal charges. As a result, the innocent are least likely to discover the violation of their privacy rights.

For the first time, the report also evaluates whether companies publish their own Data Protection Impact Assessments; unfortunately, none did so. Also new this year, in the face of controversy over the interpretation of laws compelling companies to disclose data to the government, the report looks at companies’ transparency regarding their legal understanding of such laws.

Overall, this year's report evaluates providers on six criteria: data protection policies, law enforcement guidelines, defending users in the judiciary, defending privacy in policy debates or the media, transparency reports and data protection impact assessments, and user notification. The full report is available in Portuguese and English. These are the main results:

Data protection policies

Some providers are now telling users what data they collect about them, how long the information is kept, and with whom they share it (although frequently in an overly generic way). In some cases, providers notify users about changes in their privacy policies. Nathalie Fragoso, InternetLab’s Head of Research on Privacy and Surveillance, told EFF:

In contrast to 2016, there has been a significant advance in the content and form of privacy and data protection policies. They are now complete and accessible. However, information on data deletion is often missing, and changes in their privacy policies are rarely proactively reported. While Claro and TIM send messages to their users about their privacy policy changes, Oi only tells users that any change will be available on their website. Far behind is Vivo, which reserves the right to change its policy at any time and does not commit to notifying users of such updates. 

The report also sheds light on how providers respond to users’ requests to access their data, and it evaluates the effectiveness of such responses. Nathalie Fragoso told EFF:

We sent requests for our personal data to all the providers surveyed in this report, and gave them one month to respond. Our requests included any information relating to us. All providers, however, complied by disclosing only our subscriber information, except Claro and Oi, which failed to do so. We also learned that Algar and TIM took additional steps to certify the requester's identity before disclosing the data, a good practice that deserves to be highlighted. 

Defending users’ privacy in the media or public policy debates

This year, Quem Defende Seus Dados? assesses if providers defended users’ privacy and data protection in public policy debates or the media. The first parameter evaluates the companies’ public contributions to congressional discussions and public policy consultations around data protection.

Even though Vivo wrote a public submission to the "National Strategy for Artificial Intelligence” consultation, it made no concrete, normative or technical proposals to protect its customers. On the other hand, InternetLab found that TIM's policy statements took a clear and robust pro-privacy stand on the same consultation. TIM calls for transparency and an explanation about AI systems. It also recommends providing sufficient information to those affected by an AI system to understand the reasons behind the results and allow those adversely affected to contest such results.

Law enforcement guidelines

Most providers seriously lag in publishing detailed guidelines for government data demands. Vivo Broadband and Mobile lead the way in this category; however, none obtained a full star. This category includes five parameters, which you can read about in more detail in the report. Below we summarize two that deserve attention:

Identifying which competent authorities can demand subscriber data without a court order

Brazil's Civil Rights Framework generally requires a court order to access communications data, including location data and connection logs. It makes an exception when "competent administrative authorities" demand subscriber data as authorized by law. There is controversy about which government officials fall within the term “competent administrative authorities.” Thus, the report looks closely at whether each company publicly explains its interpretation of this legal term, and if so, how. The report also examines whether the companies publicly explain which kinds of data they will disclose without a warrant and which they will only disclose with a warrant.

Vivo Broadband and Mobile are far ahead of the other companies. According to its policies, Vivo discloses subscriber data only upon request from representatives of the Public Prosecutor's Office, police authorities (police commissioners), and judges. Its policies say it makes connection logs and location data available only by court order.

Claro and TIM have mixed results. Claro tells users that it discloses subscriber data to competent authorities, but fails to identify them. Likewise, TIM does not pinpoint the competent authorities that it believes can request subscriber data without a court order. However, TIM promises to comply with legislation in making “data and communications” available to “competent authorities.”

InternetLab recommends that TIM expressly identify these authorities. Oi tells users that it shares data with competent authorities and names them. However, the report shows that the company fails to clarify which of the cited competent authorities do not require a court order and which need one. Algar and Nextel scored zero stars for their law enforcement guidelines. There is still much more that all companies can do in this category. 

Identifying which crimes justify disclosure of subscriber data without a warrant

As we explained in our legal FAQs for Brazil, Brazilian law authorizes prosecutors and police officers (usually the Chief of the Civil Police) to access subscriber data without a warrant to investigate money laundering and criminal organizations. The Criminal Procedure Code allows equal access for human trafficking, kidnapping, organ trafficking, and sexual exploitation crimes. Unfortunately, police authorities have claimed the power to access subscriber data without a warrant when investigating other crimes as well. As we’ve explained, they improperly rely on a general provision that regulates criminal investigations by the Civil Police Chief. 

We are happy that InternetLab challenges this erroneous legal interpretation of police powers by assessing companies’ responses to such requests. Here again, in the face of controversy over the interpretation of the law, InternetLab calls for corporate transparency about how companies interpret it.

InternetLab's results show that NET, Oi Mobile, TIM Broadband, TIM Mobile, Nextel, Algar, and Sky failed to identify the crimes for which competent authorities may obtain subscriber records without a warrant. 

Conclusion

Given this year's results, InternetLab encourages companies to improve their channels for data access requests to facilitate full access to one's own data. It recommends that companies adopt proactive user notification practices when changing their privacy policies. It also encourages them to publish law enforcement guidelines that spell out when they will disclose subscriber data, location logs, and connection records, and for which crimes. Companies should be transparent about their legal interpretation of laws compelling them to disclose data to the government, and should be clear and precise in distinguishing judicial orders from administrative requests for data. In the face of exceptional circumstances, such as the COVID-19 pandemic, InternetLab calls upon companies to take an active transparency approach regarding possible collaboration and data-sharing agreements with the State, and to ensure that such exceptional measures are carried out in the public interest, limited in time, and proportionate.

Finally, InternetLab encourages companies to publish comprehensive transparency reports and to notify users when disclosing their data in response to law enforcement demands. Through the ¿Quién Defiende Tus Datos? reports, a project coordinated by EFF, local organizations have been comparing companies' commitments to transparency and user privacy in different Latin American countries and Spain. Today’s InternetLab report on Brazil joins similar reports earlier this year from Fundación Karisma in Colombia, ADC in Argentina, Hiperderecho in Peru, ETICAS in Spain, IPANDETEC in Panama, and TEDIC in Paraguay. New editions in Nicaragua are on their way. All of these critical reports spot which companies stand with their users and which fall short.

Katitza Rodriguez

End University Mandates for COVID Tech

1 week 6 days ago

Since the COVID-19 crisis began, many universities have looked to novel technologies to assist their efforts to retain in-person operations. Most prominent are untested contact tracing and notification applications or devices. While universities must commit to public health, too often these programs invade privacy and lack transparency. To make matters worse, some universities mandate these technologies for students, faculty, staff, and even visitors.  As we’ve stated before, forcing people to install COVID-related technology on their personal devices is the wrong call.            

This is why EFF is launching our new campaign: End University App Mandates. Please help us call on university officials to publicly commit to the University App Mandate Pledge (UAMP). It contains seven transparency- and privacy-enhancing policies that university officials must adopt to protect the privacy and security of their community members and to ensure transparency. Whether you are a student, a worker, a community member, or an alum, we need your support in defending privacy on campus.

TAKE ACTION

CALL ON YOUR UNIVERSITY TO TAKE THE PLEDGE

Surveillance Is No Cure-All 

Technology is not a silver bullet for solving a public health crisis. If COVID-related apps or devices will help at all, they must be part of a larger public health strategy, including participation and trust from the affected community. In other words, even the best contact tracing and notification software cannot be a substitute for regular testing, PPE, access to care, and interview-based contact tracing. And no public health strategy will work if coercive and secretive measures undermine trust between the educational community and university administrators.

Beyond the invasion of our privacy, public health measures that use digital surveillance also can chill our free speech. These programs, and the ways they are implemented and enforced, also can have a disproportionate impact on vulnerable groups. This is why university leadership can encourage participation in these measures, but ultimately these programs must remain voluntary.  

Users can’t offer their informed consent to the app or device if it is a privacy black box. For example, leadership must make it clear whether any collected information can be accessed by law enforcement, and must disclose the privacy policies of external vendors. 

Universities must also outline exactly what precautions and protocols they are implementing to protect their community from data breaches. Novel technologies created in rapid response to a crisis have a greater potential for security vulnerabilities, as they have not fully received the sort of rigorous testing that would happen in a normal development process. This makes it even more essential to open these programs to public scrutiny and allow individuals to assess the risks.

How You Can Help

There are 4,000 colleges and universities in the United States, all impacted by the current pandemic, and a vast variety of tools and policies are being implemented at educational institutions across the country. 

So we are targeting every college and university with our campaign. Every time a college or university receives 100 new petitioners, we will deliver the petition letter to the institution’s leadership. We will also work with local advocates to implement these necessary and urgent changes.

To make this campaign possible, we’re turning to our nation-wide network of grassroots and community activists in the Electronic Frontier Alliance and beyond. If you are part of a student group or community group potentially impacted by these app mandate policies, please sign the petition and consider applying to join the Alliance. We want to work with you to push leadership to adopt this pledge through direct action, and to assist your local efforts in defending privacy on college campuses.

TAKE ACTION

CALL ON YOUR UNIVERSITY TO TAKE THE PLEDGE

Rory Mir

Don’t Blame Section 230 for Big Tech’s Failures. Blame Big Tech.

1 week 6 days ago

Next time you hear someone blame Section 230 for a problem with social media platforms, ask yourself two questions: first, was this problem actually caused by Section 230? Second, would weakening Section 230 solve the problem? Politicians and commentators on both sides of the aisle frequently blame Section 230 for big tech companies’ failures, but their reform proposals wouldn’t actually address the problems they attribute to Big Tech. If lawmakers are concerned about large social media platforms’ outsized influence on the world of online speech, they ought to confront the lack of meaningful competition among those platforms and the ways in which those platforms fail to let users control or even see how they’re using our data. Undermining Section 230 won’t fix Twitter and Facebook; in fact, it risks making matters worse by further insulating big players from competition and disruption.

While large tech companies might clamor for regulations that would hamstring their competitors, they’re notably silent on reforms that would curb the practices that allow them to dominate the Internet today.

Section 230 says that if you break the law online, you should be the one held responsible, not the website, app, or forum where you said it. Similarly, if you forward an email or even retweet a tweet, you’re protected by Section 230 in the event that that material is found unlawful. It has some exceptions—most notably, that it doesn’t shield platforms from liability under federal criminal law—but at its heart, Section 230 is just common sense: you should be held responsible for your speech online, not the platform that hosted your speech or another party.

Without Section 230, the Internet would be a very different place, one with fewer spaces where we’re all free to speak out and share our opinions. Social media wouldn’t exist—at least in its current form—and neither would important educational and cultural platforms like Wikipedia and the Internet Archive. The legal risk associated with operating such a service would deter any entrepreneur from starting one, let alone a nonprofit.

As commentators of all political stripes have targeted large Internet companies with their ire, it’s become fashionable to blame Section 230 for those companies’ failings. But Section 230 isn’t why five companies dominate the market for speech online, or why the marketing and behavior analysis decisions that guide Big Tech’s practices are so often opaque to users.

The Problem with Social Media Isn’t Politics; It’s Power

A recent Congressional hearing with the heads of Facebook, Twitter, and Google demonstrated the highly politicized nature of today’s criticisms of Big Tech. Republicans scolded the companies for “censoring” and fact-checking conservative speakers while Democrats demanded that they do more to curb misleading and harmful statements.

There’s a nugget of truth in both parties’ criticisms: it’s a problem that just a few tech companies wield immense control over what speakers and messages are allowed online. It’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunity to appeal bad moderation decisions. There’s little hope of a competitor with fairer speech moderation practices taking hold given the big players’ practice of acquiring would-be competitors before they can ever threaten the status quo.

Unfortunately, trying to legislate that platforms moderate “neutrally” would create immense legal risk for any new social media platform—raising, rather than lowering, the barrier to entry for new platforms. Can a platform filter out spam while still maintaining its “neutrality”? What if that spam has a political message? Twitter and Facebook would have the large legal budgets and financial cushions to litigate those questions, but smaller platforms wouldn’t.

We shouldn’t be surprised that Facebook has joined Section 230’s critics: it literally has the most to gain from decimating the law.

Likewise, if Twitter and Facebook faced serious competition, then the decisions they make about how to handle (or not handle) hateful speech or disinformation wouldn’t have nearly the influence they have today on online discourse. If there were twenty major social media platforms, then the decisions that any one of them makes to host, remove, or factcheck the latest misleading post about the election results wouldn’t have the same effect on the public discourse. The Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more permissive.

The hearing showed Congress’ shortsightedness when it comes to regulating large Internet companies. In their drive to use the hearing for their political ends, both parties ignored the factors that led to Twitter, Facebook, and Google’s outsized power, as well as the remedies that could bring competition and choice to the social media space.

Ironically, though calls to reform Section 230 are frequently motivated by disappointment in Big Tech’s speech moderation policies, evidence shows that further reforms to Section 230 would make it more difficult for new entrants to compete with Facebook or Twitter. It shouldn’t escape our attention that Facebook was one of the first tech companies to endorse SESTA/FOSTA, the 2018 law that significantly undermined Section 230’s protections for free speech online, or that Facebook is now leading the charge for further reforms to Section 230 (PDF). Any law that makes it more difficult for a platform to maintain Section 230’s liability shield will also make it more difficult for new startups to compete with Big Tech. (Just weeks after SESTA/FOSTA passed and put multiple dating sites out of business, Facebook announced that it was entering the online dating world.) We shouldn’t be surprised that Facebook has joined Section 230’s critics: it literally has the most to gain from decimating the law.

Remember, speech moderation at scale is hard. It’s one thing for platforms to come to a decision about how to handle divisive posts by a few public figures; it’s quite another for them to create rules affecting everyone’s speech and enforce them consistently and transparently. When platforms err on the side of censorship, marginalized communities are silenced disproportionately. Congress should not try to pass laws dictating how Internet companies should moderate their platforms. Such laws would not pass Constitutional scrutiny, would harden the market for social media platforms from new entrants, and would almost certainly censor innocent people unfairly.

Then How Should Congress Keep Platforms in Check? Some Ideas You Won’t Hear from Big Tech

While large tech companies might clamor for regulations that would hamstring their competitors, they’re notably silent on reforms that would curb the practices that allow them to dominate the Internet today. That’s why EFF recommends that Congress update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion. Before the government approves a merger, the companies should have to prove that the merger would not increase their monopoly power or unduly harm competition.

But even updating antitrust policy is not enough: big tech companies will stop at nothing to protect their black box of behavioral targeting from even a shred of transparency. Facebook recently demonstrated this when it threatened the Ad Observatory, an NYU project to shed light on how the platform was showing different political advertising messages to different segments of its user base. Major social media platforms’ business models thrive on practices that keep users in the dark about what information they collect on us and how it’s used. Decisions about what material (including advertising) to deliver to users are informed by a web of inferences about users, inferences that are usually impossible for users even to see, let alone correct.

Because of the link between social media’s speech moderation policies and its irresponsible management of user data, Congress can’t improve Big Tech’s practices without addressing its surveillance-based business models. And although large tech companies have endorsed changes to Section 230 and may endorse further changes to Section 230 in the future, they will probably never endorse real, comprehensive privacy-protective legislation.

That the Internet Association and its members have fought tooth-and-nail to stop privacy protective legislation while lobbying for bills undermining Section 230 says all you need to know about which type of regulation they see as the greater threat to their bottom line.

Any federal privacy bill must have a private right of action: if a company breaks the law and infringes on our privacy rights, it’s not enough to put a government agency in charge of enforcing the law. Users should have the right to sue the companies, and it should be impossible to sign away those rights in a terms-of-service agreement. The law must also forbid companies from selling privacy as a service: all users must enjoy the same privacy rights regardless of what we’re paying—or being paid—for the service.

The recent fights over the California Consumer Privacy Act serve as a useful example of how tech companies can give lip service to the idea of privacy-protecting legislation while actually insulating themselves from it. After the law passed in 2018, the Internet Association—a trade group representing Big Tech powerhouses like Facebook, Twitter, and Google—spent nearly $176,000 lobbying the California legislature to weaken the law. Most damningly, the IA tried to pass a bill exempting surveillance-based advertising from the practices the law protects consumers against. That’s right: big tech companies tried to pass a law protecting the very invasive advertising practices that helped cement their dominance in the first place. That the Internet Association and its members have fought tooth-and-nail to stop privacy protective legislation while lobbying for bills undermining Section 230 says all you need to know about which type of regulation they see as the greater threat to their bottom line.

Section 230 has become a hot topic for politicians and commentators on both sides of the aisle. Whether it’s Republicans criticizing Big Tech for allegedly censoring conservatives or Democrats alleging that online platforms don’t do enough to fight harmful speech online, both sides seem increasingly convinced that they can change Big Tech’s social media practices by undermining Section 230. But history has shown that making it more difficult for platforms to maintain Section 230 protections will further isolate a few large tech companies from meaningful competition. If Congress wants to keep Big Tech in check, it must address the real problems head-on, passing legislation that will bring competition to Internet platforms and curb the unchecked, opaque user data practices at the heart of social media’s business models.

You’ll never hear Big Tech advocate that.

Elliot Harmon