Why You Can’t Sue Your Broadband Monopoly

2 days 18 hours ago

EFF Legal Fellow Josh Srago co-wrote this blog post

The relationship between the federal judiciary and the executive agencies is a complex one. While Congress makes the laws, it can grant agencies rulemaking authority to interpret them. So long as an agency’s interpretation of any ambiguous language in a statute is reasonable, the courts will defer to the judgment of the agency.

For broadband access, the courts have deferred to the Federal Communications Commission’s (FCC’s) judgment on the proper classification of broadband services twice in the last several years. The courts deferred to the FCC when it classified broadband as a Title II service in the 2015 Open Internet Order, and deferred again when it reclassified broadband as a Title I service in the 2017 Restoring Internet Freedom Order. A Title II service is subject to strict FCC oversight, rules, and regulations; a Title I service is not.

Classification of services isn’t the only place where the courts defer to the FCC’s authority. Two Supreme Court decisions – Verizon Communications, Inc. v. Law Offices of Curtis V. Trinko, LLP, and Credit Suisse Securities (USA) LLC v. Billing – have established the precedent that if an industry is overseen by an expert regulatory agency (such as broadband being overseen by the FCC) then the courts will defer to the agency’s judgment on competition policy because the agency has the particular and specific knowledge to make the best determination.

In other words, civil antitrust law has to overcome multiple barriers before it applies to broadband providers, potentially denying consumers a remedy for monopolization. EFF conducted an in-depth analysis of this issue. For a summary, read on.

The Judicial Deference Circle and How It Blocks Antitrust Enforcement Over Broadband

What this creates is circular deferential reasoning. The FCC has the authority to determine whether broadband will be subject to strict oversight or to none at all, and the courts will defer to the FCC’s determination. If the service is subject to strict rules and regulations, then the FCC has the power to take action if a provider acts in an anti-competitive way. Courts will defer to the FCC’s enforcement powers to ensure that the market is regulated as the agency sees fit.

However, if the FCC determines that the service should not be subject to the strict rules and regulations of Title II and a monopoly broadband provider acts in an anticompetitive way, the courts will still defer to the FCC’s determination as to whether the bad actor is doing something they should not. If the courts did otherwise, then their determination would be in direct conflict with the regulatory regime established by the FCC to ensure that the market is regulated as it sees fit.

What this means is that individuals and municipalities are left without a legal pathway under our antitrust laws when a broadband service provider abuses its monopoly power. A complaint can be filed with the FCC regarding the behavior, but how that complaint is handled turns on the FCC’s discretion, not on whether the conduct is anti-competitive.

A Better Broadband World Under Robust Antitrust Enforcement

The best path forward to resolve this is for Congress to pass legislation that overturns Trinko and Credit Suisse, ensuring that people, or representatives of people such as local governments, can protect their interests and aren’t being taken advantage of by incumbent monopoly broadband providers. But what will that world look like? EFF analyzed that question and theorized how things could improve for consumers. You can read our memo here. As Congress debates reforming antitrust laws with a focus on Big Tech, there are a lot of downstream positive impacts that can stem from such reforms, namely in giving people the ability to sue their broadband monopolist and use the courts to bring in competition.

Ernesto Falcon

Google’s FLoC Is a Terrible Idea

4 days 16 hours ago

The third-party cookie is dying, and Google is trying to create its replacement. 

No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet. 

Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn’t learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious—and potentially the most harmful. 

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting. 

Google’s pitch to privacy advocates is that a world with FLoC (and other elements of the “privacy sandbox”) will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between “old tracking” and “new tracking.” It’s not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads. 

We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web’s biggest mistake. Ahead of us are two possible futures. 

In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them—or leveraged to manipulate them—when they next open a tab. 

In the other, each user’s behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is “democratized” and shared with dozens of nameless actors that take part in serving each web page. Users begin every interaction with a confession: here’s what I’ve been up to this week, please treat me accordingly.

Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.

What is FLoC?

In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT… the list goes on. Seriously. Each of the “bird” proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.

FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user’s browsing habits, then use that information to assign its user to a “cohort” or group. Users with similar browsing habits—for some definition of “similar”—would be grouped into the same cohort. Each user’s browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that’s not a guarantee).

If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.

Google’s proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user’s machine, so there’s no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned to each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
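
To make the mechanism concrete, here is a minimal sketch of a SimHash-style grouping over visited domains. The hash function, the 8-bit output size, and the equal weighting of each domain are illustrative assumptions; Google has not published the exact parameters it would ship.

```typescript
// Minimal SimHash sketch: hash each visited domain, sum signed bit
// contributions, and read off the sign of each position as the cohort bits.
// The hash, bit width, and weighting here are assumptions for illustration.

const COHORT_BITS = 8; // Google's proof of concept used 8-bit cohort IDs

// Simple 32-bit FNV-1a hash; any stable hash would do for this sketch.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

function simHashCohort(visitedDomains: string[]): number {
  const counts = new Array(COHORT_BITS).fill(0);
  for (const domain of visitedDomains) {
    const h = fnv1a(domain);
    for (let bit = 0; bit < COHORT_BITS; bit++) {
      // +1 if this bit of the hash is set, -1 if not
      counts[bit] += (h >>> bit) & 1 ? 1 : -1;
    }
  }
  // The cohort ID is the sign pattern of the sums.
  let cohort = 0;
  for (let bit = 0; bit < COHORT_BITS; bit++) {
    if (counts[bit] > 0) cohort |= 1 << bit;
  }
  return cohort;
}

// Hypothetical example: two users with overlapping browsing histories are
// likely to land in the same (or a nearby) cohort.
console.log(simHashCohort(["news.example", "shoes.example", "forum.example"]));
console.log(simHashCohort(["news.example", "shoes.example", "recipes.example"]));
```

Because the cohort is just the sign pattern of the summed hashes, browsers with overlapping histories tend to produce the same ID. That is exactly the property advertisers want, and the same property that leaks behavioral information.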

According to the proposal, most of the specifics are still up in the air. The draft specification states that a user’s cohort ID will be available via Javascript, but it’s unclear whether there will be any restrictions on who can access it, or whether the ID will be shared in any other ways. FLoC could perform clustering based on URLs or page content instead of domains; it could also use a federated learning-based system (as the name FLoC implies) to generate the groups instead of SimHash. It’s also unclear exactly how many possible cohorts there will be. Google’s experiment used 8-bit cohort identifiers, meaning that there were only 256 possible cohorts. In practice that number could be much higher; the documentation suggests a 16-bit cohort ID comprising 4 hexadecimal characters. The more cohorts there are, the more specific they will be; longer cohort IDs will mean that advertisers learn more about each user’s interests and have an easier time fingerprinting them.
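
For illustration, here is roughly how a page script might read the cohort ID during the origin trial, assuming the document.interestCohort() method described in the draft materials. The method name, return shape, and any access restrictions could still change, so treat this as a hedged sketch rather than a stable API.

```typescript
// Hedged illustration of reading the FLoC cohort ID from a web page.
// Assumes the document.interestCohort() method from the origin trial;
// the exact name, return shape, and restrictions are still in flux.
async function readFlocCohort(): Promise<string | null> {
  // Feature-detect: browsers without FLoC (or with it disabled) won't expose this.
  const doc = document as Document & {
    interestCohort?: () => Promise<{ id: string; version: string }>;
  };
  if (typeof doc.interestCohort !== "function") {
    return null;
  }
  try {
    const cohort = await doc.interestCohort();
    // Any script on the page, including third-party ad scripts, could do
    // the same, which is the cross-context exposure concern discussed below.
    return cohort.id;
  } catch {
    // The promise may reject, for example if the user has opted out.
    return null;
  }
}
```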

One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week’s browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.

New privacy problems

FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks. 

Fingerprinting

The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user’s browser to create a unique, stable identifier for that browser. EFF’s Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others’, the easier it is to fingerprint. 

Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn’t distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy—up to 8 bits, in Google’s proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
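
A back-of-the-envelope calculation shows why that head start matters. The population figure below is a round-number assumption, not a measurement.

```typescript
// Back-of-the-envelope: how much a cohort ID narrows the crowd a
// fingerprinter must tell you apart from. The population figure is an
// illustrative assumption.
const browserPopulation = 200_000_000;

// Without a cohort, a tracker must single you out of the whole population:
const bitsNeededAlone = Math.log2(browserPopulation); // ~27.6 bits

// If cohorts hold "a few thousand" users, as Google has promised, knowing
// your cohort leaves only that many candidates:
const cohortSize = 5_000;
const bitsStillNeeded = Math.log2(cohortSize); // ~12.3 bits
const headStart = bitsNeededAlone - bitsStillNeeded; // ~15 bits handed over

// In the 8-bit proof of concept (256 cohorts), the head start is capped at
// 8 bits, leaving roughly 780,000 browsers per cohort on average.
const pocCohortSize = browserPopulation / 2 ** 8;

console.log({ bitsNeededAlone, bitsStillNeeded, headStart, pocCohortSize });
// The remaining bits are easy to collect from time zone, language, screen
// size, installed fonts, and similar signals, hence the fingerprinting concern.
```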

Google has acknowledged this as a challenge, but has pledged to solve it as part of its broader “Privacy Budget” plan for dealing with fingerprinting over the long term. Solving fingerprinting is an admirable goal, and the Privacy Budget is a promising avenue to pursue. But according to the FAQ, that plan is “an early stage proposal and does not yet have a browser implementation.” Meanwhile, Google is set to begin testing FLoC as early as this month.

Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy—which is what FLoC is. Google should not create new fingerprinting risks until it’s figured out how to deal with existing ones.

Cross-context exposure

The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user’s cohort will necessarily reveal information about their behavior. 

The project’s Github page addresses this up front:

This API democratizes access to some information about an individual’s general browsing history (and thus, general interests) to any site that opts into it. … Sites that know a person’s PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual's interests may eventually become public.

As described above, FLoC cohorts shouldn’t work as identifiers by themselves. However, any company able to identify a user in other ways—say, by offering “log in with Google” services to sites around the Internet—will be able to tie the information it learns from FLoC to the user’s profile.

Two categories of information may be exposed in this way:

  1. Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites. 
  2. General information about demographics or interests. Observers may learn that in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.

This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.

You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there’s no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn’t need to know whether you’ve recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.

Beyond privacy

FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we’ve shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC’s core objective is at odds with other civil liberties.

The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes. 

Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history—or characteristics systematically associated with it—enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.

Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers’ ability to target people in “sensitive interest categories.” However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.

Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics—demographics like gender, ethnicity, age, and income; “big 5” personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.

Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users’ browsers to group themselves again. 

This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users’ race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other “sensitive categories” are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.
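
As a sketch of what such an audit would require, consider the following. Everything here, from the data shape to the threshold to the regrouping step, is an assumption for illustration; Google has not published an audit pipeline, and the point is that any version of it needs sensitive data about users just to run.

```typescript
// Sketch of the kind of audit Google proposes: check whether any cohort
// over-represents a sensitive category. The data shape, the threshold, and
// the "re-cluster" step are all assumptions for illustration.
interface AuditRecord {
  cohortId: number;
  inSensitiveCategory: boolean; // running this audit requires sensitive data about users
}

function findSkewedCohorts(records: AuditRecord[], maxRatio = 2): number[] {
  const baseRate =
    records.filter(r => r.inSensitiveCategory).length / records.length;

  // Tally per-cohort rates of the sensitive attribute.
  const totals = new Map<number, { n: number; sensitive: number }>();
  for (const r of records) {
    const t = totals.get(r.cohortId) ?? { n: 0, sensitive: 0 };
    t.n += 1;
    if (r.inSensitiveCategory) t.sensitive += 1;
    totals.set(r.cohortId, t);
  }

  // Flag cohorts whose rate exceeds the base rate by more than maxRatio.
  const skewed: number[] = [];
  for (const [cohortId, t] of totals) {
    if (t.n > 0 && t.sensitive / t.n > baseRate * maxRatio) {
      skewed.push(cohortId);
    }
  }
  // If any cohort is flagged, the proposal is to re-parameterize the
  // clustering and regroup everyone, then repeat for every other sensitive
  // category. That repetition is the Sisyphean part.
  return skewed;
}
```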

In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won’t be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings “mean”—what kinds of people they contain—through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability—after all, they aren’t directly targeting protected categories, they’re just reaching people based on behavior. And the whole system will be more opaque to users and regulators.

Google, please don’t do this

We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC “the opposite of privacy-preserving technology.” We hoped that the standards process would shed light on FLoC’s fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a “95% effective” replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it’s deploying the technology for a trial run. A small portion of Chrome users—still likely millions of people—will be (or have been) assigned to test the new technology.

Make no mistake, if Google follows through on its plan to implement FLoC in Chrome, it will likely give everyone involved “options.” The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for “transparency and user control,” knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie—the technology that Google helped extend well past its shelf life, making billions of dollars in the process.

It doesn’t have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.

We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.

Note: We reached out to Google to verify certain facts presented in this post, as well as to request more information about the upcoming Origin Trial. We have not received a response at the time of posting.

Bennett Cyphers

The Justice in Policing Act Does Not Do Enough to Rein in Body-Worn Cameras

5 days 18 hours ago

Reformers often tout police use of body-worn cameras (BWCs) as a way to prevent law enforcement misconduct. But, far too often, this technology becomes one more tool in a toolbox already overflowing with surveillance technology that spies on civilians. Worse, because police often control when BWCs are turned on and how the footage is stored, BWCs often fail to do the one thing they were intended to do: record video of how police interact with the public. So EFF opposes BWCs absent strict safeguards.

While it takes some useful steps toward curbing nefarious ways that police use body-worn cameras, the George Floyd Justice in Policing Act, H.R. 1280, does not do enough. It places important limits on how federal law enforcement officials use BWCs. And it is a step forward compared to last year’s version: it bans federal officials from applying face surveillance technology to any BWC footage. However, H.R. 1280 still falls short: it funds BWCs for state and local police, but does not apply the same safeguards that the bill applies to federal officials. We urge amendments to this bill as detailed below. Otherwise, these federally-funded BWCs will augment law enforcement’s already excessive surveillance capabilities. 

As has been our position, BWCs should adhere to the following regulations: 

Mandated activation of body-worn cameras. Officers must be required to activate their cameras at the start of all investigative encounters with civilians, and leave them on until the encounter ends. Otherwise, officers could subvert any accountability benefits of BWCs by simply turning them off when misconduct is imminent, or not turning them on. In narrow circumstances where civilians have heightened privacy interests (like crime victims and during warrantless home searches), officers should give civilians the option to deactivate BWCs.

No political spying with body-worn cameras. Police must not use BWCs to gather information about how people are exercising their First Amendment rights to speak, associate, or practice their religion. Government surveillance chills and deters such protected activity.

Retention of body-worn camera footage. All BWC footage should be held for a few months, to allow injured civilians sufficient time to come forward and seek evidence. Then footage should be promptly destroyed, to reduce the risks of data breach, employee misuse, and long-term surveillance of the public. However, if footage depicts an officer’s use of force or an episode subject to a civilian’s complaint, then the footage must be retained for a lengthier period. 

Officer review of footage. If footage depicts use of force or an episode subject to a civilian complaint, then an officer must not be allowed to review the footage until after they make an initial statement about the event. Given the malleability of human memory, a video can alter or even overwrite a recollection. And some officers might use footage to better “testilie,” or stretch the truth about encounters.

Public access to footage. If footage depicts a particular person, then that person must have access to it. If footage depicts police use of force, then all members of the general public must have access to it. If a person seeks footage that does not depict them or use of force, then whether they may have access must depend on a weighing by a court of (a) the benefits of disclosure to police accountability, and (b) the costs of disclosure to the privacy of a depicted member of the public. If the footage does not depict police misconduct, then disclosure will rarely have a police accountability benefit. In many cases, blurring of civilian faces might diminish privacy concerns. In no case should footage be withheld on the grounds it is a police investigatory record.

Enforcement of these rules. If footage is recorded or retained in violation of these rules, then it must not be admissible in court. If, in violation of these rules, footage is not recorded or retained, then a civil rights plaintiff or criminal defendant must receive an evidentiary presumption that the missing footage would have helped them. And departments must discipline officers who break these rules.

Community control over body-worn cameras. Local police and sheriffs must not acquire or use BWCs, or any other surveillance technology, absent permission from their city council or county board, after ample opportunity for residents to make their voices heard. This is commonly called community control over police surveillance (CCOPS). 

EFF supported a California law (A.B. 1215) that placed a three-year moratorium on use of face surveillance with BWCs. Likewise, EFF in 2019, 2020, and 2021 joined scores of privacy and civil rights groups in opposing any federal use of face surveillance, and also any federal funding of state and local face surveillance. 

So we are pleased with Section 374 of H.R. 1280, which states: “No camera or recording device authorized or required to be used under this part may be equipped with or employ facial recognition technology, and footage from such a camera or recording device may not be subjected to facial recognition technology.” We are also pleased with Section 3051, which says that federal grant funds for state and local programs “may not be used for expenses related to facial recognition technology.” Both of these provisions validate civil society and over-policed communities’ long-standing assertion that government use of face recognition is dangerous and must be banned. However, this bill does not go far enough. EFF firmly supports a full ban of all government use of face recognition technology. At a minimum, H.R. 1280 must be amended to extend the face surveillance ban it mandates for federal BWCs to federally-funded BWCs employed by state and local law enforcement agencies. For body-worn cameras to be a small part of a solution, rather than part of the problem, their operation and footage storage must be heavily regulated, and they must be used solely to record video of how police interact with the public, not serve as Trojan horses for increased surveillance.

Matthew Guariglia

Officials in Baltimore and St. Louis Put the Brakes on Persistent Surveillance Systems Spy Planes

5 days 22 hours ago

Baltimore, MD and St. Louis, MO, have a lot in common. Both cities suffer from declining populations and high crime rates. In recent years, the predominantly Black population in each city has engaged in collective action opposing police violence. And in recent weeks, officials in both cities voted unanimously to spare their residents from further invasions of their privacy and essential liberties by a panoptic aerial surveillance system designed to protect soldiers on the battlefield, not residents’ rights and public safety.

Baltimore’s Unanimous Vote to Terminate  

From April to October of 2020, Baltimore residents were subjected to a panopticon-like system of surveillance facilitated by a partnership between the Baltimore Police Department and a privately-funded Ohio company called Persistent Surveillance Systems (PSS). During that period, for at least 40 hours a week, PSS flew surveillance aircraft over 32 square miles of the city, enabling police to identify specific individuals from the images captured by the planes. Although no planes had flown as part of the collaboration since late October—and the program was scheduled to end later this year—the program had become troubling enough that on February 3, the City’s spending board voted unanimously to terminate Baltimore’s contract with PSS.

St. Louis Rules Committee Says ‘Do Not Pass’

Given the program’s problematic history and unimpressive efficacy, it may come as some surprise that on December 11, 2020, City of St. Louis Alderman Tom Oldenburg introduced legislation that would have forced the mayor and comptroller to enter into a contract with PSS closely replicating Baltimore’s spy plane program.

With lobbyists for the privately-funded Persistent Surveillance Systems program padding campaign coffers, Alderman Oldenburg's proposal was initially well received by the City's Board of Alders. However, as EFF and local advocates—including the ACLU of Missouri and Electronic Frontier Alliance member Privacy Watch STL—worked to educate lawmakers and their constituents about the bill’s unconstitutionality, that support began to waver. While the bill narrowly cleared a preliminary vote in late January, by Feb. 4 the Rules Committee voted unanimously to issue a "Do Not Pass" recommendation.

A supermajority of the Board could vote to override the Committee's guidance when they meet for the last time this session on April 19. However, the bill's sponsor has acknowledged that outcome to be unlikely—while also suggesting he plans to introduce a similar bill next session. If the Board does approve the ordinance when they meet on April 19, it is doubtful that St. Louis Mayor Lyda Krewson would sign the bill after her successor has been chosen in the City's April 6 election.

Next Up: Fourth Circuit Court of Appeals 

While municipal lawmakers are weighing in unanimously against the program, it may be the courts that make the final call. Last November, EFF along with the Brennan Center for Justice, Electronic Privacy Information Center, FreedomWorks, National Association of Criminal Defense Lawyers, and the Rutherford Institute filed a friend-of-the-court brief in a federal civil rights lawsuit challenging Baltimore’s aerial surveillance program. A divided three-judge panel of the U.S. Court of Appeals for the Fourth Circuit initially upheld the program, but the full court has since withdrawn that decision and decided to rehear the case en banc. Oral arguments are scheduled for March 8. While the people of St. Louis and Baltimore are protected for now, we're hopeful that the court will find that the aerial surveillance program violates the Fourth Amendment’s guarantee against warrantless dragnet surveillance, potentially shutting down the program for good.

Nathan Sheard

What the AT&T Breakup Teaches Us About a Big Tech Breakup

6 days 21 hours ago

The multi-pronged attempt by state Attorneys General, the Department of Justice, and the Federal Trade Commission to find Google and Facebook liable for violating antitrust law may result in breaking up these giant companies. But in order for any of this to cause lasting change, we need to look to the not-so-recent past.

In the world of antitrust, the calls to “break up” Big Tech companies translate to the fairly standard remedy of “structural separation,” where companies are barred from selling services and competing with the buyers of those services (for example, rail companies have been forced to stop selling freight services that compete with their own customers). It has been done before as part of the fight against communication monopolies. However, history shows us that the real work is not just breaking up companies, but following through afterward.

In order to make sure that the Internet becomes a space for innovation and competition, there has to be a vision of an ideal ecosystem. When we look back at the United States’ previous move from telecom monopoly into what can best be described as “regulated competition,” we can learn a lot of lessons—good and bad—about what can be done post-breakup.

The AT&T of Yore and the Big Tech of Today

Cast your mind back, back to when AT&T was a giant corporation. No, further back. When AT&T was the world’s largest corporation and the telephone monopoly. In the 1970s, AT&T resembled Big Tech companies in scale, significance, and influence.

AT&T grew by relentlessly gobbling up rival companies and eventually struck a deal with the government to make its monopolization legal in exchange for universal service (known as the Kingsbury Commitment). As a monopolist, AT&T's unilateral decisions dictated the way people communicated. The company exerted extraordinary influence over public debate and used its influence to argue that its monopoly was in the public interest. Its final antitrust battle was a quagmire that spanned two political administrations, and despite this, its political power was so great that it was able to get the Department of Defense to claim its monopoly was vital to national security.  

Today, Big Tech is reenacting the battle of the AT&T of yore. Facebook CEO Mark Zuckerberg’s assertion that his company’s dominance is the only means to compete with China is a repeat of AT&T’s attempt to use national security to bypass competition concerns. Similarly, Facebook's recent change of heart on whether Section 230 of the Communications Decency Act should be gutted is an effort to appease policymakers looking to scrutinize the company's dominance. Not coincidentally, Section 230 is the lifeblood of every would-be competitor to Facebook. In trading Section 230 away for policy concessions, Facebook both escapes a breakup and salts the earth against the growth of any new competitors, becoming the regulated monopoly that remains.

Google is a modern AT&T, too. Google acquired its way to dominance by purchasing a multitude of companies to extend its vertical reach over the years. Mergers and acquisitions were key to AT&T's monopoly strategy. That's why the government then sought to break up the company – and that's why the US government today is proposing breakups for Google. Now, with AT&T, there were clear geographic lines on which the company could be broken into smaller regional companies. It's different for Google and Facebook: those lines will have to be drawn along different parts of the companies’ “stack,” such as advertising and platforms.

When the US Department of Justice broke up AT&T, it traded one national monopoly for a set of regional monopolies. Over time Congress learned that it wasn't enough. Likewise, breakups for Google and Facebook will only be step one.  

Without a Broader Vision, Big Tech Will Be the Humpty Dumpty That Put Himself Back Together Again

Supporters of structural separation for Big Tech need to learn the lessons of the past. Our forebears got it right initially with telecom, but then failed to sustain a consistent vision of competition, eventually allowing dozens of companies to consolidate into a mix of regional monopolies and super-dominant national companies.

The 1996 telecom law Congress passed to follow the AT&T breakup enabled the creation of the Competitive Local Exchange Carrier (CLEC) industry. These were smaller companies that already existed but had been severely hamstrung by the local monopolies; their reach was limited because there was no federal competition law.

The 1996 Act lowered the start-up costs for new phone companies: they wouldn't have to build an entire network from scratch. The Act forced the Baby Bells (the regional parts of the original AT&T monopoly) to share their "essential facilities" with these new competitors at a fair price, opening the market to much smaller players with much less capital.

But the incumbent monopolies still had friends in statehouses and Congress. By 2001, federal and state governments began adopting a new theory of competition in communications: "deregulated competition"—which whittled away the facilities sharing rules and rules banning the broken up parts of AT&T from merging with one another again (as well as cable and wireless companies). If the purpose of this untested, unproven approach was to promote competition, then clearly it was a failure. A majority of Americans today have only one choice for high-speed broadband access that meets 21st century needs. There has been no serious reckoning for "deregulated competition" and it remains the heart of telecom policy despite nearly every prediction of the benefits of "deregulated competition" having been proven wrong. This only happened because policymakers and the public forgot how they received competition in telecom in the first place and allowed the unwinding that remains with us still today. 

Steve Coll, author of The Deal of the Century: The Breakup of AT&T, predicted this problem shortly after AT&T's breakup:

It is quite possible - some would argue it is more than likely - that the final landscape of the Bell System breakup will include a bankrupted MCI and an AT&T returned to its original state as a regulated, albeit smaller and less effective, telephone monopoly. The source of this specter lies not in anyone's crystal ball but in the history of U.S. v. AT&T. Precious little in that history - the birth of MCI, the development of phone industry competition, the filing of the Justice lawsuit, the prolonged inaction of Congress, the aborted compromise deals between Justice and AT&T, the Reagan administration's tortured passivity, the final inter-intra settlement itself - was the product of a single coherent philosophy, or a genuine, reasoned consensus, or a farsighted public policy strategy.

A Post-Breakup Internet Tech Vision: Decentralization, Empowerment of Disruptive Innovation, and Consumer Protection

Anyone thinking about Big Tech breakups needs to learn the lesson of AT&T.  Breakups are just step one. Before we take that step, we need to know what steps we'll take next. We need a plan for post-break-up regulated competition, or we'll squander years and years of antitrust courtroom battles, only to see the fragments of the companies reform into new, unstoppable juggernauts. We need a common narrative about where competition comes from and how we sustain it.

Like phone companies, internet platforms have “network effects”: to compete with them, a new company needs access to the incumbent's network, not the company's "ecosystem" – the cluster of products and services monopolists weave around themselves to lock in users, squeeze suppliers, and fend off competitors. In '96, we forced regional monopolies to share their facilities, and thousands of local ISPs sprang up across the country, almost overnight. Creating a durable competitive threat to tech monopolists means finding similar measures to promote a flourishing, pluralistic, diverse Internet.

We've always said that tech industry competition is a multifaceted project that calls for multiple laws and careful regulation. Changes to antitrust law, intellectual property law, intermediary liability, and consumer privacy legislation all play critical and integral parts in a more competitive future. Strike the wrong balance and you drain away the Internet's capacity for putting power in the hands of people and communities. Get any of the policies wrong and you risk strangling a hundred future Googles and Facebooks in their cradles—companies whose destiny is to grow for a time but to eventually be replaced by new upstarts better suited for the unforeseeable circumstances of the future.

Here are two examples of policies that are every bit as important as breakups for creating and maintaining a competitive digital world:

The Internet once stood for a world where people with good ideas and a little know-how could change the world, attracting millions of users and spawning dozens of competitors. That was the Net's lifecycle of competition. We can get that future back, but only if we commit to a shared and durable vision of competition. It's fine to talk about breaking up Big Tech, but the hard part starts after the companies are split up. Now is the time to start asking what competition should look like, or we'll get dragged back to our current future before we get started down the road to a better one.

Ernesto Falcon

Federal Court Agrees: Prosecutors Can’t Keep Forensic Evidence Secret from Defendants

1 week 2 days ago

When the government tries to convict you of a crime, you have a right to challenge its evidence. This is a fundamental principle of due process, yet prosecutors and technology vendors have routinely argued against disclosing how forensic technology works.

For the first time, a federal court has ruled on the issue, and the decision marks a victory for civil liberties.

EFF teamed up with the ACLU of Pennsylvania to file an amicus brief arguing in favor of defendants’ rights to challenge complex DNA analysis software that implicates them in crimes. The prosecution and the technology vendor Cybergenetics opposed disclosure of the software’s source code on the grounds that the company has a commercial interest in secrecy.

The court correctly determined that this secrecy interest could not outweigh a defendant’s rights and ordered the code disclosed to the defense team. The disclosure will be subject to a “protective order” that bars further disclosure, but in a similar previous case a court eventually allowed public scrutiny of source code of a different DNA analysis program after a defense team found serious flaws.

This is the second decision this year ordering the disclosure of the secret TrueAllele software. This added scrutiny will help ensure that the software does not contribute to unjust incarceration.

Kit Walsh

From Creativity to Exclusivity: The German Government's Bad Deal for Article 17

1 week 2 days ago

The implementation process of Article 17 (formerly Article 13) of the controversial Copyright Directive into national laws is in full swing, and it does not look good for users' rights and freedoms. Several EU states have failed to present balanced copyright implementation proposals, ignoring the concerns of EFF, other civil society organizations, and experts that only strong user safeguards can prevent Article 17 from turning tech companies and online service operators into the copyright police.

A glimpse of hope was presented by the German government in a recent discussion paper. While the draft proposal fails to prevent the use of upload filters to monitor all user uploads and assess them against the information provided by rightsholders, it showed creativity by giving users the option of pre-flagging uploads as "authorized" (online by default) and by setting out exceptions for everyday uses. Remedies against abusive removal requests by self-proclaimed rightsholders were another positive feature of the discussion draft.

Inflexible Rules in Favor of Press Publishers

However, the recently adopted copyright implementation proposal by the German Federal Cabinet has abandoned the focus on user rights in favor of inflexible rules that only benefit press publishers. Instead of opting for broad and fair statutory authorization for non-commercial minor uses, the German government suggests trivial carve-outs for "uses presumably authorized by law," which are not supposed to be blocked automatically by online platforms. However, the criteria for such uses are narrow and out of touch with reality. For example, the limit for minor use of text is 160 characters.

By comparison, the maximum length of a tweet is 280 characters, which is barely enough substance for a proper quote. As those uses are only presumably authorized, they can still be disputed by rightsholders and blocked at a later stage if they infringe copyright. However, this did not prevent the German government from putting a price tag on such communication as service providers will have to pay the author an "appropriate remuneration." There are other problematic elements in the proposal, such as the plan to limit the use of parodies to uses that are "justified by the specific purpose"—so better be careful about being too playful.

The German Parliament Can Improve the Bill

It's now up to the German Parliament to decide whether to be more interested in the concerns of press publishers or in the erosion of user rights and freedoms. EFF will continue to reach out to Members of Parliament to help them make the right decision.

Christoph Schmon

The SAFE Tech Act Wouldn't Make the Internet Safer for Users

1 week 3 days ago

Section 230, a key law protecting free speech online since its passage in 1996, has been the subject of numerous legislative assaults over the past few years. The attacks have come from all sides. One of the latest, the SAFE Tech Act, seeks to address real problems Internet users experience, but its implementation would harm everyone on the Internet. 

The SAFE Tech Act is a shotgun approach to Section 230 reform put forth by Sens. Mark Warner, Mazie Hirono and Amy Klobuchar earlier this month. It would amend Section 230 through the ever-popular method of removing platform immunity from liability arising from various types of user speech. This would lead to more censorship as social media companies seek to minimize their own legal risk. The bill compounds the problems it causes by making it more difficult to use the remaining immunity against claims arising from other kinds of user content. 

Addressing Big Tech’s surveillance-based business models can’t, and shouldn’t, be done through amendments to Section 230—but that doesn’t mean it shouldn’t be done at all. 


The act would not protect users’ rights in a way that is substantially better than current law. And it would, in some cases, harm marginalized users, small companies, and the Internet ecosystem as a whole. Our three biggest concerns with the SAFE Tech Act are: 1) its failure to capture the reality of paid content online, 2) the danger that an affirmative defense requirement creates, and 3) the lack of guardrails around injunctive relief, which would open the door for a host of new suits that simply seek to remove certain speech.

Section 230 Benefits Everyone

Before considering what this bill would change, it’s useful to take a look at the benefits that Section 230 provides for all internet users. The Internet today allows people everywhere to connect and share ideas—whether that’s for free on social media platforms and educational or cultural platforms like Wikipedia and the Internet Archive, or on paid hosting services like Squarespace or Patreon. Section 230’s legal protections benefit Internet users in two ways. 

Section 230 Protects Intermediaries That Host Speech: Section 230 enables services to host the content of other speakers—from writing, to videos, to pictures, to code that others write or upload—without those services generally having to screen or review that content before it is published. Without this partial immunity, all of the intermediaries who help the speech of millions and billions of users reach their audiences would face unworkable content moderation requirements that inevitably lead to large-scale censorship. The immunity has some important exceptions, including for violations of federal criminal law and intellectual property claims. But the legal immunity’s protections extend to services far beyond social media platforms. Thus everyone who sends an email, makes a Kickstarter, posts on Medium, shares code on Github, protects their site from DDOS attacks with Cloudflare, makes friends on Meetup, or posts on Reddit benefits from Section 230’s immunity for all intermediaries.

Section 230 Protects Users Who Create Content: Section 230 directly protects Internet users who themselves act as online intermediaries from being held liable for the content created by others. So when people publish a blog and allow reader comments, for example, Section 230 protects them. This enables Internet users to create their own platforms for others’ speech, such as when an Internet user created the Shitty Media Men list that allowed others to share their own experiences involving harassment and sexual assault. 

The SAFE Tech Act Fails to Capture the Reality of Paid Content Online

In what appears to be an attempt to limit deceptive advertising, the SAFE Tech Act would amend Section 230 to remove the service’s immunity for user-generated content when that content is paid speech. According to the senators, the goal of this change is to stop Section 230 from applying to ads, “ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams.” 

But the language in the bill is much broader than just ads. The bill says Section 230’s platform immunity for user-generated content does not apply if “the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.” This definition likely sweeps in much, much more of the Internet than advertising, and it is unclear how much paid or sponsored content the language would cover. This change would undoubtedly force a massive, and dangerous, overhaul to Internet services at every level.

Although much of the legislative conversation around Section 230 reform focuses on the dominant social media services that are generally free to users, most of the intermediaries people rely on involve some form of payment or monetization: from more obvious content that sits behind a paywall on sites like Patreon, to websites that pay for hosting from providers like GoDaddy, to the comment section of a newspaper only available to subscribers. If all companies that host speech online and whose businesses depend on user payments lose Section 230 protections, the relationship between users and many intermediaries will change significantly, in several unintended ways:

Harm to Data Privacy: Services that previously accepted payments from users may decide to change to a different business model based on collecting and selling users’ personal information. So in seeking to regulate advertising, the SAFE TECH Act may perversely expand the private surveillance business model to other parts of the Internet, just so those services can continue to maintain Section 230’s protections. 

Increased Censorship: Those businesses that continue to accept payments will have to make new decisions about what speech they can risk hosting and how they vet users and screen their content. They would be forced to monitor and filter all content that appears whenever money has exchanged hands—a dangerous and unworkable solution that would see much important speech disappear, and would turn everyone from web hosts to online newspapers into censors. The only other alternative—not hosting user speech—would also not be a step forward.

As we’ve said many times, censorship has been shown to amplify existing imbalances in society. History shows us that when faced with the prospect of having to defend lawsuits, online services (like offline intermediaries before them) will opt to remove and reject user speech rather than try to defend it, even when it is strongly defensible. These decisions, as history has shown us, are applied disproportionately against the speech of marginalized speakers. Immunity, like that provided by Section 230, alleviates that prospect of having to defend such lawsuits. 

Unintended Burdens on a Complex Ecosystem: While minimizing dangerous or deceptive advertising may be a worthy goal, and even if the SAFE Tech Act were narrowed to target ads in particular, it would not only burden sites like Facebook that function as massive online advertising ecosystems; it would also burden the numerous companies that comprise the complex online advertising ecosystem. There are numerous intermediaries between the user seeing an ad on a website and the ad going up. It is unclear which companies would lose Section 230 immunity under the SAFE TECH Act; arguably it would be all of them. The bill doesn’t reflect or account for the complex ways that publishers, advertisers, and scores of middlemen actually exchange money in today’s online ad ecosystem, which happens often in a split second through Real-Time Bidding protocols. It also doesn’t account for more nuanced advertising regimes. For example, how would an Instagram influencer—someone who is paid by a company to share information about a product—be affected by this loss of immunity? No money has exchanged hands with Instagram, and therefore one can imagine influencers and other more covert forms of advertising becoming the norm to protect advertisers and platforms from liability. 

For a change in Section 230 to work as intended and not spiral into a mass of unintended consequences, legislators need to have a greater understanding of the Internet's ecosystem of paid and free content, and the language needs to be more specifically and narrowly tailored.

The Danger That an Affirmative Defense Requirement Creates 

The SAFE Tech Act also would alter the legal procedure around when Section 230’s immunity for user-generated content would apply in a way that would have massive practical consequences for users’ speech. Many people upset about user-generated content online bring cases against platforms, hosts, and other online intermediaries. Congressman Devin Nunes’ repeated lawsuits against Twitter for its users’ speech are a prime example of this phenomenon. 

Under current law, Section 230 operates as a procedural fast-lane for online services—and users who publish another user’s content—to get rid of frivolous lawsuits. Platforms and users subjected to these lawsuits can move to dismiss the cases before having to even respond to the legal complaint or going through the often expensive fact-gathering portion of a case, known as discovery. Right now, if it’s clear from the face of a legal complaint that the underlying allegations are based on a third party’s content, the statute’s immunity requires that the case against the platform or user who hosted the complained-of content be dismissed. Of course, this has not stopped plaintiffs from bringing (often unmeritorious) lawsuits in the first place. But in those cases, Section 230 minimizes the work the court must go through to grant a motion to dismiss the case, and minimizes costs for the defendant. This protects not only platforms but users; it is the desire to avoid litigation costs that leads intermediaries to default to censoring user speech.

The SAFE Tech Act would subject both provider and user defendants to much more protracted and expensive litigation before a case could be dismissed. By downgrading Section 230’s immunity to an “affirmative defense … that an interactive computer service provider has a burden of proving by a preponderance of the evidence,” defendants could no longer use Section 230 to dismiss cases at the beginning of a suit and would be required to prove—with evidence—that Section 230 applies. Right now, Section 230 saves companies and users significant legal costs when they are subjected to frivolous lawsuits. With this change, even if the defendant ultimately prevails against a plaintiff’s claims, they will have to defend themselves in court for longer, driving up their costs.

The increased legal costs of even meritless lawsuits will have serious consequences for users’ speech. An online service that cannot quickly get out of frivolous litigation based on user-generated content is likely to take steps to prevent such content from becoming a target of litigation in the first place, including screening users’ speech or prohibiting certain types of speech entirely. And in the event that someone upset by a user’s speech sends a legal threat to an intermediary, the service is likely to be much more willing to remove the speech—even when it knows the speech cannot be subject to legal liability—just to avoid the new, larger expense and time of defending against a lawsuit.

As a result, the SAFE Tech Act would open the door to a host of new suits that by design are not filed to vindicate a legal wrong but simply to remove certain speech from the Internet—also called SLAPP lawsuits. Such suits would remove a much greater volume of speech that does not, in fact, violate the law. Large services may find ways to absorb these new costs. But for small intermediaries and growing platforms that may be competing with those large companies, a single costly lawsuit, even one the small company eventually wins, may be the difference between success and failure. That is to say nothing of the many small businesses that use social media to market their company or service and to respond to (and moderate) comments on their pages or sites, and that would likely be in danger of losing immunity from liability under this change. 

No Guardrails Around Injunctive Relief Would Open the Door to Dangerous Takedowns

The SAFE Tech Act also modifies Section 230’s immunity in another significant way, by permitting aggrieved individuals to seek non-monetary relief from platforms hosting content that has harmed them. Under the bill, Section 230 would not apply when a plaintiff seeks injunctive relief to require an online service to remove or restrict user-generated content that is “likely to cause irreparable harm.” 

The SAFE Tech Act’s injunctive relief carveout fails to account for how the provision will be misused to suppress lawful speech.

This extremely broad change may be designed to address a legitimate concern about Section 230: some people who are harmed online simply want the speech taken down rather than monetary compensation. But while it would give certain Internet users an effective remedy that they currently lack under Section 230, the SAFE Tech Act’s injunctive relief carveout fails to account for how the provision will be misused to suppress lawful speech.

The SAFE Tech Act’s language appears to permit enforcement of all types of injunctive relief at any stage of a case. Litigants often seek emergency and temporary injunctive relief at an extremely early stage, and judges frequently grant it without giving the speaker or platform an opportunity to respond. Courts already issue these kinds of takedown orders against online platforms, and they are prior restraints in violation of the First Amendment. If Section 230 does not bar these types of preliminary takedown orders, plaintiffs are likely to misuse the legal system to force lawful content offline without any final adjudication of whether the user-generated content is actually illegal.

The injunctive relief carveout could also be abused in another context: default judgments, which allow speech to be removed without any judicial determination that the content is illegal. A default judgment is entered when the defendant does not fight the case, allowing the plaintiff to win without any examination of the underlying merits. In many cases, defendants avoid litigation simply because they don’t have the time or money for it. 

Because of their one-sided nature, default judgments are ripe for fraud and abuse. Others have documented the growing phenomenon of fraudulent default judgments, typically involving defamation claims, in which a meritless lawsuit is crafted for the specific purpose of obtaining a default judgment and avoiding any consideration of the merits. If the SAFE Tech Act were to become law, fraudulent lawsuits like these would be incentivized and become more common, because Section 230 would no longer stand in the way of using them to compel intermediaries to remove lawful speech.

A recent Section 230 case, Hassell v. Bird, illustrates how a broad injunctive relief carveout that reached default judgments would incentivize censorship of protected user speech. In Hassell, a lawyer sued a Yelp user (Bird) who gave her law office a bad review, claiming defamation. The court never ruled on whether the speech was defamatory, but because the reviewer did not defend the lawsuit, the trial judge entered a default judgment against the reviewer, ordering the removal of the post. Section 230 prevented a court from ordering Yelp to remove the post. 

Despite the potential for litigants to abuse the SAFE Tech Act’s injunctive relief carveout, the bill contains no guardrails for online intermediaries hosting legitimate speech targeted for removal. As it stands, the injunctive relief exception to Section 230 poses a real danger to legitimate speech. 

In Conclusion: For Safer Tech, Look Beyond Section 230

This only scratches the surface of the SAFE Tech Act. But the bill’s shotgun approach to amending Section 230, and the broadness of its language, make it impossible to support as it stands. 

If legislators take issue with deceptive advertisers, they should use existing laws to protect users from them. Instead of making sweeping changes to Section 230, they should update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion, creating many of the problems we see in the first place. If they want to make Big Tech more responsive to the concerns of consumers, they should pass a strong consumer data privacy law with a robust private right of action.

If they disagree with the way that large companies like Facebook benefit from Section 230, they should carefully consider that changes to Section 230 will mostly burden smaller platforms and entrench the large companies that can absorb or adapt to the new legal landscape. (Large companies continue to support amendments to Section 230, even as they push back against substantive changes that actually seek to protect users but would harm their bottom line.) Addressing Big Tech’s surveillance-based business models can’t, and shouldn’t, be done through amendments to Section 230—but that doesn’t mean it shouldn’t be done at all. 

It’s absolutely a problem that just a few tech companies wield such immense control over what speakers and messages are allowed online. And it’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunities to appeal bad moderation decisions. But this bill would not create a fairer system.

Aaron Mackey

Virginia's Weak Privacy Bill Is Just What Big Tech Wants

1 week 3 days ago

Virginia’s legislature has passed a bill meant to protect consumer privacy—but the bill, called the Virginia Consumer Data Protection Act, really protects the interests of business far more than the interests of everyday consumers.

Take Action

Virginia: Speak Up for Real Privacy

The bill, which both Microsoft and Amazon supported, is now headed to the desk of Governor Ralph Northam. This week, EFF joined with the Virginia Citizens Consumer Council, Consumer Federation of America, Privacy Rights Clearinghouse, and U.S. PIRG to ask for a veto on this bill, or for the governor to add a reenactment clause—a move that would send the bill back to the legislature to try again.

If you’re in Virginia and care about true privacy protections, let the governor know that this bill doesn’t give consumers the protections they need. In fact, it stacks the deck against them: it offers an “opt-out” framework that doesn’t protect privacy by default, it allows companies to force consumers who exercise their privacy rights to pay higher prices or accept a lower quality of service, and it offers no meaningful enforcement—making it very unlikely that consumers will be able to hold companies to account if any of the few rights the bill grants them are violated.

As passed by the legislature, the bill is set to go into effect in 2023 and will establish a working group to make improvements between now and then. That offers some chance for improvements—but it likely won’t be enough to get real consumer protections. As we noted in a joint press release, “These groups appreciate that Governor Northam’s office has engaged with the concerns of consumer groups and committed to a robust stakeholder process to improve this bill. Yet the fundamental problems with the CDPA are too big to be fixed after the fact.”

Consumer privacy rights must be the foundation of any real privacy bill. The CDPA was written without meaningful input from consumer advocates; in fact, as Protocol reported, it was handed to the bill’s sponsor by an Amazon lobbyist. Some have suggested the Virginia bill could be a model for other states or for federal legislation. That’s bad for Virginia and bad for all of us.

Virginians, it’s time to take a stand. Tell Governor Northam that this bill is not good enough, and urge him to veto it or send it back for another try.  

TAKE ACTION

VIRGINIA: SPEAK UP FOR REAL PRIVACY

Hayley Tsukayama

Interoperability Gains Support at House Hearing on Big Tech Competition

1 week 3 days ago

With a new year and a new Congress, the House of Representatives’ subcommittee covering antitrust has turned its attention to “reviving competition.” On Thursday, the first in a series of hearings was held, focusing on how to help small businesses challenge Big Tech. One very good idea kept coming up, backed by both parties. And it is one EFF also considers essential: interoperability.

This was the first hearing since the House Judiciary Committee issued the antitrust report from its investigation into the business practices of Big Tech companies. This week’s hearing focused exclusively on how to re-enable small businesses to disrupt the dominance of Big Tech. A critical feature of the Internet, what EFF calls the life cycle of competition, has vanished: small new entrants no longer seek (nor could they, even if they tried) to displace well-established giants, but instead seek to be acquired by them.

Strong Bipartisan Support for Interoperability

Across the committee, Members of Congress appeared to agree that some means of requiring Big Tech to grant access to competitors through interoperability will be an essential piece of the competition puzzle. The need is straightforward: the larger these networks become, the more their value rises, making it harder for a new business to enter into direct competition. One expert witness, Public Knowledge’s Competition Policy Director Charlotte Slaiman, noted that these “network effects” mean that a company with double the network size of a competitor isn’t merely twice as attractive to users; it is exponentially more attractive.
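One rough way to picture why size compounds like this is a Metcalfe's-law-style model, in which a network's value is proportional to the number of possible connections among its users. The sketch below only illustrates that modeling assumption; it is not drawn from the hearing testimony, and the user counts are hypothetical.

def network_value(users):
    """Possible pairwise connections among `users` members, a Metcalfe-style proxy for value."""
    return users * (users - 1) // 2

# Hypothetical challenger vs. incumbent, for illustration only.
small, big = 1_000_000, 2_000_000

print(network_value(big) / network_value(small))
# ~4.0: twice the users means roughly four times the possible connections,
# which is why a merely comparable product struggles to pull users away.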

But even where large competitors with sizeable networks exist, Big Tech companies are using their dominance in other markets to push out existing competitors. One of the most powerful testimonies in favor of interoperability came from the CEO of Mapbox, Eric Gunderson, who detailed how Google is leveraging its dominance in search to exert dominance in Google Maps. Specifically, through a contract term ostensibly aimed at trademark “brand confusion,” Google requires developers who wish to use Google Search to integrate their products only with Google Maps. Mr. Gunderson made clear that this tying of products that do not need to be tied together at all is not only foreclosing market opportunities for Mapbox; it is also forcing Mapbox’s existing clients to abandon anything that doesn’t use Google Maps outright.

The solution to this type of corporate incumbent anticompetitive behavior is not revolutionary and has deep roots in tech history. As Ranking Member Ken Buck (R-CO) stated, “interoperability is a time-honored practice in the tech industry that allows competing technologies to speak to one another so that consumers can make a choice without being locked into any one technology.” We at EFF have long agreed that interoperability will be essential to reopening the Internet market to vibrant competition and recently published a white paper laying out in detail how we can get to a more competitive future. Seeing growing consensus from Congress is encouraging, but doing it right will require careful calibration in policy.

Ernesto Falcon

EFF Joins Dozens of Organizations Urging More Government Transparency

1 week 3 days ago

EFF has joined 42 other organizations, including the ACLU, the Knight Institute, and the National Security Archive, in calling for the new Biden administration to fulfill its promise to “bring transparency and truth back to government.” 

Specifically, these organizations are asking the administration and the federal government at large to update policy and implementation regarding the collection, retention, and dissemination of public records as dictated in the Freedom of Information Act (FOIA), the Federal Records Act (FRA), and the Presidential Records Act (PRA).

Our call for increased transparency with the administration comes in the wake of many years of extreme secrecy and increasingly unreliable enforcement of record retention and freedom of information laws. 

The letter requests that the Biden administration take the following actions:

  • Emphasize to All Federal Employees the Obligation to Give Full Effect to Federal Transparency Laws.
  • Direct Agencies to Adopt New FOIA Guidelines That Prioritize Transparency and the Public Interest.
  • Direct DOJ to Fully Leverage its Central Role in Agencies’ FOIA Implementation. 
  • Issue New FOIA Guidance by the Office of Management and Budget (OMB) and Update the National FOIA Portal.
  • Assess, Preserve, and Disclose the Key Records of the Previous Administration. 
  • Champion Funding Increases for the Public Records Laws.
  • Endorse Legislative Improvements for the Public Records Laws.
  • Embrace Major Reforms of Classification and Declassification. 
  • Issue an Executive Order Reforming the Prepublication Review System. 

You can read the full letter here: 

Matthew Guariglia

Coded Resistance: Freedom Fighting and Communication

1 week 4 days ago

It’s nearing the end of Black History Month, and that history is inherently tied to strife, resistance, and organizing in the face of government surveillance and oppression. Even though programs like COINTELPRO are more well-known now, the other side of these stories is the way the Black community has fought back through intricate networks and communication aimed at avoiding surveillance.

The Borderland Network

The Trans-Atlantic Slave Trade was a dark, cruel time in the history of much of the Americas, and the horrors of slavery still cast their shadow through systemic racism today. One of the biggest obstacles enslaved Africans faced when trying to organize and fight back was that they were closely watched, on top of being separated, abused, tortured, and brought onto a foreign land to work until their death for free. They often spoke different languages from one another and came from different cultures and beliefs. Organizing under these conditions seemed impossible. Yet even under these conditions, including overbearing surveillance, they developed ways to fight back. Much of this is attributed to the brilliance of these Africans, who used everything they had to develop communications with each other under chattel slavery. The continued fight today reflects much of the history established from dealing with censorship and authoritarian surveillance.

“The white folks down south don’t seem to sleep much, nights. They are watching for runaways, and to see if any other slaves come among theirs, or theirs go off among others.” - Former Runaway, Slavery’s Exiles - Sylviane A. Diouf

As Sylviane Diouf chronicles in the book Slavery’s Exiles, slavery was catastrophic for many Africans, but it was also, thankfully, never a peaceful time for white owners and overseers. Those captured from Africa and brought to the Americas seldom gave their captors a night of rest. Through rebellion, resistance, and individual sabotage of everyday life during this horrible period, freedom remained an objective. And with that objective came a deep history of secret communications and cunning intelligence.

Runaways often returned to plantations at night for years, unnoticed and undetected, mostly to stay connected to family or relay information. One married couple, as Diouf tells it, had a simple yet effective signaling system: the wife placed a garment in a particular spot that was visible from her husband’s covert. The husband, Ben, and his wife (whose name is unknown) had other systems in place if it was too dark to see, such as shining a bright light through the cracks in their cabin for an instant, then repeating it at intervals of two or three minutes, three or four times.

These close-proximity runaways were deemed “Borderland Maroons.” They created tight networks of communication from plantation to plantation. Information, like the amount of a reward for capture and the punishment awaiting runaways, traveled quickly through the grapevine of the Borderland Maroons. Based on this intelligence, many would decide either to travel away completely or to stay around longer to gather others. Former Georgia delegates to the Continental Congress recounted:

“The negroes have a wonderful art of communicating intelligence among themselves, it will run several hundred miles in a week or fortnight”

These networks often kept runaways out of captivity for years and gave them the ability to maintain a network among the enslaved. Coachmen, draymen, boatmen, and others who were allowed to move around off plantations were the backbone of this chain of intelligence. The shadow network of the Borderlands was the entry point of organizing for potential runaways, so even if someone was captured, they could tap into this network again later. No one would be getting rest or sleep. As Diouf recounts, keeping up a high level of surveillance took a lot of resources from the slaveholders, and that fact was well-exploited by the enslaved.

Moses

Perhaps the most famous artisan of secret communications during this period is the venerable Harriet Tubman. Her character and will are undisputed, and her impeccable timing and remarkable intuition strengthened the Underground Railroad.

Dr. Bryan Walls notes that much of her written and verbal communication was through plain language that acted as metaphor:

  • “tracks” (routes fixed by abolitionist sympathizers)
  • “stations” or “depots” (hiding places)
  • “conductors” (guides on the Underground Railroad)
  • “agents” (sympathizers who helped the slaves connect to the Railroad)
  • “station masters” (those who hid slaves in their homes)
  • “passengers,” “cargo,” “fleece,” or “freight” (escaped slaves)
  • “tickets” (indicated that slaves were traveling on the Railroad)
  • “stockholders” (financial supporters who donated to the Railroad)
  • “the drinking gourd” (the Big Dipper constellation—a star in this constellation pointed to the North Star, located on the end of the Little Dipper’s handle)

The most famous example of verbal communication on plantations was the usage of song. The tradition of verbal history and storytelling remained strong among the enslaved, and acted as a way to “hide in plain sight”. Tubman said she changed the tempo of the songs to indicate whether it was safe to come out or not.

Harriet Tubman’s famous claim is that “she never lost a passenger.” This rang true not only as she freed others, but also when she acted as a spy for the Union during the Civil War. As the first and only woman to organize and lead a military operation during the Civil War, she solidified her reputation as an expert in espionage. Her information was so detailed and accurate that it often saved Black troops in the Union from harm.

Many of these tactics won’t be found written down; they were passed on verbally. It was illegal or otherwise prohibited for Black people to read and write, so committing more traditional ciphertext to paper would have been a lethal risk.

Language as Resistance

Even though language was a barrier in the beginning and written communication was out of the question, over time English was forced onto enslaved Africans, and many found their way to each other by creating an entirely new language of their own—Creole. There are many different kinds of Creole across the African Diaspora, which served not only as a way to communicate and build a linguistic “home,” but also as a way to pass information to each other under the eyes of overseers.

"Anglican clergy were still reporting that Africans spoke little or no English but stood around in groups talking among themselves in “strange languages". ([Lorena] Walsh 1997:96–97)  -  Notes on the Origins and Evolution of African American Language

Coded Resistance in the African Diaspora

Of course, resistance against slavery didn’t just occur in the U.S., but also in Central and South America. Under domineering surveillance, many tactics had to be devised quickly and planned under the eye of white supremacy. Quilombos, or what can be viewed as the “Maroons” of Brazil, developed a way to fight against the Portuguese rule of that time:

“Prohibited from celebrating their cultural customs and strictly forbidden from practicing any martial arts, capoeira is thought to have emerged as a way to bypass these two imposing laws.” - Disguised in Dance: The Secret History of Capoeira

The rebellions in Jamaica, Haiti, and Mexico had extensive planning. They were not, as they are sometimes portrayed, merely the product of spontaneous and rightful rage against their oppressors. Some rebellions, such as Tacky’s War in Jamaica, were documented to be in the works for over a year before the first strike.

Modern Communication, Subversion, and Circumvention

Radio

As technology progressed, the oppressed adapted. During the height of the Civil Rights Movement, radio became an integral part of informing supporters of the movement. While churches were centers of gathering outside of worship, the radio was present even in those churches to give signals and other vital information. As Brian Ward notes in Radio and the Struggle for Civil Rights in the South, this information was conveyed in covert ways as well, such as reporting traffic jams to indicate police roadblocks.

Radio made information accessible to those who could not afford newspapers or who were denied access to literacy education under Jim Crow. Black DJs relayed information about protests, pushed back on misinformation, and warned of police checkpoints. Keeping the community informed and as safe as possible became these DJs’ mission outside of music and propelled them into civic engagement, from protests to walking new Black voters through the voting procedure and system. Radio became a central place to enter a different world beyond Jim Crow.

WATS Phone Lines

Wide Area Telephone Service (WATS) lines also became a vital tool for the Civil Rights Movement to disperse information during moments that often meant life or death. To circumvent the monopolistic Bell System (“Ma Bell”), which employed only white operators and colluded with law enforcement, vital civil rights organizations used WATS lines: dedicated, paid lines, such as 800 numbers, that patched callers directly through to organizations like the Student Nonviolent Coordinating Committee (SNCC), the Congress of Racial Equality (CORE), the Council of Federated Organizations (COFO), and the Southern Christian Leadership Conference (SCLC). These organizations’ bases had code names to use when relaying information to another base, whether via WATS or radio.

CORE Radio Rules, Dick Tinsley. CORE

SNCC WATS Line Instructions & Policies, James Forman. SNCC. June 24-26, 1964

Looking at Today: Reverse Surveillance

While Black and other marginalized communities still struggle to communicate under surveillance, we do have digital tools to help. With encryption widely available, we can now use protected channels to share sensitive information with each other. Of course, not everyone today is free to roam or use these services equally, and encryption itself is under constant risk of being undermined in different parts of the world. Technology can feel nefarious, and “Big Tech” seems to have a constant eye on millions.

In addition, just as with the DJs of the past, current activist groups like Black Lives Matter have used this hypervisibility under Big Tech to push police brutality into the mainstream conversation and into view in real life. The world has seen police brutality up close because of on-site video and live recordings from phones and police scanners. Databases like EFF’s Atlas of Surveillance increasingly map police technology in your city. And all of us, whether activists or not, can use tools to scan for the probing of communications during protests.

Atlas of Surveillance Map of Police Technology https://atlasofsurveillance.org/atlas, 2021-2-24

The Black community has been fighting what is essentially the technological militarization of the police force since the 1990s. While the struggle continues, we have seen recent wins: police use of facial recognition technology is now being limited or banned in many areas of the U.S. With support from groups around the country, we can help close this especially dangerous window of surveillance. 

Being able to communicate with each other and organize is embedded in the roots of resistance around the world, but it has a long and important history in the Black community in the United States. Whether online or off, we are keeping a public eye on those who are sworn to serve and protect us, with the hope that one day we can move freely without the chains of surveillance and white supremacy. Until then, we’ll continue to see, and to celebrate, the spirit of resistance and the creativity of efforts to build and keep strong lines of communication despite surveillance and repression.

Happy Black History Month.

Alexis Hancock

Student Surveillance Vendor Proctorio Files SLAPP Lawsuit to Silence A Critic

1 week 5 days ago

During the pandemic, a dangerous business has prospered: invading students’ privacy with proctoring software and apps. In the last year, we’ve seen universities compel students to download apps that collect their face images, driver’s license data, and network information. Students who want to move forward with their education are sometimes forced to accept being recorded in their own homes and having the footage reviewed for “suspicious” behavior.

Given these invasions, it’s no surprise that students and educators are fighting back against these apps. Last fall, Ian Linkletter, a remote learning specialist at the University of British Columbia, became part of a chorus of critics concerned with this industry.

Now, he’s been sued for speaking out. The outrageous lawsuit—which relies on a bizarre legal theory that linking to publicly viewable videos is copyright infringement—will become an important test of a 2019 British Columbia law passed to defend free speech, the Protection of Public Participation Act, or PPPA.

Sued for Linking

This isn’t the first time U.S.-based Proctorio has taken a particularly aggressive tack in responding to public criticism. In July, Proctorio CEO Mike Olsen publicly posted on Reddit the chat logs of a student who had complained about the software’s support, a move he later apologized for.

Shortly after that, Linkletter took a deep dive into Proctorio, the software that many students at his university were being forced to adopt. He became concerned about what Proctorio was—and wasn’t—telling students and faculty about how its software works.

In Linkletter’s view, customers and users were not getting the whole story. The software performed all kinds of invasive tracking, like watching for “abnormal” eye movements, head movements, and other behaviors branded suspicious by the company. The invasive tracking and filming were of great concern to Linkletter, who was worried about students being penalized academically on the basis of Proctorio’s analysis.

“I can list a half dozen conditions that would cause your eyes to move differently than other people,” Linkletter said in an interview with EFF. “It’s a really toxic technology if you don’t know how it works.”

To make his point clear, Linkletter published some of his criticism on Twitter, where he linked to Proctorio’s own published YouTube videos explaining how the software works. In those videos, Proctorio describes its tracking functions, with titles like “Behaviour Flags,” “Abnormal Head Movement,” and “Record Room.”

Instead of replying to Linkletter’s critique, Proctorio sued him. Even though Linkletter didn’t copy any Proctorio materials, the company says Linkletter violated Canada’s Copyright Act just by linking to its videos. The company also said those materials were confidential, and alleged that Linkletter’s tweets violated the confidentiality agreement between UBC and Proctorio, since Linkletter is a university employee. 

Test of New Law

Proctorio’s legal attack on Ian Linkletter is meritless. It’s a classic SLAPP, an acronym that stands for Strategic Lawsuit Against Public Participation. Fortunately, British Columbia’s PPPA is an “anti-SLAPP” law, a kind of statute that is being widely adopted across U.S. states and also exists in two Canadian provinces. In Canada, anti-SLAPP laws typically allow a defendant to bring an early challenge to the lawsuit against them on the basis that their speech is on a topic of “public interest.” If the court accepts that characterization, it must dismiss the action—unless the plaintiff can prove that the case has substantial merit, that the defendant has no valid defense, and that the public interest in allowing the suit to continue outweighs the public interest in protecting the expression. That’s a very high bar for plaintiffs, and it changes the dynamics of a typical lawsuit dramatically.

Without anti-SLAPP laws, well-funded companies like Proctorio are often able to litigate their critics into silence—even in situations where the critics would have prevailed on the legal merits.

“Cases like this are exactly why anti-SLAPP laws were invented,” said Ren Bucholz, a litigator in Toronto. 

Linkletter should prevail here. It isn’t copyright infringement to link to a published video on the open web, and the fact that Proctorio made the video “unlisted” doesn’t change that. Even if Linkletter had copied parts or all of the videos—which he did not—he would have broad fair dealing rights (similar to U.S. "fair use" rights) to criticize the software that has put many UBC students under surveillance in their own homes.

Linkletter had to create a GoFundMe page to pay for much of his legal defense. But Proctorio’s bad behavior has inspired a broad community of people to fight for better student privacy rights, and hundreds of people donated to Linkletter’s defense fund, which raised more than $50,000. And the PPPA gives him a greater chance of getting his fees back. 

We hope the PPPA is proven effective in this, one of its first serious tests, and that lawmakers in both the U.S. and Canada adopt laws that prevent such abuses of the litigation system. Meanwhile, Proctorio should cease its efforts to muzzle critics from Vancouver to Ohio.

Legal documents
Joe Mullin

How Do Copyright Rules Affect Internet Creators? And What Can They Do About It?

2 weeks 2 days ago

This event has ended. Click here to watch a recording of the event.


If you make and share things online, professionally or for fun, you’ve been affected by copyright law. You may use a service that depends on the Digital Millennium Copyright Act (DMCA) in order to survive. You may have gotten a DMCA notice if you used part of a movie, TV show, or song in your work. You have almost certainly run up against the weird and draconian world of copyright filters like YouTube’s Content ID. EFF wants to help.

The end of last year was a flurry of copyright news, from the mess with Twitch to the “#StopDMCA” campaign that took off as new copyright proposals became law. The new year has proven that this issue is not going away, as a story emerged about cops using music in what looked like an attempt to trigger copyright filters to take videos of them offline. And throughout the pandemic, people stuck at home have tried to move their creativity online, only to find filters standing in their way. Enough is enough.

Next Friday, February 26th, at 10 AM Pacific, EFF will be hosting a town hall for Internet creators. There have been a lot of actual and proposed changes to copyright law that you should know about and be able to ask questions about.

We will go over the copyright laws that got snuck into the omnibus spending package at the end of last year and what they mean for you. We will also use what we learned in writing our whitepaper on Content ID to help creators understand how it works and what to do with it. Finally, we will talk about the latest copyright proposal, the Digital Copyright Act, and how dangerous it is for online creativity. Most importantly, we will give you a way to stay informed and fight back.

Half of the 90-minute town hall will be devoted to answering your questions and hearing your concerns. Please join us for a conversation about the state of copyright in 2021 and what you need to know about it.

RSVP

Katharine Trendacosta

Cops Using Music to Try to Stop Being Filmed Is Just the Tip of the Iceberg

2 weeks 2 days ago

Someone tries to livestream their encounter with the police, only to find that the police start playing music. In the case of a February 5 meeting between an activist and the Beverly Hills Police Department, the song of choice was Sublime’s “Santeria.” The police may not got no crystal ball, but they do seem to have an unusually strong knowledge of copyright filters.

The timing of music being played when a cop saw he was being filmed was not lost on people. It seemed likely that the goal was to trigger Instagram’s over-zealous copyright filter, which would shut down the stream based on the background music and not the actual content. It’s not an unfamiliar tactic, and it’s unfortunately one based on the reality of how copyright filters work.

Copyright filters are generally more sensitive to audio content than to audiovisual content. That sensitivity causes real problems for people performing, discussing, or reviewing music online. It’s a problem of mechanics: it is easier for filters to find a match on a piece of audio alone than on a full audiovisual clip. And then there is the likelihood that a filter is merely checking to see if a few seconds of a video file seem to contain a few seconds of an audio file.
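As a rough illustration of why that kind of audio check is so easy to trigger, the toy sketch below slides a short reference clip across a longer recording and looks for a strong correlation peak. This is not how Content ID or Instagram's filter actually works (those systems rely on proprietary fingerprinting); every signal, rate, and threshold here is made up for the example.

import numpy as np

RATE = 8000  # toy sample rate (samples per second), chosen arbitrarily

def tone(freqs, seconds):
    """Synthesize a simple multi-tone stand-in for a 'song' snippet."""
    t = np.arange(int(seconds * RATE)) / RATE
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

def contains_clip(stream, clip, threshold=0.8):
    """Slide `clip` across `stream`; return (matched, best correlation).

    A normalized correlation near 1.0 at some offset means the clip is
    present there, even when buried under other audio."""
    clip = (clip - clip.mean()) / (clip.std() + 1e-9)
    best = 0.0
    step = RATE // 4  # check every quarter second
    for start in range(0, len(stream) - len(clip), step):
        window = stream[start:start + len(clip)]
        w = (window - window.mean()) / (window.std() + 1e-9)
        best = max(best, float(np.dot(w, clip)) / len(clip))
    return best >= threshold, best

# A 3-second "copyrighted song" reference held by the hypothetical filter.
song = tone([440.0, 554.4, 659.3], 3)

# A 30-second "livestream": mostly speech-like noise, with the song playing
# faintly in the background starting at second 12.
stream = 0.5 * np.random.randn(30 * RATE)
stream[12 * RATE:12 * RATE + len(song)] += 0.7 * song

print(contains_clip(stream, song))                            # typically (True, ~0.85)
print(contains_clip(0.5 * np.random.randn(30 * RATE), song))  # typically (False, near 0)

A few seconds of background music is enough to push the correlation over the threshold, while nothing about the rest of the stream matters at all, which is exactly the property that makes this tactic attractive.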

That’s part of why playing music is such an effective way of getting a video stream you don’t want seen shut down. (The other part is that playing music is easier than walking around with a screen playing a Disney film in its entirety, much fun as that would be.)

The other side of the coin is how difficult filters make it for musicians to perform music that no one owns. For example, classical musicians filming themselves playing public domain music—compositions that they have every right to play, as they are not copyrighted—attract many matches. This is because the major rightsholders or tech companies have put many examples of copyrighted performances of these songs into the system. It does not seem to matter whether the video shows a different performer playing the song—the match is made on audio alone. This drives lawful use of material offline.

Another problem is that people may have licensed the right to use a piece of music or are using a piece of free music that another work also used. And if that other work is in the filter’s database, it’ll make a match between the two. This results in someone who has all the rights to a piece of music being blocked or losing income. It’s a big enough problem that, in the process of writing our whitepaper on YouTube’s copyright filter, Content ID, we were told that people who had experienced this problem had asked for it to be included specifically.

Filters are so sensitive to music that it is very difficult to make a living discussing music online. The difficulty of getting music clips past Content ID explains the dearth of music commentators on YouTube. It is common knowledge among YouTube creators, with one saying “this is why you don’t make content about music.”

Criticism, commentary, and education of music are all areas that are legally protected by fair use. Using parts of a thing you are discussing to show what you mean is part of effective communication. And while the law does not make fair use of music more difficult to prove than any other kind of work, filters do.

YouTube’s filter does something even more insidious than simply taking down videos, though. When it detects a match, it allows the label claiming ownership to take part or all of the money that the original creator would have made. So a video criticizing a piece of music ends up enriching the party being critiqued. As one music critic explained:

Every single one of my videos will get flagged for something and I choose not to do anything about it, because all they’re taking is the ad money. And I am okay with that, I’d rather make my videos the way they are and lose the ad money rather than try to edit around the Content ID because I have no idea how to edit around the Content ID. Even if I did know, they’d change it tomorrow. So I just made a decision not to worry about it.

This setup is also how a ten-hour white noise video ended up with five copyright claims against it. This taking-from-the-poor-and-giving-to-the-rich is a blatantly absurd result, but it’s the status quo on much of YouTube.

A group that is particularly tech-savvy, like the police, could easily figure out which songs get videos removed outright, rather than merely having the money taken. Internet creators talk on social media about the issues they run into and from whom. Some rightsholders are infamously controlling and litigious.

Copyright should not be a fast-track to getting speech removed that you do not like. The law is meant to encourage creativity by giving artists a limited period of exclusive rights to their creations. It is not a way to make money off of criticism or a loophole to be exploited by authorities.

Katharine Trendacosta

Racial and Immigrant Justice Groups Sue Government for Records of COVID-19 Data Surveillance

2 weeks 2 days ago
Just Futures Law, MediaJustice, Mijente, Immigrant Defense Project and Electronic Frontier Foundation say public must know details of COVID-19 related data collection and sharing

San Francisco - The Electronic Frontier Foundation (EFF) is representing four racial and immigrant justice groups—Just Futures Law, MediaJustice, Mijente Support Committee, and the Immigrant Defense Project—suing the U.S. Departments of Homeland Security and Health and Human Services under the Freedom of Information Act (FOIA) for withholding critical records about the collection and sharing of data during the COVID-19 pandemic.

The four groups all filed FOIA requests for information about COVID-related surveillance and data analysis last year. In particular, the groups are worried about HHS Protect, a vast secretive data platform designed by controversial data software company Palantir. Palantir has a long history of building surveillance systems for the Department of Homeland Security that facilitate criminal prosecutions, family separation, and raids that lead to detention and deportation. In July of last year, the government required all hospitals to report COVID-19 infection data to HHS Protect, instead of the system operated by the Centers for Disease Control.

However, the public has little to no information about COVID-19 data collection and tracking, including on the more than 200 data sources included in HHS Protect. The plaintiffs in this case asked both the Department of Homeland Security and the Department of Health and Human Services for any records describing the data sources, as well as limits on the use of data collected and the duration of retention, but have yet to receive anything responsive to their requests. Without this information, the public cannot evaluate either the efficacy of these invasive technologies now or the risks they might pose in the future.

“Secrecy from the government is not helping us fight this pandemic. We’ve already seen how privacy fears have deterred some from getting important medical care for COVID,” said Steven Renderos, Executive Director of MediaJustice. “Yet the government is still withholding this information. If we can’t say with confidence what the government is doing, we have an uphill battle to protect public health. Immediate answers are essential.”

“We know that the government is collecting huge amounts of health data on us for the purported purpose of public health and combating COVID,” said Julie Mao, Deputy Director of Just Futures Law. “For example, we’ve seen a lot of location data gathered from mobile phones or contact tracing apps, but scientists have questioned the effectiveness of such mass surveillance at mitigating disease spread. The public has the right to know what sensitive information these agencies are collecting and to evaluate its utility.”

The lawsuit demands that the government immediately process the groups’ FOIA requests and make the records available to them.

"It's unacceptable that we have no idea how the HHS Protect platform is collecting data or how long it's holding it," said Jacinta Gonzalez, Senior Campaign Organizer with Mijente. "It's imperative that the public understands how personal data is being funnelled into large databases like this and how long that data is being stored. But it's especially critical here, because HHS has a history of sharing personal data with ICE for deportation purposes, to say nothing of the fact that the company that designed this platform, Palantir, is a well-known ICE contractor. The government's secrecy here is very alarming."

“The potential privacy and human rights impact of this data surveillance is deeply concerning,” said Mizue Aizeki, Interim Executive Director of the Immigrant Defense Project. “We cannot allow tech corporations and the government to take advantage of the pandemic to expand surveillance and policing powers. The Department of Health and Human Services is set to spend half a billion dollars on surveillance and data technologies in the coming months and years, so the time for answers is now.”

For the full complaint in Just Futures v DHS:
https://www.eff.org/document/mediajustice-v-dhs-covid-19-foia-complaint

Just Futures Law (JFL) is a women-of-color led transformative immigration law project rooted in movement lawyering. @justfutureslaw.

MediaJustice is dedicated to building a grassroots movement for a more just and participatory media—fighting for racial, economic, and gender justice in a digital age. MediaJustice boldly advances communication rights, access, and power for communities harmed by persistent dehumanization, discrimination and disadvantage. Home of the #MediaJusticeNetwork, we envision a future where everyone is connected, represented, and free.

Mijente Support Committee​ is a Latinx/Chicanx political, digital, and grassroots organizing hub. Launched in 2015, Mijente seeks to strengthen and increase the participation of Latino people in the broader movements for racial, economic, climate, and gender justice. @conmijente

The Immigrant Defense Project (IDP) works to secure fairness and justice for immigrants in the racialized U.S. criminal and immigration systems. IDP fights to end the current era of unprecedented mass criminalization, detention and deportation through a multi-pronged strategy including advocacy, litigation, legal support, community partnerships, and strategic communications. @ImmDefense.    

Contact:
Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org
Julie Mao, Deputy Director, Just Futures Law, julie@justfutureslaw.org
Rebecca Jeschke

EFF to First Circuit: Schools Should Not Be Policing Students’ Weekend Snapchat Posts

2 weeks 4 days ago

This blog post was co-written by EFF intern Haley Amster.

EFF filed an amicus brief in the U.S. Court of Appeals for the First Circuit urging the court to hold that under the First Amendment public schools may not punish students for their off-campus speech, including posting to social media while off campus.

The Supreme Court has long held that students have the same constitutional rights to speak in their communities as do adults, and this principle should not change in the social media age. In its landmark 1969 student speech decision, Tinker v. Des Moines Independent Community School District, the Supreme Court held that a school could not punish students for wearing black armbands at school to protest the Vietnam War. In a resounding victory for the free speech rights of students, the Court made clear that school administrators are generally forbidden from policing student speech except in a narrow set of exceptional circumstances: when (1) a student’s expression actually causes a substantial disruption on school premises; (2) school officials reasonably forecast a substantial disruption; or (3) the speech invades the rights of other students.

However, because Tinker dealt with students’ antiwar speech at school, the Court did not explicitly address the question of whether schools have any authority to regulate student speech that occurs outside of school. At the time, it may have seemed obvious that students can publish op-eds or attend protests outside of school, and that the school has no authority to punish students for that speech even if it’s highly controversial and even if other students talk about it in school the next day. As we argued in our amicus brief, the Supreme Court’s three student speech cases following Tinker all involved discipline related to speech that may reasonably be characterized as on-campus.

In the social media age, the line between off- and on-campus has been blurred. Students frequently engage in speech on the Internet outside of school, and that speech is then brought into school by students on their smartphones and other mobile devices. Schools are increasingly punishing students for off-campus Internet speech brought onto campus.

In our amicus brief, EFF urged the First Circuit to make clear that schools have no authority under Tinker to police students’ off-campus speech, including when that speech occurs on social media. The case, Doe v. Hopkinton, involves two public high school students, “John Doe” and “Ben Bloggs,” who were suspended for making comments in a private Snapchat group that their school considered to be bullying. Doe and Bloggs filed suit asserting that the school suspension violated their First Amendment rights.

The school made no attempt to show in the lower court that Doe and Bloggs sent the messages at issue while on campus, and the federal judge erroneously concluded that “it does not matter whether any particular message was sent from an on- or off-campus location.”

As we explained in our amicus brief, that conclusion was wrong. Tinker made clear that students’ speech is entitled to First Amendment protection, and authorized schools to punish student speech only in narrow circumstances to ensure the safety and functioning of the school. The Supreme Court has never authorized or suggested that public schools have any authority to reach into students’ private lives and punish them for their speech while off school grounds or after school hours.

This is exactly what another federal appeals court considering this question concluded last summer. In B.L. v. Mahanoy Area School District, a high school student who had failed to advance from junior varsity to the varsity cheerleading squad posted a Snapchat selfie over the weekend with text that said, among other things, “fuck cheer.” One of her Snapchat connections took a screen shot of the post and shared it with the cheerleading coaches, who suspended the student from participation in the junior varsity cheer squad.

The Third Circuit in Mahanoy made clear that the narrow set of circumstances established in Tinker where a school may regulate disruptive student speech applies only to speech uttered at school. As such, it held that schools have no authority to punish students for their off-campus speech—even when that speech “involves the school, mentions teachers or administrators, is shared with or accessible to students, or reaches the school environment.”

This conclusion is especially critical given that students use social media to engage in a wide variety of self-expression, political speech, and activism. As we highlighted in our amicus brief, this includes expressing dissatisfaction with their schools’ COVID-19 safety protocols, calling out instances of racism at schools, and organizing protests against school gun violence. It is essential that courts draw a bright line prohibiting schools from policing off-campus speech so that students can exercise their constitutional rights outside of school without fear that they might be punished for it come Monday morning.

Mahanoy is currently on appeal to the Supreme Court, which will consider the case this spring. We hope that the First Circuit and the Supreme Court will take this opportunity to reaffirm the free speech rights of public-school students and draw clear limits on schools’ ability to police students’ private lives.

Naomi Gilens

Speak Up for Real Privacy in Virginia

2 weeks 5 days ago

Last week, we raised the alarm about an empty privacy bill moving fast through the Virginia legislature. The bill, SB 1392, is supported by Microsoft and Amazon, and would set a dangerous standard for state privacy bills.

Take Action

Virginia: Speak Up for Real Privacy

The bill has passed through the House Committee on Technology, Communications, and Innovation and is headed to a floor vote in the House this week.

Thanks to your messages and the work of privacy and consumer advocates on the ground in Virginia, lawmakers have started to hear the message that privacy laws should protect people, not businesses. While they have made some small changes to the bill, such as a mandate to set up a working group to suggest ways to strengthen the bill, these changes are not nearly enough to protect the people of Virginia. It is much better to pass a strong bill than to pass a weak one with the hope of improving it, and we urge the legislature to hit pause on SB 1392 until it can be amended to offer real protections.

Now that people demanding privacy have the ear of the legislature, it’s time to speak up. Write to your delegates and demand real privacy in Virginia.

TAKE ACTION

VIRGINIA: SPEAK UP FOR REAL PRIVACY

Hayley Tsukayama

EFF to Patent Office: No New Design Patents

2 weeks 5 days ago

Design is incredibly important to how people use and choose products, but design patents are not. They provide exclusive rights to ornamental product features that, by definition, are not useful enough for a utility patent or creative enough for copyright; for features that are, those forms of protection already exist. As we’ve said before, we don’t need design patents: they give far too much power to those who give so little to the public, restricting far more creativity, innovation, and economic activity than they promote. Unfortunately, the Patent Office is preparing to grant even more.

To do that, the Patent Office is proposing regulations that would open the floodgates to unprecedented and unnecessary types of design patents on computer-generated imagery (CGI). Although the standards for CGI design patents are way too low already, the Patent Office wants to make them even lower by allowing patented designs on non-physical products, like websites, software applications, and holographic projections.

We have never allowed patents on designs untethered to physical products, and should not do so now. Design patent owners have the power to stop anyone else in this country from making, using, or selling what their patent covers. If companies can get patents on designs for non-physical products, like website banners, they will have the right to sue anyone whose website uses the same or similar features to demand payment or force them to stop. Given the exorbitant cost of litigation, companies with the resources to amass design patents will have massive power over what the web looks like for the rest of us. 

We should be especially cautious of expanding corporate power over computer graphics during a global pandemic when face-to-face communication is a public health risk. The last thing we need are more design patents restricting people’s ability to compete, create, and freely express themselves online. That is why EFF submitted comments urging the Patent Office not to take this unprecedented and perilous approach.

Extending design patent protection to digital images means unnecessarily extending protection to content that already gets ample protection under copyright and trademark law. Letting design patents intrude further into the realm of graphic design creates uniquely dangerous risks. When copyright applies, so do protections for fair uses under the First Amendment. But there are no such protections for the use of patented designs. That makes the extension of design patent protection a threat not only to technological innovation and competition, but also to creativity and free expression.

Despite these dangers, the Patent Office is proposing rules that will ensure we see more design patents and more patent litigation. The Office wants to change how it applies the part of the Patent Act which makes an “ornamental design for an article of manufacture” eligible for protection by effectively discarding the “article of manufacture” requirement altogether. For example, the Patent Office admiringly cited Singapore’s decision to eliminate a requirement that “a design must be applied to a physical article in order to be protected,” thus allowing patents on graphical user interface (GUI) designs applied to a “non-physical product.” But in the U.S., patents on designs for non-physical products have never been allowed.

Nor should they: granting new and unprecedented design rights would wreak havoc on the U.S. economy when it is already struggling to recover from the economic depression caused by the unrelenting COVID-19 pandemic. Now more than ever, people depend on computer technology and connectivity to work, learn, communicate with each other, and get essential products and services—from groceries to health care. We should not impose any additional restrictions on people’s ability to create, use, and communicate digital content.

In that respect, Singapore may not be the best example to draw from — after all, its law also includes content-based prohibitions on designs that do not align with public order or morals. If other countries are to serve as models, it would be better to look to those that better reflect the values of free expression and individual choice in their design regulations. One such model is Germany, where the law governing registered designs explicitly says a “computer program is not considered to be a product.”

As we’ve written before, former Director of the Patent Office Andrei Iancu worked overtime during his tenure to tilt the scales in favor of patent owners and against technologists, start-ups, and end-users. Although his departure from the office is a positive sign, it will take a lot of time and work to rebuild from the damage he inflicted. If this proposal is adopted, however, the damage will be more pervasive and difficult to fix.

We call on the Patent Office to reconsider—and abandon—this effort to expand design patent protection. Instead of lowering patentability standards, we should be empowering examiners to reject deficient design patent applications under existing law. Granting more and worse design patents will only encourage extortionate patent litigation and deter the innovation and economic activity the patent system is supposed to promote.


Alex Moss

Turkey’s Free Speech Clampdown Hits Twitter, Clubhouse -- But Most of All, The Turkish People

2 weeks 6 days ago

EFF has been tracking the Turkish government’s crackdown on tech platforms and its continuing efforts to force them to comply with draconian rules on content control and access to users’ data. The Turkish government has now managed to coerce Facebook, YouTube, and TikTok into appointing legal representatives to comply with the legislation by threatening their bottom line: prohibiting Turkish taxpayers from placing ads with and making payments to the platforms if they fail to appoint a representative. According to local news, Google has appointed a legal representative through a subsidiary in Turkey.

Of the major foreign social media platforms used in Turkey, only Twitter has neither appointed a local representative nor subjected itself to Turkish jurisdiction over its content and user policies. Coincidentally, Twitter has also been drawn into a series of moderation decisions that push the company into direct conflict with Turkish politicians. On February 2nd, Twitter decided that three tweets by Turkish Interior Minister Süleyman Soylu violated its hateful conduct and abusive behavior policies. It restricted access to the tweets rather than removing them, as it considered them still in the public interest. Similarly, Twitter removed a tweet by Devlet Bahçeli, leader of the MHP, the AKP’s coalition partner, in which he called student protestors “terrorists” and "poisonous snakes" “whose heads needed to be crushed”, because the tweet violated Twitter’s violent threats policy.

Yaman Akdeniz, a founder of the Turkish Freedom of Expression Association, told EFF:

“This is the first time Twitter deployed its policy on Turkish politicians while the company is yet to decide whether to have a legal representative in Turkey as required by Internet Social Media Law since October 2020.”

As in many other countries, politicians in Turkey are now angry at Twitter both for failing to sufficiently censor criticism of Turkish policies, and for sanctioning senior domestic political figures for their violations of the platform’s terms of service. 

By declining to appoint a local representative, and thereby attempting to avoid both forms of political pressure, Twitter is already paying a price. The Turkish regulator BTK has already imposed the first set of sanctions, forbidding Turkish taxpayers from paying for ads on Twitter. BTK can go further later this spring: starting in April 2021, it will be permitted to apply for sanctions against Twitter that include ordering ISPs to throttle the speed of Turkish users’ connections to Twitter, at first by 50% and subsequently by up to 90%. Throttling can make sites practically inaccessible, fortifying Turkey’s censorship machine and silencing speech -- a disproportionate measure that profoundly limits users’ ability to access online content within Turkey.

The Turkish Constitutional Court has overturned previous outright bans: on Wikipedia in 2019, and on Twitter and YouTube back in 2014. Even though the recent legislation “only” foresees throttling sites’ access speeds by 50% or 90%, this sanction aims to make sites unusable in practice and should be viewed by the Court the same way as an outright ban. Research on website usability has found that huge numbers of users lose patience with sites that are even slightly slower than they expect: delays of just “1 second” are enough to interrupt a person’s conscious thought process, and making users wait five or ten times as long would be catastrophic.
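To put rough numbers on that, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that page-load time is dominated by available bandwidth and that a typical page loads in about two seconds; it is not a measurement of BTK’s actual throttling. Under those assumptions, a 50% throttle roughly doubles the wait, and a 90% throttle stretches it tenfold.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not measurements):
# estimate how much longer a page takes to load when bandwidth is throttled,
# assuming load time is dominated by transfer speed.

def throttled_load_time(baseline_seconds: float, throttle_percent: float) -> float:
    """Load time when only (100 - throttle_percent)% of bandwidth remains."""
    remaining_fraction = 1 - throttle_percent / 100
    return baseline_seconds / remaining_fraction

baseline = 2.0  # hypothetical page that normally loads in ~2 seconds
for throttle in (50, 90):
    seconds = throttled_load_time(baseline, throttle)
    print(f"{throttle}% throttle: ~{seconds:.0f} seconds")

# Output:
# 50% throttle: ~4 seconds
# 90% throttle: ~20 seconds
```

Even under these generous assumptions, a 90% throttle pushes every page load far past the one-second threshold usability research flags as disruptive.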

But if the Turkish authorities think that throttling major platforms that refuse to comply with their orders will silence political conversation, they may have another problem. The new Internet Social Media law covers any social network provider that exceeds a “daily access” of one million. While the law is unclear as to what that figure means in practice, it wasn’t intended to cover smaller alternatives -- like Clubhouse, the new invitation-only, iOS-only audio-chat social networking app. Inevitably, with Twitter facing throttling and other services expected to have to comply with Turkish government demands, that is exactly where political conversations have shifted.

During the recent crackdown, Clubhouse has hosted Turkish groups every night until after midnight, where students, academics, journalists, and sometimes politicians join the conversations. For now, Turkish speech enforcement is falling back to other forms of intimidation. At least four students were recently taken into custody. Although the government said the arrests related to the students’ use of other social media platforms, the students believe that their Clubhouse activity was the only thing that distinguished them from thousands of others.

Clubhouse, like many other fledgling, general-purpose social media networks, has not accounted for its use as a platform by endangered voices. It has a loosely enforced real-names policy -- one of the reasons the students could be targeted by law enforcement. And as the Stanford Internet Observatory discovered, its design potentially allowed government actors or other network spies to collect private data on its users en masse.

Ultimately, while it’s the major tech companies who face legal sanctions and service interruptions under Turkey’s Social Media Law, it’s ordinary Turkish citizens who are really paying the price: whether through slower Internet services, navigating cowed social platforms, or physical arrest simply for speaking out online on platforms that cannot yet adequately protect them from their own government.

Katitza Rodriguez