California Has $7 Billion for Broadband. Sacramento May Just Sit On It

It’s hard to believe that, when Governor Newsom identifies a total of $7 billion for California’s legislature to spend on broadband access—a mix of state surplus dollars and federal rescue money to invest in broadband infrastructure—the legislature would do nothing.

It is hard to believe that, when handed an amount that would finance giving every single Californian a fiber connection to the Internet over the next five years, would allow the state to address an urgent broadband crisis worsened by the pandemic, and would give us a way to start ending the digital divide now, the legislature would rather waste time we can’t afford thinking it over.

But that is exactly what California’s legislature has proposed this week. Can you believe it?

TAKE ACTION

TELL YOUR LAWMAKERS TO SUPPORT THE GOVERNOR'S BROADBAND PLAN

Tucked away on page 12 of this 153-page budget document from the legislature this week is the following plan for Governor Newsom’s proposal to help connect every Californian to 21st-century access:

Broadband. Appropriates $7 billion over a multi-year period for broadband infrastructure and improved access to broadband services throughout the state. Details will continue to be worked out through three party negotiations. Administrative flexibilities will enable the appropriated funds to be accelerated to ensure they are available as needed to fund the expansion and improvements.

What this says is that the legislature wants to approve $7 billion for broadband infrastructure but does not want to authorize the governor to carry out his proposal any time soon.

There’s no excuse for this. Lawmakers have been given plenty of detail on this proposal, and anyone in the public would say we need action right now. This cannot be what passes in Sacramento next week as part of establishing California’s budget. At the very least, the legislature needs to give the Governor clear authority to begin the planning process of deploying public fiber infrastructure to all Californians. This is a long process, which requires feasibility studies, environmental assessments, contracting with construction crews, and setting up purchases of materials. All of this takes months before any construction can even start, and delaying even this first basic step pushes back the date we end the digital divide in California.

Wasting Time Risks Federal Money and Will Perpetuate the Digital Divide

Federal rescue dollars must be spent quickly, or they will be rescinded back to the federal government. Those are explicit rules from Congress and the Biden Administration attached to the rescue funds issued these last few months. Right now, there is a global rush for fiber broadband deployment that is putting a lot of pressure on the manufacturers and workforce that build fiber-optic cable. In other words, more and more of the world is catching on to what EFF stated years ago: 21st-century broadband access is built on fiber optics. Each day California sits out deploying this infrastructure pushes us further back in the queue and further delays actual construction.

Therefore, if Sacramento does not immediately authorize at least the planning phase of building out a statewide middle-mile open-access fiber network—along with empowering local governments, non-profits, and cooperatives to draft their own fiber plans to deploy last-mile connectivity—then we risk losing that valuable federal money. The state has a real opportunity, but only if it acts now, not months from now. California even has a chance to jump the line ahead of the rest of the country as Congress continues to debate its own broadband infrastructure plan.

For the state that made famous the image of little girls doing homework in fast-food parking lots because they lacked affordable, robust internet access at home, it is irresponsible to look at $7 billion and not start the process of solving the problem. That’s exactly what will happen if the California legislature doesn’t hear from you.

Call your Assemblymember and Senator now and demand they approve Governor Newsom’s broadband plan next week. This is the time to act on ending the digital divide, not to keep talking about it.

TAKE ACTION

TELL YOUR LAWMAKERS TO SUPPORT THE GOVERNOR'S BROADBAND PLAN

Ernesto Falcon

Organizing in the Public Interest: MusicBrainz

This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our first two parts or our introduction.

Last time, we saw how much of the early internet’s content was created by its users—and subsequently purchased by tech companies. By capturing and monopolizing this early data, these companies were able to monetize and scale this work faster than the network of volunteers that first created it for use by everybody. It’s a pattern that has happened many times in the network’s history: call it the enclosure of the digital commons. Despite this familiar story, the older public interest internet has continued to survive side-by-side with the tech giants it spawned: unlikely and unwilling to pull in the big investment dollars that could lead to accelerated growth, but also tough enough to persist in its own ecosystem. Some of these projects you’ve heard of—Wikipedia, or the GNU free software project, for instance. Some, because they fill smaller niches and aren’t visible to the average Internet user, are less well-known. The public interest internet fills the spaces between tech giants like dark matter, invisibly holding the whole digital universe together.

Sometimes, the story of a project’s switch to the commercial model is better known than its continuing existence in the public interest space. The notorious example in our third post was the commercialization of the publicly-built CD Database (CDDB): when a commercial offshoot of this free, user-built database, Gracenote, locked down access, forks like freedb and gnudb continued to offer the service free to its audience of participating CD users.

Gracenote’s co-founder, Steve Scherf, claimed that without commercial investment, CDDB’s free alternatives were doomed to “stagnation”. While alternatives like gnudb have survived, it’s hard to argue that either freedb or gnudb has innovated beyond its original goal of providing and collecting CD track listings. Then again, that’s exactly what they set out to do, and they’ve done it admirably for decades since.

But can innovation and growth take place within the public interest internet? CDDB’s commercialization parlayed its initial market into a variety of other music-based offerings, and the development of those products led to the company being purchased, at various points, by AV manufacturer Escient, Sony, Tribune Media, and most recently Nielsen. Each sale made money for its investors. Can a free alternative likewise build on its beginnings, instead of just preserving them for its original users?

MusicBrainz, a Community-Driven Alternative to Gracenote

Among the CDDB users thrown by its switch to a closed system in the 1990s was Robert Kaye. Kaye was a music lover and, at the time, a coder working on one of the earliest MP3 encoders and players at Xing. Now he and a small staff work full-time on MusicBrainz, a community-driven alternative to Gracenote. (Disclosure: EFF special advisor Cory Doctorow is on the board of MetaBrainz, the non-profit that oversees MusicBrainz.)

“We were using CDDB in our service,” he told me from his home in Barcelona. “Then one day, we received a notice that said you guys need to show our [Escient, CDDB’s first commercial owner] logo when a CD is looked up. This immediately screwed over blind users who were using a text interface of another open source CD player that couldn’t comply with the requirement. And it pissed me off because I’d typed in a hundred or so CDs into that database… so that was my impetus to start the CD index, which was the precursor to MusicBrainz.”

MusicBrainz has continued ever since to offer a CDDB-compatible CD metadata database, free for anyone to use. The bulk of its user-contributed data has been put into the public domain, and supplementary data—such as extra tags added by volunteers—is provided under a non-commercial, attribution license. 

Over time, MusicBrainz has expanded by creating other publicly available, free-to-use databases of music data, often as a fallback for when other projects commercialize and lock down. For instance, Audioscrobbler was an independent system that collected information on what music you listened to (no matter what platform you heard it on) and provided recommendations based on its users’ contributions, but under your control. It was merged into Last.fm, an early Spotify-like streaming service, which was then sold to CBS. When CBS seemed to be neglecting the “scrobbling” community, MusicBrainz created ListenBrainz, which re-implemented features that had been lost over time. The plan, says Kaye, is to create a similarly independent recommendation system.

While the new giants of Internet music—Spotify, Apple Music, Amazon—have been building closed machine-learning models to data-mine their users and their musical interests, MusicBrainz has been working in the open with Barcelona’s Pompeu Fabra University to derive new metadata from the MusicBrainz community’s contributions. Automatic deductions of genre, mood, beats-per-minute, and other information are added to the AcousticBrainz database for everyone to use. These algorithms learn from their contributors’ corrections, and the fixes they provide are added to the commonwealth of public data for everyone to benefit from.

MusicBrainz’ aspirations sound in synchrony with the early hopes of the Internet, and after twenty years, it appears to have proven that the Internet can support and expand a long-term public good, as opposed to a proprietary, venture capital-driven growth model. But what’s to stop the organization from going the same way as those other projects with their lofty goals? Kaye works full-time on MusicBrainz along with eight other employees: what’s to say they aren’t simply profiting from the wider unpaid community in the same way that larger companies like Google benefit from their users’ contributions?

MusicBrainz has some good old-fashioned, pre-Internet institutional protections. It is managed by a 501(c) non-profit, the MetaBrainz Foundation, which places some theoretical constraints on how it might be bought out. Another old Internet value is radical transparency, and the organization has that in spades. All of its financial transactions, from profit-and-loss sheets to employment costs to its server outlay and board meeting notes, are published online.

Another factor, says Kaye, is keeping a clear delineation between the work done by MusicBrainz’s paid staff and the work of the MusicBrainz volunteer community. “My team should work on the things that aren’t fun to work on. The volunteers work on the fun things,” he says. When you're running a large web service built on the contributions of a community, there’s no end of volunteers for interesting projects, but, as Kaye notes, “there's an awful lot of things that are simply not fun, right? Our team is focused on doing these things.” It helps that MetaBrainz, the foundation, hires almost exclusively from long-term MusicBrainz community members.

Perhaps MusicBrainz’s biggest defense against its own decline is the software (and data) licenses it uses for its databases and services. In the event of the organization’s separation from the desires of its community, all its composition and output—its digital assets, its institutional history—are laid out so that the community can clone its structure and create another, near-identical institution closer to its needs. The code is open source; the data is free to use; the radical transparency of the financial structures means that the organization itself can be reconstructed from scratch if need be.

Such forks are painful. Anyone who has recently watched the volunteer staff and community of Freenode, the distributed Internet Relay Chat (IRC) network, part ways with the network’s owner and start again at Libera.chat, will have seen this. Forks can be divisive in a community, and can be reputationally devastating to those who are abandoned by the community they claimed to lead and represent. MusicBrainz staff’s livelihood depends on its users in a way that even the most commercially sensitive corporation does not. 

It’s unlikely that a company would place its future viability so directly in the hands of its users. But it’s this self-imposed sword of Damocles hanging over Rob Kaye and his staff’s heads that fuels the communities’ trust in their intentions.

Where Does the Money Come From?

Open licenses, however, can also make it harder for projects to gather funding to persist. Where does MusicBrainz' money come from? If anyone can use their database for free, why don’t all their potential revenue sources do just that, free-riding off the community without ever paying back? Why doesn’t a commercial company reproduce what MusicBrainz does, using the same resources that a community would use to fork the project?

MusicBrainz’s open finances show that, despite those generous licenses, they’re doing fine. The project’s transparency lets us see that it brought in around $400K in revenue in 2020, and had $400K in costs (it experienced a slight loss, but other years have been profitable enough to make this a minor blip). The revenue comes as a combination of small donors and larger sponsors, including giants like Google, who use MusicBrainz’ data and pay for a support contract.

Given that those sponsors could free-ride, how does Kaye get them to pay? He has some unorthodox strategies (most famously, sending a cake to Amazon to get them to honor a three-year-old invoice), but the most common reason seems to be that an open database maintainer that is responsive to a wider community is also easier for commercial concerns to interface with, both technically and contractually. Technologists building out a music tool or service turn to MusicBrainz for the same reason as they might pick an open source project: it’s just easier to slot it into their system without having to jump through authentication hoops or begin negotiations with a sales team. Then, when a company forms around that initial hack, its executives eventually realize that they now have a real dependency on a project with whom they have no contractual or financial relationship. A support contract means that they have someone to call up if it goes down; a financial relationship means that it’s less likely to disappear tomorrow.
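
To make the “just slot it in” point concrete, here is a minimal sketch of the kind of integration described above: querying the public MusicBrainz web service (https://musicbrainz.org/ws/2/) from Python with the requests library. The endpoints and JSON format are MusicBrainz’s documented web service; the User-Agent string, search query, and printed fields are illustrative placeholders rather than anything prescribed by the project.

```python
# Minimal sketch: look up an artist and a few of their release groups from the
# public MusicBrainz web service. No API key or contract is needed; MusicBrainz
# asks only for a meaningful User-Agent and modest request rates.
import requests

HEADERS = {"User-Agent": "example-music-tool/0.1 (contact@example.com)"}  # placeholder
BASE = "https://musicbrainz.org/ws/2"

# Search for an artist by name and take the best match.
resp = requests.get(
    f"{BASE}/artist/",
    params={"query": "artist:Radiohead", "fmt": "json"},
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
artist = resp.json()["artists"][0]
print(artist["name"], artist["id"])  # the MusicBrainz ID (MBID) is the stable key

# Browse release groups for that artist using the MBID.
resp = requests.get(
    f"{BASE}/release-group/",
    params={"artist": artist["id"], "fmt": "json", "limit": 5},
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
for rg in resp.json()["release-groups"]:
    print(rg.get("first-release-date", "unknown"), rg["title"])
```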

Again, commercial alternatives may make the same offer, but while a public interest non-profit like MusicBrainz might vanish if it fails its community, or simply runs out of money, those other private companies may well have other reasons to exit their commitments to their customers. When Sony bought Gracenote, it was presumably partly so that it could support its products that used Gracenote’s databases. After Sony sold Gracenote, it ended up terminating its own use of the databases. Sony announced to its valued customers in 2019 that Sony Blu-ray and Home Theater products would no longer have CD and DVD recognition features. The same thing happened to Sony’s mobile Music app in 2020, which stopped being able to recognize CDs when it was cut off from Gracenote’s service. We can have no insight into these closed, commercial deals, but we can presume that Sony and Gracenote’s new owner could not come to an amicable agreement.

By contrast, if Sony had used MusicBrainz’ data, they would have been able to carry on regardless. They’d be assured that no competitor would buy out MusicBrainz from under them, or lock their products out of an advertised feature. And even if MusicBrainz the non-profit died, there would be a much better chance that an API-compatible alternative would spring up from the ashes. If it was that important, Sony could have supported the community directly. As it is, Sony paid $260 million for Gracenote. For their CD services, at least, they could have had a more stable service deal with MusicBrainz for $1500 a month.

Over two decades after the user rebellion that created it, MusicBrainz continues to tick along. Its staff is drawn from music fans around the world, and meets up every year at a conference paid for by the MetaBrainz Foundation. Its contributors know that they can always depend on its data staying free; its paying customers know that they can always depend on its data being usable in their products. MusicBrainz staff can be assured that they won’t be bought up by big tech, and they can see the budget that they have to work with.

It’s not perfect. A transparent non-profit that aspires to internet values can be as flawed as any other. MusicBrainz suffered a reputational hit last year when personal data leaked from its website, for instance. But by continuing to exist, even with such mistakes, and despite multiple economic downturns, it demonstrates that a non-profit dedicated to the public interest can thrive without stagnating or selling its users out.

But, but, but. While it’s good to know public interest services are successful in niche territories like music recognition, what about the parts of the digital world that really seem to need a more democratic, decentralized alternative—and yet notoriously lack them? Sites like Facebook, Twitter, and Google have not only built their empires from others’ data, they have locked their customers in, apparently with no escape. Could an alternative, public interest social network be possible? And what would that look like?

We'll cover these in a later part of our series. (For a sneak preview, check out the recorded discussions at “Reimagining the Internet”, from our friends at the Knight First Amendment Institute at Columbia University and the Initiative on Digital Public Infrastructure at the University of Massachusetts, Amherst, which explore in-depth many of the topics we’ve discussed here.)

Danny O'Brien

If Not Overturned, a Bad Copyright Decision Will Lead Many Americans to Lose Internet Access

This post was co-written by EFF Legal Intern Lara Ellenberg

In going after internet service providers (ISPs) for the actions of just a few of their users, Sony Music, other major record labels, and music publishing companies have found a way to cut people off of the internet based on mere accusations of copyright infringement. When these music companies sued Cox Communications, an ISP, the court got the law wrong. It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital internet access as ISPs start to cut off more and more customers to avoid massive damages.

EFF, together with the Center for Democracy & Technology, the American Library Association, the Association of College and Research Libraries, the Association of Research Libraries, and Public Knowledge filed an amicus brief this week urging the U.S. Court of Appeals for the Fourth Circuit to protect internet subscribers’ access to essential internet services by overturning the district court’s decision.

The district court agreed with Sony that Cox is responsible when its subscribers—home and business internet users—infringe the copyright in music recordings by sharing them on peer-to-peer networks. It effectively found that Cox didn’t terminate accounts of supposedly infringing subscribers aggressively enough. An earlier lawsuit found that Cox wasn’t protected by the Digital Millennium Copyright Act’s (DMCA) safe harbor provisions that protect certain internet intermediaries, including ISPs, if they comply with the DMCA’s requirements. One of those requirements is implementing a policy of terminating “subscribers and account holders … who are repeat infringers” in “appropriate circumstances.” The court ruled in that earlier case that Cox didn’t terminate enough customers who had been accused of infringement by the music companies.

In this case, the same court found that Cox was on the hook for the copyright infringement of its customers and upheld the jury verdict of $1 billion in damages—by far the largest amount ever awarded in a copyright case.

The District Court Got the Law Wrong

When an ISP isn’t protected by the DMCA’s safe harbor provision, it can sometimes be held responsible for copyright infringement by its users under “secondary liability” doctrines. The district court found Cox liable under both varieties of secondary liability—contributory infringement and vicarious liability—but misapplied both of them, with potentially disastrous consequences.

An ISP can be contributorily liable if it knew that a customer infringed on someone else’s copyright but didn’t take “simple measures” available to it to stop further infringement. Judge O’Grady’s jury instructions wrongly implied that because Cox didn’t terminate infringing users’ accounts, it failed to take “simple measures.” But the law doesn’t require ISPs to terminate accounts to avoid liability. The district court improperly imported a termination requirement from the DMCA’s safe harbor provision (which was already knocked out earlier in the case). In fact, the steps Cox took short of termination actually stopped most copyright infringement—a fact the district court simply ignored.

The district court also got it wrong on vicarious liability. Vicarious liability comes from the common law of agency. It holds that a person who is a step removed from copyright infringement (the “principal,” for example a flea market operator) can be held liable for the copyright infringement of their “agent” (for example, someone who sells bootleg DVDs at that flea market) when the principal had the “right and ability to supervise” the agent. In this case, the court decided that because Cox could terminate accounts accused of copyright infringement, it had the ability to supervise those accounts. But that’s not how other courts have ruled. For example, the Ninth Circuit decided in 2019 that Zillow was not responsible when some of its users uploaded copyrighted photos to real estate listings, even though Zillow could have terminated those users’ accounts. In reality, ISPs don’t supervise the Internet activity of their users. That would require a level of surveillance and control that users won’t tolerate, and that EFF fights against every day.

The consequence of getting the law wrong on secondary liability here, combined with the $1 billion damage award, is that ISPs will terminate accounts more frequently to avoid massive damages, and cut many more people off from the internet than is necessary to actually address copyright infringement.

The District Court’s Decision Violates Due Process and Harms All Internet Users

Not only did the decision get the law on secondary liability wrong, it also offends basic ideas of due process. In a different context, the Supreme Court decided that civil damages can violate the Constitution’s due process requirement when the amount is excessive, especially when it fails to consider the public interests at stake. In the case against Cox, the district court ignored both the fact that a $1 billion damages award is excessive, and that its decision will cause ISPs to terminate accounts more readily and, in the process, cut off many more people from the internet than necessary.

Having robust internet access is an important public interest, but when ISPs start over-enforcing to avoid having to pay billion-dollar damages awards, that access is threatened. Millions of internet users rely on shared accounts, for example at home, in libraries, or at work. If ISPs begin to terminate accounts more aggressively, the impact will be felt disproportionately by the many users who have done nothing wrong but only happen to be using the same internet connection as someone who was flagged for copyright infringement.

More than a year after the start of the COVID-19 pandemic, it's more obvious than ever that internet access is essential for work, education, social activities, healthcare, and much more. If the district court’s decision isn’t overturned, many more people will lose access in a time when no one can afford not to use the internet. That harm will be especially felt by people of color, poorer people, women, and those living in rural areas—all of whom rely disproportionately on shared or public internet accounts. And since millions of Americans have access to just a single broadband provider, losing access to a (shared) internet account essentially means losing internet access altogether. This loss of broadband access because of stepped-up termination will also worsen the racial and economic digital divide. This is not just unfair to internet users who have done nothing wrong, but also overly harsh in the case of most copyright infringers. Being effectively cut off from society when an ISP terminates your account is excessive, given the actual costs of non-commercial copyright infringement to large corporations like Sony Music.

It's clear that Judge O’Grady misunderstood the impact of losing Internet access. In a hearing on Cox’s earlier infringement case in 2015, he called concerns about losing access “completely hysterical,” and compared them to “my son complaining when I took his electronics away when he watched YouTube videos instead of doing homework.” Of course, this wasn’t a valid comparison in 2015 and it rightly sounds absurd today. That’s why, as the case comes before the Fourth Circuit, we’re asking the court to get the law right and center the importance of preserving internet access in its decision.

Mitch Stoltz

Supreme Court Overturns Overbroad Interpretation of CFAA, Protecting Security Researchers and Everyday Users

EFF has long fought to reform vague, dangerous computer crime laws like the CFAA. We're gratified that the Supreme Court today acknowledged that overbroad application of the CFAA risks turning nearly any user of the Internet into a criminal based on arbitrary terms of service. We remember the tragic and unjust results of the CFAA's misuse, such as the death of Aaron Swartz, and we will continue to fight to ensure that computer crime laws no longer chill security research, journalism, and other novel and interoperable uses of technology that ultimately benefit all of us.

EFF filed briefs both encouraging the Court to take today's case and urging it to make clear that violating terms of service is not a crime under the CFAA. In the first, filed alongside the Center for Democracy and Technology and New America’s Open Technology Institute, we argued that Congress intended to outlaw computer break-ins that disrupted or destroyed computer functionality, not anything that the service provider simply didn’t want to have happen. In the second, filed on behalf of computer security researchers and organizations that employ and support them, we explained that the broad interpretation of the CFAA puts computer security researchers at legal risk for engaging in socially beneficial security testing through standard security research practices, such as accessing publicly available data in a manner beneficial to the public yet prohibited by the owner of the data. 

Today's win is an important victory for users everywhere. The Court rightly held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Thus, “an individual ‘exceeds authorized access’ when he accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” Rejecting the Government’s reading, which would have allowed CFAA charges for any website terms of service violation, the Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. This means that private parties’ terms of service limitations on how you can use information, or for what purposes you can access it, are not criminally enforceable under the CFAA.

Read our detailed analysis of the decision here.

Related Cases: Van Buren v. United States
Andrew Crocker

The EU Commission Refuses to Let Go of Filters

The EU copyright directive has caused more controversy than any other proposal in recent EU history - and for good reason. In abandoning traditional legal mechanisms to tackle copyright infringement online, Article 17 (formerly Article 13) of the directive introduced a new liability regime for online platforms, supposedly in order to support creative industries, that will have disastrous consequences for users. In a nutshell: To avoid being held responsible for illegal content on their services, online platforms must act as copyright cops, bending over backwards to ensure infringing content is not available on their platforms. As a practical matter (as EFF and other user rights advocates have repeatedly explained), this means Article 17 is a filtering mandate.

But all was not lost - the EU Commission had an opportunity to stand up for users and independent creators by mitigating Article 17's threat to free expression. Unfortunately, it has chosen instead to stand up for a small but powerful group of copyright maximalists. 

The EU Commission's Guidance Document: Civil Society Concerns Take a Backseat

EU "Directives" are not automatically applicable laws. Once a directive is passed, EU member states must “transpose” them into national law. These transpositions are now the center of the fight against copyright upload filters. In several meetings of an EU Commission's Stakeholder Dialogue and through consultations developing guidelines for the application of Article 17 (which must be implemented in national laws by June 7, 2021) EFF and other civil society groups stressed that users' rights to free speech are not negotiable and must apply when they upload content, not during a later complaint stage.

The first draft of the guidance document seemed to recognize those concerns and prioritize user rights. But the final result, issued today, is disappointing. On the plus side, the EU Commission stresses that Article 17 does not mandate the use of specific technology to demonstrate "best efforts" to ensure users don't improperly upload copyright-protected content on platforms. However, the guidance document failed to state clearly that mandated upload filters undermine the fundamental rights protection of users. The EU Commission differentiates "manifestly" infringing uploads from other user uploads, but stresses the importance of rightsholders' blocking instructions, and the need to ensure they do not suffer "economic harm." And rather than focusing on how to ensure legitimate uses such as quotations or parodies, the Commission advises that platforms must give heightened attention to "earmarked" content. As a practical matter, that "heightened attention" is likely to require using filters to prevent users from uploading such content.

We appreciate that digital rights organizations had a seat at the stakeholder dialogue table, even though we were outnumbered by rightsholders from the music and film industries and representatives of big tech companies. And the guidance document contains a number of EFF suggestions for implementation, such as clarifying that specific technological solutions are not mandated, ensuring that smaller platforms are held to a lower standard of "best efforts", and respecting data protection law when interpreting Article 17. However, on the most crucial element - the risk of over-blocking of legitimate user content - the Commission simply describes "current market practices," including the use of content recognition technologies that inevitably over-block. Once again, user rights and exceptions take a backseat.

This battle to protect freedom of expression is far from over. Guidance documents are non-binding, and the EU Court of Justice will have the last say on whether Article 17 will lead to censorship and limit freedom of expression rights. Until then, national governments do not have discretion to transpose the requirements of Article 17 as they see fit, but an obligation to use the legislative leeway available to implement them in line with fundamental rights.

Christoph Schmon

Why Indian Courts Should Reject Traceability Obligations

Strong end-to-end encryption is under attack in India. The Indian government’s dangerous new online intermediary rules, which force messaging applications to track—and be able to identify—the originator of any message, are fundamentally incompatible with the privacy and security protections of strong encryption. Companies were obliged to comply with the mandate on May 25. Three petitions have been filed (Facebook; WhatsApp; Arimbrathodiyil) asking the Indian High Courts (in Delhi and Kerala) to strike down these rules.

The traceability provision—Rule 4(2) in the “Intermediary Guidelines and Digital Media Ethics Code” rules (English version starts at page 19)—was adopted by the Ministry of Electronics and Information Technology earlier this year. The rules require that any large social media intermediary that provides messaging “shall enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. (The Decryption Rules allow authorities to request the interception, monitoring, or decryption of any information generated, transmitted, received, or stored in any computer resource.)

The minister has claimed that the rules will “[not] impact the normal functioning of WhatsApp” and said that “the entire debate on whether encryption would be maintained or not is misplaced” because technology companies can still decide to use encryption—so long as they accept the “responsibility to find a technical solution, whether through encryption or otherwise” that permits traceability. WhatsApp strongly disagrees, writing that "traceability breaks end-to-end encryption and would severely undermine the privacy of billions of people who communicate digitally." 

The Indian government's assertion is bizarre, because the rules compel intermediaries to know information about the content of users’ messages that they currently don’t have, and that is currently protected by encryption. This legal mandate seeks to change WhatsApp’s security model and technology, and the government seems to assume that this shouldn’t matter to users and needn’t bother the companies.

That’s wrong. WhatsApp uses a privacy-by-design implementation that protects users’ secure communication by making a forwarded message indistinguishable, from the server’s point of view, from a new message. When a WhatsApp user forwards a message using the arrow, the forward is marked on the client side, but the fact that the message has been forwarded is not visible to the WhatsApp server. The traceability mandate would force WhatsApp to change the application to expose this information, which was previously invisible to it.
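
To illustrate why the forward information is invisible to the server, here is a simplified sketch (not WhatsApp’s actual protocol, which is built on the Signal protocol; the Fernet key, phone number, and message fields below are stand-ins for illustration). The point is architectural: the “forwarded” flag travels inside the encrypted payload, so the relaying server sees only an addressee and opaque ciphertext and cannot tell a forward from a new message.

```python
# Simplified illustration of end-to-end encrypted forwarding metadata.
# The "forwarded" flag lives inside the encrypted payload, which only the
# endpoints can read; the server relays addressee + ciphertext and nothing else.
import json
from cryptography.fernet import Fernet  # stand-in for real end-to-end encryption

endpoint_key = Fernet(Fernet.generate_key())  # key shared only by the two endpoints

def client_send(text, forwarded):
    """Build the envelope the server actually sees: routing info + ciphertext."""
    payload = json.dumps({"text": text, "forwarded": forwarded}).encode()
    return {"to": "+15551230000", "ciphertext": endpoint_key.encrypt(payload)}

new_msg = client_send("see you at 6", forwarded=False)
fwd_msg = client_send("see you at 6", forwarded=True)

# From the server's point of view, both envelopes are just an addressee plus
# random-looking bytes: there is no forwarded flag and no "first originator"
# field for the server to record.
print(new_msg["to"], fwd_msg["to"])
print(new_msg["ciphertext"][:16], fwd_msg["ciphertext"][:16])
```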

The Indian government also defended the rules by noting that legal safeguards restrict the process of gaining access to the identity of a person who originated a message, that such orders can only be issued for national security and serious crime investigations, and on the basis that “it is not any individual who can trace the first originator of the information.” However, messaging services do not know ahead of time which messages will or will not be subject to such orders; as WhatsApp has noted,

there is no way to predict which message a government would want to investigate in the future. In doing so, a government that chooses to mandate traceability is effectively mandating a new form of mass surveillance. To comply, messaging services would have to keep giant databases of every message you send, or add a permanent identity stamp—like a fingerprint—to private messages with friends, family, colleagues, doctors, and businesses. Companies would be collecting more information about their users at a time when people want companies to have less information about them.  

India's legal safeguards will not solve the core problem:

  • The rules represent a technical mandate for companies to re-engineer or re-design their systems for every user, not just for criminal suspects.
  • The overall design of messaging services must change to comply with the government's demand to identify the originator of a message. Such changes move companies away from the privacy-focused engineering and data minimization principles that should characterize secure private messaging apps.

This provision is one of many features of the new rules that pose a threat to expression and privacy online, but it’s drawn particular attention because of the way it comes into collision with end-to-end encryption. WhatsApp previously wrote:

“Traceability” is intended to do the opposite by requiring private messaging services like WhatsApp to keep track of who-said-what and who-shared-what for billions of messages sent every day. Traceability requires messaging services to store information that can be used to ascertain the content of people’s messages, thereby breaking the very guarantees that end-to-end encryption provides. In order to trace even one message, services would have to trace every message.
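
To see why, consider a hypothetical sketch of what even a bare-bones compliance scheme implies (the names and schema here are illustrative, not drawn from any real system). To name the “first originator” of whatever content a court flags months later, the service has to fingerprint and log every message from every user at the time it is sent; and because the service cannot read end-to-end encrypted content, that fingerprint would have to be computed and attached on the client side, becoming exactly the kind of permanent identity stamp WhatsApp describes.

```python
# Hypothetical sketch of a traceability database, for illustration only.
# Answering "who first sent this content?" for an arbitrary future order means
# recording a fingerprint of every message relayed, for every user, all the time.
import hashlib

first_originator = {}  # message fingerprint -> first sender seen

def relay(sender, plaintext):
    """Called for every message from every user, not just criminal suspects."""
    fingerprint = hashlib.sha256(plaintext).hexdigest()
    first_originator.setdefault(fingerprint, sender)  # record only the first sender

def answer_order(plaintext):
    """Respond to an order asking who originated this content."""
    return first_originator.get(hashlib.sha256(plaintext).hexdigest())

relay("alice", b"protest at the square, 5pm")
relay("bob", b"protest at the square, 5pm")        # Bob merely forwarded it
print(answer_order(b"protest at the square, 5pm"))  # -> 'alice'
```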

Rule 4(2) applies to WhatsApp, Telegram, Signal, iMessage, or any “significant social media intermediaries” with more than 5 million registered users in India. It can also apply to federated social networks such as Mastodon or Matrix if the government decides these pose a “material risk of harm” to national security (rule 6). Free and open-source software developers are also afraid that they’ll be targeted next by this rule (and other parts of the intermediary rules), including for developing or operating more decentralized services. So Facebook and WhatsApp aren’t the only ones seeking to have the rules struck down; a free software developer named Praveen Arimbrathodiyil, who helps run community social networking services in India, has also sued, citing the burdens and risks of the rules for free and open-source software and not-for-profit communications tools and platforms.

This fight is playing out across the world. EFF has long said that end-to-end encryption, where intermediaries do not know the content of users’ messages, is a vitally important feature for private communications, and has criticized tech companies that don’t offer it or that offer it in a watered-down or confusing way. End-to-end messaging encryption is something WhatsApp is doing right—following industry best practices on how to protect users—and the government should not try to take this away.

Katitza Rodriguez

PayPal Shuts Down Long-Time Tor Supporter with No Recourse

Larry Brandt, a long-time supporter of internet freedom, used his nearly 20-year-old PayPal account to put his money where his mouth is. His primary use of the payment system was to fund servers to run Tor nodes, routing internet traffic in order to safeguard privacy and avoid country-level censorship. Now Brandt’s PayPal account has been shut down, leaving many questions unanswered and showing how financial censorship can hurt the cause of internet freedom around the world.

Brandt first discovered his PayPal account was restricted in March of 2021. Brandt reported to EFF: “I tried to make a payment to the hosting company for my server lease in Finland.  My account wouldn't work. I went to my PayPal info page which displayed a large vertical banner announcing my permanent ban. They didn't attempt to inform me via email or phone—just the banner.”

Brandt was unable to get the issue resolved directly through PayPal, so he reached out to EFF.

For years, EFF has been documenting instances of financial censorship, in which payment intermediaries and financial institutions shutter accounts and refuse to process payments for people and organizations that haven’t been charged with any crime. Brandt shared months of PayPal transactions with the EFF legal team, and we reviewed his transactions in depth. We found no evidence of wrongdoing that would warrant shutting down his account, and we communicated our concerns to PayPal. Given that the overwhelming majority of transactions on Brandt’s account were payments for servers running Tor nodes, EFF is deeply concerned that Brandt’s account was targeted for shut down specifically as a result of his activities supporting Tor. 

We reached out to PayPal for clarification, to urge them to reinstate Brandt’s account, and to educate them about Tor and its value in promoting freedom and privacy globally. PayPal denied that the shutdown was related to the concerns about Tor, claiming only that “the situation has been determined appropriately” and refusing to offer a specific explanation. After several weeks, PayPal has still refused to reinstate Brandt’s account.

The Tor Project echoed our concerns, saying in an email: “This is the first time we have heard about financial persecution for defending internet freedom in the Tor community. We're very concerned about PayPal’s lack of transparency, and we urge them to reinstate this user’s account. Running relays for the Tor network is a daily activity for thousands of volunteers and relay associations around the world. Without them, there is no Tor—and without Tor, millions of users would not have access to the uncensored internet.”

One of the particularly concerning elements of Brandt’s situation is how automated his account shutdown was. After his PayPal account was shuttered, Brandt attempted to reach out to PayPal directly. As he explained to EFF: “I tried to contact them many times by email and phone. PayPal never responded to either. They have an online 'Resolution Center' but I never had a dialog with anyone there either.” The PayPal terms reference the Resolution Center as an option, but assert that PayPal has no obligation to disclose details to its users.

Many online service providers make it difficult or impossible for users to reach a human to resolve a problem with their services. That’s because employing people to resolve these issues often costs more than the companies stand to gain by reinstating wrongfully banned accounts. Internet companies just aren’t incentivized to care about customer service. But while it may serve companies’ bottom lines to automate account shutdowns and avoid human interaction, the experience for individual users is deeply frustrating.

EFF, along with the ACLU of Northern California, New America’s Open Technology Institute, and the Center for Democracy and Technology, has endorsed the Santa Clara Principles, which attempt to guide companies in centering human rights in their decisions to ban users or take down content. In particular, the third principle is that “Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.” Our advocacy has already pressured companies like Facebook, Twitter, and YouTube to endorse the Santa Clara Principles—but so far, PayPal has not. Brandt’s account was shut down without notice, he was given no opportunity to appeal, and he was given no clarity on what actions resulted in his account being shut down, nor on whether this was a violation of PayPal’s terms and, if so, of which part of those terms.

We are concerned about situations such as Brandt’s not only because of the harm and inconvenience caused to one user, but because of the societal harms from patterns of account closures. When a handful of online payment services can dictate who has access to financial services, they can also determine which people and which services get to exist in our increasingly digital world.  While tech giants like Google and Facebook have come under fire for their content moderation practices and wrongfully banning accounts, financial services haven’t gotten the same level of scrutiny.

But if anything, financial intermediaries should be getting the most scrutiny. Access to financial services directly impacts one’s ability to survive and thrive in modern society, and is the only way that most websites can process payments. We’ve seen the havoc that financial censorship can wreak on online booksellers, music sharing sites, and the whistleblower website Wikileaks. PayPal has already made newsworthy mistakes, like automatically freezing accounts that have transactions that mention words such as “Syria.” In that case, PayPal temporarily froze the account of News Media Canada over an article about Syrian refugees that was entered into their annual awards competition.

EFF is calling on PayPal to do better by its customers, and that starts with embracing the Santa Clara Principles. Specifically, we are calling on PayPal to:

  • Publish a transparency report. A transparency report would indicate how many accounts PayPal shuts down in response to government requests, and we’d urge PayPal to additionally indicate how many accounts it shuts down for other reasons, including terms of service violations, as well as how many Suspicious Activity Reports it files. Other online financial services, including most recently Coinbase, have already begun publishing transparency reports, and there’s no reason PayPal can’t do the same.
  • Provide meaningful notice to users. If PayPal chooses to shut down someone’s account, it should provide detailed guidance about which aspect of PayPal’s terms was violated or why the account was shut down, unless forbidden from doing so by a legal prohibition or in cases of suspected account takeover. This is a powerful mechanism for holding companies back from over-reliance on automated account suspensions.
  • Adopt a meaningful appeal process. If a user’s PayPal account is shut down, they should have an opportunity to appeal to a person who was not involved in the initial decision to shut down the account.

Brandt agreed that part of the problem boils down to PayPal failing to prioritize the experience of users: “Good customer service and common sense would have suggested that they call me and discuss my PayPal activities or at least send me an email to tell me to stop. Then the company would be better equipped to make an informed decision about banning. But I think customer service is not so much in their best interests.”

Increased transparency into the patterns of financial censorship will help human rights advocates analyze patterns of abuse among financial intermediaries, and scrutiny from civil society can act as a balancing force against companies that are otherwise not incentivized to keep accounts open. For every example such as Brandt’s, in which a financial account was summarily shuttered without any opportunity to appeal, there are likely countless others that EFF doesn’t hear about or have an opportunity to document.

For now, Brandt is not backing down. While he can’t use PayPal anymore, he’s still committed to supporting the Tor network by continuing to pay for servers around the world using alternative means, and he urges other people to think about what they can do to help support Tor in the future: “Tor is of critical importance for anyone requiring anonymity of location or person….I'm talking about millions of people in China, Iran, Syria, Belarus, etc. that wish to communicate outside their country but have prohibitions against such activities.  We need more incentives to add to the Tor project, not fewer.” For answers to many common questions about relay operation and the law, see the EFF Tor Legal FAQ.  

rainey Reitman

Your Avatar is You, However You See Yourself, and You Should Control Your Experience and Your Data

Virtual worlds are increasingly providing sophisticated, realistic, and often immersive experiences that are the stuff of fantasy. You enter them by generating an avatar: a representation of the user that could take the form of an animal, a superhero, or a historical figure, each some version of yourself or of the image you’d like to project. You can often choose to express yourself by customizing your character. For many, avatar customization is key to a satisfying and immersive gameplay or online experience. Avatars used to be relatively crude, even cartoonish representations, but they are becoming increasingly life-like, with nuanced facial expressions backed by a wealth of available emotes and actions. Most games and online spaces now offer at least a few options for choosing your avatar, with some providing in-depth tools to modify every aspect of your digital representation.

There is a broad array of personal and business applications for these avatars as well, from digital influencers, celebrities, and customer service representatives to your digital persona in the virtual workplace. Virtual reality and augmented reality promise to take avatars to the next level, allowing the avatar’s movement to mirror the user’s gestures, expressions, and physicality.

The ability to customize how you want to be perceived in a virtual world can be incredibly empowering. It enables embodying rich personas to fit the environment and the circumstances, or adopting a mask to shield your privacy and personal self from what you wish to make public. You might use one persona for gaming, another in a professional setting, and a third for a private space with your friends.

An avatar can help someone remove constraints imposed on them by wider societal biases. For example, trans and gender non-conforming individuals can more accurately reflect their true selves, relieving the effects of gender dysphoria and transphobia, which has been shown to have therapeutic benefits. For people with disabilities, avatars can allow them to pursue unique activities through which they can meet and interact with others. In some cases, avatars can help avoid harassment: for example, researchers found that some women choose male avatars to avoid misogyny in World of Warcraft.

Facebook, which owns Oculus VR and is investing heavily in AR, has highlighted its technical progress in a Facebook Research project called Codec Avatars. The Codec Avatars research project focuses on ultra-realistic avatars, potentially modeled directly on users’ bodies and replicating the user’s voice, movements, and likeness, looking to power the ‘future of connection’ with avatars that enable what Facebook calls ‘social presence’ on its VR platform.

Social presence combines the telepresence aspect of a VR experience with the social element of being able to share the experience with other people. In order to deliver what Facebook envisions as an “authentic social connection” in virtual reality, you have to pass the mother test: ‘your mother has to love your avatar before the two of you feel comfortable interacting as you would in real life’, as Yaser Sheikh, Director of Research at Facebook Reality Labs, put it.

While we’d hope your mother would love whatever avatar you make, Facebook seems to mean that Codec Avatars are striving to be indistinguishable from their human counterparts: a “picture-perfect representation of the sender’s likeness” that has the “unique qualities that make you instantly recognizable,” captured by a full-body scan and animated by egocentric surveillance. While some may prefer exact replicas like these, the project is not yet embracing a future that allows people the freedom to be whoever they want to be online.

By contrast, Epic Games has introduced MetaHumans, which also allows lifelike animation techniques via its Unreal Engine and motion capture, but does not require a copy of the user. Instead, it allows the user the choice to create and control how they appear in virtual worlds.

Facebook’s plan is to verify Codec Avatar users “through a combination of user authentication, device authentication, and hardware encryption,” and it is “exploring the idea of securing future avatars through an authentic account.” This obsession with authenticated, perfect replicas mirrors Facebook’s controversial history of insisting on “real names”, later loosened somewhat to allow “authentic names,” without resolving the inherent problems. Indeed, Facebook’s insistence on tying your Oculus account to your Facebook account (and its authentic name) already brings these policies together, for the worse. If Facebook insists on indistinguishable avatars, tied to a Facebook “authentic” account, in its future of social presence, this will put the names policy on steroids.

Facebook should respect the will of individuals not to disclose their real identities online.

Until the end of next year, Oculus still allows existing users to keep a separate unique VR profile to log in, which does not need to be your Facebook name. With Facebook login, users can still set their name as visible to ‘Only Me’ in Oculus settings, so that at least people on Oculus won’t be able to find you by your Facebook name.  But this is a far cry from designing an online identity system that celebrates the power of avatars to enable people to be who they want to be.

Lifelike Avatars and Profiles of Faces, Bodies, and Behavior on the Horizon

A key part of realistic avatars is mimicking the body and facial expressions of the user, derived from collecting your non-verbal and body cues (the way you frown, tilt your head, or move your eyes) as well as your body structure and motion. Facebook’s Modular Codec Avatar system seeks to make “inferences about what a face should look like” to construct authentic simulations, compared to the original Codec Avatar system, which relied more on direct comparison with a person.

While still a long way from the hyper-realistic Codec Avatar project, Facebook has recently announced a substantial step down that path, rolling out avatars with real-time animated gestures and expressions for Facebook’s virtual reality world, Horizon. These avatars will later be available for other Quest app developers to integrate into their own work.

Facebook's Codec Avatar research suggests that it will eventually require a lot of sensitive information about its users’ faces and body language: both their detailed physical appearance and structure (to recognize them for authentication applications and to produce photorealistic avatars, including full-body avatars) and their moment-to-moment emotions and behaviors, captured in order to replicate them realistically in real time in a virtual or augmented social setting.

While this technology is still in development, the inferences coming from these egocentric data collection practices require even stronger human rights protections. Algorithmic tools can leverage the platform’s intimate knowledge of its users, assembled from thousands of seemingly unrelated data points, and draw inferences from both individual and collective behavior.

Research using unsophisticated cartoon avatars suggested that avatars can accurately convey some personality traits of the user. Animating hyper-realistic avatars of natural persons, such as Facebook’s Codec Avatars, will require collecting much more personal data. Think of it like walking around strapped to a dubious lie detector that measures your temperature, body responses, and heart rate as you go about your day.

Inferences based on egocentric collection of data about users’ emotions, attention, likes, and dislikes give platforms the power to control what your virtual vision sees, how your virtual body looks, and how your avatar can behave. While wearing your headset, you will see the 3D world through a lens made by those who control the infrastructure.

Realistic Avatars Require Robust Security

Hyper-realistic avatars also raise concerns about “deep fakes”. Right now, deep fakes involving a synthetic video or audio “recording” may be mistaken for a real recording of the people it depicts. The unauthorized use of an avatar could also be confused with the real person it depicts. While any avatar, realistic or not, may be driven by a third party, hyper-realistic avatars, with human-like expressions and gestures, can more easily build trust.  Worse, in a dystopian future, realistic avatars of people you know could be animated automatically, for advertising or influencing opinion. For example, imagine an uncannily convincing ad where hyper-realistic avatars of your friends swoon over a product, or where an avatar of your crush tells you how good you’ll look in a new line of clothes.  More nefariously, hyper-realistic avatars of familiar people could be used for social engineering, or to draw people down the rabbit hole of conspiracy theories and radicalization.

‘Deep fake’ issues with a third party independently making a realistic fake depiction of a real person are well covered by existing law. The personal data captured to make ultra-realistic avatars, which is not otherwise readily available to the public, should not be used to act out expressions or interactions that people did not actually consent to present. To protect against this and put the user in charge of their experience, users must have strong security measures around the use of their accounts, what data is collected and how this data is used

A secure system for authentication does not require a verified match to one’s offline self. For some, of course, a verification linked to an offline identity may be valuable, but for others, the true value may lay in a way to connect without revealing their identity. Even if a user is presenting differently from their IRL body, they may still want to develop a consistent reputation and goodwill with their avatar persona, especially if it is used across a range of experiences. This important security and authentication can be provided without requiring a link to an authentic name account, or verification that the avatar presented matches the offline body. 

For example, the service could verify if the driver of the avatar was the same person who made it, without simultaneously revealing who the driver was offline. With appropriate privacy controls and data use limitations, a VR/AR device is well-positioned to verify the account holder biometrically, and thereby verify a consistent driver, even if that was not matched to an offline identity. 

Transparency and User Control Are Vital for the Avatars of the Virtual World

In the era of life-like avatars, it is even more important for users to have transparency and control from companies on the algorithms that underpin why their avatar will behave in specific ways, and to provide strong users control over the use of inferences. 

Facebook’s Responsible Innovation Principles, which allude to more transparency and control, are an important first step, but they remain incomplete and flawed. The first principle (“Never surprise people”)  fortunately implies greater transparency moving forward. Indeed, many of the biggest privacy scandals have stemmed from people being surprised by unfair data processing practices, even if the practice had been included in a privacy policy.  Simply informing people of your data practices, even if effectively done, does not ensure that the practices are good ones. 

Likewise, the second principle (“Provide controls that matter”) does not necessarily ensure that you as a user will have the controls over everything you think matters. One might debate over what falls into the category of things that “matter” enough to have controls, like the biometric data collected or the inferences generated by the service, or the look of one’s avatar.   This is particularly important when there can be so much data collected in a life-like avatar, and raises critical questions on how it could be used, even as the tech is in its infancy.  For example, if the experience requires an avatar that's designed to reflect your identity, what is at stake inside the experience is your sense of self. The platform won't just control the quality of the experience you observe (like watching a movie), but rather control an experience that has your identity and sense of self at its core. This is an unprecedented ability to potentially produce highly tailored forms of psychological manipulation according to your behavior in real-time.

Without strong user controls, social VR platforms or third-party developers may be tempted to use this data for other purposes, including psychological profiling of users’ emotions, interests, and attitudes, such as detecting nuances of how people feel about particular situations, topics, or other people.  It could be used to make emotionally manipulative content that subtly mirrors the appearance or mannerisms of people close to us, perhaps in ways we can’t quite put our fingers on.  

Data protection laws, like the GDPR, require that personal data collected for a specific purpose (like making your avatar more emotionally realistic in a VR experience) should not be used for other purposes (like calibrating ads to optimize your emotional reactions to them or mimicking your mannerisms in ads shown to your friends). 

While Facebook’s VR/AR policies for third-party developers prevent them, and rightly so, from using Oculus user data for marketing or advertising, among other things, including performing or facilitating surveillance for law enforcement purposes (without a valid court order), attempting to identify a natural person and combining user data with data from a third-party; the company has not committed to these restrictions, or to allowing strong user controls, on its own uses of data. 

Facebook should clarify and expand upon their principles, and confirm they understand that transparency and controls that “matter” include transparency about and control over not only the form and shape of the avatar but also the use or disclosure of the inferences the platform will make about users (their behavior, emotions, personality, etc.), including the processing of personal data running in the background. 

We urge Facebook to give users control and put people in charge of their experience. The notion that people must replicate their physical forms online to achieve the “power of connection,” fails to recognize that many people wish to connect in a variety of ways– including the use of different avatars to express themselves. For some, their avatar may indeed be a perfect replica of their real-world bodies. Indeed, it is critical for inclusion to allow avatar design options that reflect the diversity of users.  But for others, their authentic self is what they’ve designed in their minds or know in their hearts. And are finally able to reflect in glorious high resolution in a virtual world. 



.

Kurt Opsahl

Your Avatar is You, However You See Yourself, and You Should Control Your Experience and Your Data

1 week 3 days ago

Virtual worlds are increasingly providing sophisticated, realistic, and often immersive experiences that are the stuff of fantasy. You can enter them by generating an avatar: a representation of the user that could take the form of an animal, a superhero, a historic figure, or some version of yourself or the image you’d like to project. You can often choose to express yourself by selecting how to customize your character. For many, avatar customization is key to a satisfying and immersive gameplay or online experience. Avatars used to be relatively crude, even cartoonish representations, but they are becoming increasingly life-like, with nuanced facial expressions backed by a wealth of available emotes and actions. Most games and online spaces now offer at least a few options for choosing your avatar, with some providing in-depth tools to modify every aspect of your digital representation.

There is a broad array of personal and business applications for these avatars as well, from digital influencers, celebrities, and customer service representatives to your digital persona in the virtual workplace. Virtual reality and augmented reality promise to take avatars to the next level, allowing the avatar’s movement to mirror the user’s gestures, expressions, and physicality.

The ability to customize how you want to be perceived in a virtual world can be incredibly empowering. It enables embodying rich personas to fit the environment and the circumstances, or adopting a mask to shield your privacy and personal self from what you wish to make public. You might use one persona for gaming, another for a professional setting, and a third for a private space with your friends.

An avatar can help someone remove constraints imposed on them by wider societal biases. For example, trans and gender non-conforming individuals can more accurately reflect their true selves, relieving the effects of gender dysphoria and transphobia, which has shown therapeutic benefits. For people with disabilities, avatars can open up activities through which they can meet and interact with others. In some cases, avatars can help avoid harassment. For example, researchers found that some women choose male avatars to avoid misogyny in World of Warcraft.

Facebook, which owns Oculus VR and is investing heavily in AR, has highlighted its technical progress on a Facebook Research project called Codec Avatars. The Codec Avatars project focuses on ultra-realistic avatars, potentially modeled directly on users’ bodies and capturing the user’s voice, movements, and likeness, looking to power the ‘future of connection’ with avatars that enable what Facebook calls ‘social presence’ in its VR platform.

Social presence combines the telepresence aspect of a VR experience and the social element of being able to share the experience with other people. In order to deliver what Facebook envisions as “authentic social connection” in virtual reality, you have to pass the mother test: ‘your mother has to love your avatar before the two of you feel comfortable interacting as you would in real life,’ as Yaser Sheikh, Director of Research at Facebook Reality Labs, put it.

While we’d hope your mother would love whatever avatar you make, Facebook seems to mean that Codec Avatars are striving to be indistinguishable from their human counterparts: a “picture-perfect representation of the sender’s likeness” that has the “unique qualities that make you instantly recognizable,” as captured by a full-body scan and animated by egocentric surveillance. While some may prefer exact replicas like these, the project is not yet embracing a future that allows people the freedom to be whoever they want to be online.

By contrast, Epic Games has introduced MetaHumans, which also allows lifelike animation techniques via its Unreal Engine and motion capture, but does not require a copy of the user. Instead, it allows the user the choice to create and control how they appear in virtual worlds.

Facebook’s plan for Codec Avatars is to verify users “through a combination of user authentication, device authentication, and hardware encryption,” and it is “exploring the idea of securing future avatars through an authentic account.” This obsession with authenticated perfect replicas mirrors Facebook’s controversial history of insisting on “real names,” later loosened somewhat to allow “authentic names,” without resolving the inherent problems. Indeed, Facebook’s insistence on tying your Oculus account to your Facebook account (and its authentic name) already brings these policies together, for the worse. If Facebook insists on indistinguishable avatars, tied to a Facebook “authentic” account, in its future of social presence, this will put the names policy on steroids.

Facebook should respect the will of individuals not to disclose their real identities online.

Until the end of next year, Oculus will still allow existing users to log in with a separate, unique VR profile, which does not need to be your Facebook name. With Facebook login, users can still set their name as visible to ‘Only Me’ in the Oculus settings, so that at least people on Oculus won’t be able to find you by your Facebook name. But this is a far cry from designing an online identity system that celebrates the power of avatars to enable people to be who they want to be.

Lifelike Avatars and Profiles of Faces, Bodies, and Behavior on the Horizon

A key part of realistic avatars is mimicking the body and facial expressions of the user, derived from collecting your non-verbal and body cues (the way you frown, tilt your head, or move your eyes) and your body structure and motion. Facebook’s Modular Codec Avatar system seeks to make “inferences about what a face should look like” to construct authentic simulations, compared to the original Codec Avatar system, which relied more on direct comparison with a person.

While still a long way from the hyper-realistic Codec Avatar project, Facebook has recently announced a substantial step down that path, rolling out avatars with real-time animated gestures and expressions for Facebook’s virtual reality world, Horizon. These avatars will later be available for other Quest app developers to integrate into their own work.

Facebook's Codec Avatar research suggests that it will eventually require a lot of sensitive information about its users’ faces and body language: both their detailed physical appearance and structure (to recognize you for authentication applications and to produce photorealistic avatars, including full-body avatars), and their moment-to-moment emotions and behaviors, captured in order to replicate them realistically in real time in a virtual or augmented social setting.

While this technology is still in development, the inferences coming from these egocentric data collection practices require even stronger human rights protections. Algorithmic tools can leverage the platform’s intimate knowledge of its users, assembled from thousands of seemingly unrelated data points and inferences drawn from both individual and collective behavior.

Research using unsophisticated cartoon avatars suggests that even simple avatars can accurately reveal some personality traits of the user. Animating hyper-realistic avatars of natural persons, such as Facebook’s Codec Avatars, will require collecting much more personal data. Think of it like walking around strapped to a dubious lie detector that measures your temperature, body responses, and heart rate as you go about your day.

Inferences based on egocentric collection of data about users’ emotions, attention, likes, or dislikes give platforms the power to control what your virtual vision sees, how your virtual body looks, and how your avatar can behave. While wearing your headset, you will see the 3D world through a lens made by those who control the infrastructure.

Realistic Avatars Require Robust Security

Hyper-realistic avatars also raise concerns about “deep fakes.” Right now, a deep fake involving a synthetic video or audio “recording” may be mistaken for a real recording of the person it depicts. The unauthorized use of an avatar could likewise be confused with the real person it depicts. While any avatar, realistic or not, may be driven by a third party, hyper-realistic avatars, with human-like expressions and gestures, can more easily build trust. Worse, in a dystopian future, realistic avatars of people you know could be animated automatically, for advertising or influencing opinion. For example, imagine an uncannily convincing ad where hyper-realistic avatars of your friends swoon over a product, or where an avatar of your crush tells you how good you’ll look in a new line of clothes. More nefariously, hyper-realistic avatars of familiar people could be used for social engineering, or to draw people down the rabbit hole of conspiracy theories and radicalization.

‘Deep fake’ issues with a third party independently making a realistic fake depiction of a real person are well covered by existing law. The personal data captured to make ultra-realistic avatars, which is not otherwise readily available to the public, should not be used to act out expressions or interactions that people did not actually consent to present. To protect against this and put the user in charge of their experience, users must have strong security measures around the use of their accounts, what data is collected, and how this data is used.

A secure system for authentication does not require a verified match to one’s offline self. For some, of course, a verification linked to an offline identity may be valuable, but for others, the true value may lie in a way to connect without revealing their identity. Even if a user is presenting differently from their IRL body, they may still want to develop a consistent reputation and goodwill with their avatar persona, especially if it is used across a range of experiences. This important security and authentication can be provided without requiring a link to an authentic name account, or verification that the avatar presented matches the offline body.

For example, the service could verify that the driver of the avatar is the same person who created it, without simultaneously revealing who the driver is offline. With appropriate privacy controls and data use limitations, a VR/AR device is well positioned to verify the account holder biometrically, and thereby verify a consistent driver, even if that driver is never matched to an offline identity.
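To make this concrete, here is a minimal, hypothetical sketch, in Python, of one way a service could check that the same driver is behind an avatar across sessions without ever learning an offline identity: the device keeps a signing key, and the service stores only the avatar's public key. None of this describes Facebook's actual design; the names, the flow, and the use of the third-party `cryptography` library are illustrative assumptions.

```python
# Hypothetical sketch: pseudonymous "same driver" checks via a device-held key.
# The service stores only the avatar's public key, never a real-world identity.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# On the user's device: a keypair is generated when the avatar is created.
device_private_key = ed25519.Ed25519PrivateKey.generate()
avatar_public_key = device_private_key.public_key()  # registered with the service

def service_issue_challenge() -> bytes:
    """The service sends a fresh random nonce for each session."""
    return os.urandom(32)

def device_answer_challenge(challenge: bytes) -> bytes:
    """The device signs the nonce with a key that never leaves it."""
    return device_private_key.sign(challenge)

def service_verify(challenge: bytes, signature: bytes) -> bool:
    """The service checks the signature against the stored avatar public key."""
    try:
        avatar_public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = service_issue_challenge()
assert service_verify(challenge, device_answer_challenge(challenge))
```

The point of this design is that a stable pseudonym (the public key) lets an avatar build a consistent, verifiable reputation, while the link to an offline identity is simply never collected.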

Transparency and User Control Are Vital for the Avatars of the Virtual World

In the era of life-like avatars, it is even more important for companies to provide users with transparency and control over the algorithms that determine how their avatars behave, and to provide strong user controls over the use of the inferences those algorithms draw.

Facebook’s Responsible Innovation Principles, which allude to more transparency and control, are an important first step, but they remain incomplete and flawed. The first principle (“Never surprise people”)  fortunately implies greater transparency moving forward. Indeed, many of the biggest privacy scandals have stemmed from people being surprised by unfair data processing practices, even if the practice had been included in a privacy policy.  Simply informing people of your data practices, even if effectively done, does not ensure that the practices are good ones. 

Likewise, the second principle (“Provide controls that matter”) does not necessarily ensure that you, as a user, will have controls over everything you think matters. One might debate what falls into the category of things that “matter” enough to have controls: the biometric data collected, the inferences generated by the service, or the look of one’s avatar. This is particularly important when so much data can be collected through a life-like avatar, raising critical questions about how it could be used, even while the tech is in its infancy. For example, if the experience requires an avatar that is designed to reflect your identity, what is at stake inside the experience is your sense of self. The platform won't just control the quality of the experience you observe (like watching a movie); it will control an experience that has your identity and sense of self at its core. This is an unprecedented ability to produce highly tailored forms of psychological manipulation based on your behavior in real time.

Without strong user controls, social VR platforms or third-party developers may be tempted to use this data for other purposes, including psychological profiling of users’ emotions, interests, and attitudes, such as detecting nuances of how people feel about particular situations, topics, or other people.  It could be used to make emotionally manipulative content that subtly mirrors the appearance or mannerisms of people close to us, perhaps in ways we can’t quite put our fingers on.  

Data protection laws, like the GDPR, require that personal data collected for a specific purpose (like making your avatar more emotionally realistic in a VR experience) should not be used for other purposes (like calibrating ads to optimize your emotional reactions to them or mimicking your mannerisms in ads shown to your friends). 
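As a rough illustration of what purpose limitation can look like in practice, a platform could tag personal data at collection time with the purposes the user consented to and refuse any access outside that set. The following is a hypothetical sketch in Python, not any platform's actual implementation and not a statement of what the GDPR technically mandates.

```python
# Hypothetical sketch of purpose limitation: each record carries the purposes
# the user consented to, and any use beyond those purposes is refused.
from dataclasses import dataclass, field

class PurposeViolation(Exception):
    """Raised when data is requested for a purpose the user never consented to."""

@dataclass
class PersonalData:
    subject_id: str
    payload: dict
    allowed_purposes: set = field(default_factory=set)

def access(record: PersonalData, purpose: str) -> dict:
    # Enforce the purpose the data was collected for.
    if purpose not in record.allowed_purposes:
        raise PurposeViolation(f"{purpose!r} is not permitted for this record")
    return record.payload

expressions = PersonalData(
    subject_id="user-123",
    payload={"smile_intensity": 0.7},
    allowed_purposes={"avatar_animation"},
)

access(expressions, "avatar_animation")  # allowed: the purpose it was collected for
# access(expressions, "ad_targeting")    # would raise PurposeViolation
```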

Facebook’s VR/AR policies for third-party developers rightly prohibit them from using Oculus user data for marketing or advertising, performing or facilitating surveillance for law enforcement purposes (without a valid court order), attempting to identify a natural person, and combining user data with data from a third party, among other things. But the company has not committed to these restrictions, or to allowing strong user controls, for its own uses of data.

Facebook should clarify and expand upon its principles, and confirm it understands that transparency and controls that “matter” include transparency about, and control over, not only the form and shape of the avatar but also the use or disclosure of the inferences the platform will make about users (their behavior, emotions, personality, and so on), including the processing of personal data running in the background.

We urge Facebook to give users control and put people in charge of their experience. The notion that people must replicate their physical forms online to achieve the “power of connection” fails to recognize that many people wish to connect in a variety of ways, including by using different avatars to express themselves. For some, their avatar may indeed be a perfect replica of their real-world body. Indeed, it is critical for inclusion to allow avatar design options that reflect the diversity of users. But for others, their authentic self is what they’ve designed in their minds or know in their hearts, and are finally able to reflect in glorious high resolution in a virtual world.




Kurt Opsahl

EFF at 30: Surveillance Is Not Obligatory, with Edward Snowden

1 week 3 days ago

To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.

To celebrate 30 years of defending online freedom, EFF was proud to welcome NSA whistleblower Edward Snowden for a chat about surveillance, privacy, and the concrete ways we can improve our digital world, as part of our EFF30 Fireside Chat series. EFF Executive Director Cindy Cohn, EFF Director of Engineering for Certbot Alexis Hancock, and EFF Policy Analyst Matthew Guariglia weighed in on how the internet (and surveillance) actually work, the impact that has on modern culture and activism, and how we’re grappling with the cracks this pandemic has revealed—and widened—in our digital world.

You can watch the full conversation here or read the transcript.

On June 3, we’ll be holding our fourth EFF30 Fireside Chat, on how to free the internet, with net neutrality pioneer Gigi Sohn. EFF co-founder John Perry Barlow once wrote, "We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before." This year marked the 25th anniversary of this audacious essay denouncing centralized authority on the blossoming internet. But modern tech has strayed far from the utopia of individual freedom that 90s netizens envisioned. We'll be discussing corporatization, activism, and the fate of the internet, framed by Barlow's "Declaration of the Independence of Cyberspace," with Gigi, along with EFF Senior Legislative Counsel Ernesto Falcon and EFF Associate Director of Policy and Activism Katharine Trendacosta.

RSVP to the next EFF30 Fireside Chat

The Internet is Not Made of Magic

Snowden opened the discussion by explaining the reality that all of our internet usage is made up of a giant mesh of companies and providers. The internet is not magic—it’s other people’s computers: “All of our communications—structurally—are intermediated by other people’s computers and infrastructure…[in the past] all of these lines that you were riding across—the people who ran them were taking notes.” We’ve come a long way from that time when our communications were largely unencrypted, and everything you typed into the Google search box “was visible to everybody else who was on that Starbucks network with you, and your Internet Service Provider, who knew this person who paid for this account searched for this thing on Google….anybody who was between your communications could take notes.”

Video embed (content served from youtube.com): https://www.youtube.com/embed/PYRaSOIbiOA

How Can Tech Protect Us from Surveillance?

In 2013, Snowden came forward with details about the PRISM program, through which the NSA and FBI worked directly with large companies to see what was in individuals' internet communications and activity, making much more public the notion that our digital lives were not safe from spying. This has led to a change in people’s awareness of this exploitation, Snowden said, and myriad solutions have come about to solve parts of what is essentially an ecosystem problem: some technical, some legal, some political, some individual. “Maybe you install a different app. Maybe you stop using Facebook. Maybe you don’t take your phone with you, or start using an encrypted messenger like Signal instead of something like SMS.” 

Nobody sells you a car without brakes—nobody should sell you a browser without security.

When it comes to the legal cases, like EFF’s case against the NSA, the courts are finally starting to respond. Technical solutions, like the expansion of encryption in everyday online usage, are also playing a part, Alexis Hancock, EFF’s Director of Engineering for Certbot, explained. “Just yesterday, I checked on a benchmark that said that 95% of web traffic is encrypted—leaps and bounds since 2013.” In 2015, web browsers started displaying “this site is not secure” messages on unencrypted sites, and that’s where EFF’s Certbot tool steps in. Certbot is “free, open source software that we work on to automatically supply free SSL, or secure, certificates for traffic in transit, automating it for websites everywhere.” This keeps data private in transit, adding a layer of protection over what travels between your request and a website’s server. This is one of those things that doesn’t get talked about much, partly because these are pieces that you don’t see and shouldn’t have to see, but they give people security. “Nobody sells you a car without brakes—nobody should sell you a browser without security.”
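Certbot itself obtains and renews certificates on the server side. As a small illustrative aside (not part of Certbot or EFF’s tooling), the sketch below shows how anyone can inspect the certificate a site presents, and when it expires, using only Python’s standard library; the hostname is just an example.

```python
# Illustrative only: look at the TLS certificate a site presents and when it expires.
import socket
import ssl
from datetime import datetime

def certificate_expiry(hostname: str, port: int = 443) -> datetime:
    context = ssl.create_default_context()  # validates the chain against system roots
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
    return datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")

print(certificate_expiry("www.eff.org"))
```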

Video embed (content served from youtube.com): https://www.youtube.com/embed/cJWq6ub0CQs

Balancing the Needs of the Pandemic and the Dangers of Surveillance

We’ve moved the privacy needle forward in many ways since 2013, but in 2020, a global catastrophe could have set us back: the COVID-19 pandemic. As Hancock described it, EFF’s focus for protecting privacy during the pandemic was to track “where technology can and can’t help, and when is technology being presented as a silver bullet for certain issues around the pandemic when people are the center for being able to bring us out of this.”

There is a looming backlash of people who have had quite enough.

Our fear was primarily scope creep, she explained: from contact tracing to digital credentials, many of these systems already exist, but we must ask, “what are we actually trying to solve here? Are we actually creating more barriers to healthcare?” Contact tracing, for example, must put privacy first and foremost—because making it trustworthy is key to making it effective. 

Video embed (content served from youtube.com): https://www.youtube.com/embed/R9CIDUhGOgU

The Melting Borders Between Corporate, Government, Local, and Federal Surveillance 

But the pandemic, unfortunately, isn’t the only nascent danger to our privacy. EFF’s Matthew Guariglia described the merging of both government and corporate surveillance, and federal and local surveillance, that's happening around the country today: “Police make very effective marketers, and a lot of the manufacturers of technology are counting on it….If you are living in the United States today you are likely walking past or carrying around street level surveillance everywhere you go, and this goes double if you live in a concentrated urban setting or you live in an overpoliced community.”

Police make very effective marketers, and a lot of the manufacturers of technology are counting on it

From automated license plate readers to private and public security cameras to Shotspotter devices that listen for gunshots but also record cars backfiring and fireworks, this matters now more than ever, as the country reckons with a history of dangerous and inequitable overpolicing: “If a Shotspotter misfires, and sends armed police to the site of what they think is a shooting, there is likely to be a higher chance for a more violent encounter with police who think they’re going to a shooting.” This is equally true for a variety of these technologies, from automated license plate readers to facial recognition, which police claim are used for leads, but are too often accepted as fact. 

“Should we compile records that are so comprehensive?” asked Snowden about the way these records aren’t only collected, but queried, allowing government and companies to ask for the firehose of data. “We don’t even care what it is, we interrelate it with something else. We saw this license plate show up outside our store at a strip mall and we want to know how much money they have.” This is why the need for legal protections is so important, added Executive Director Cindy Cohn: “The technical tools are not going to get to the place where the phone company doesn’t know where your phone is. But the legal protections can make sure that the company is very limited in what they can do with that information—especially when the government comes knocking.”

Video embed (content served from youtube.com): https://www.youtube.com/embed/cLlVb_W8OmA

 

After All This, Is Privacy Dead?

All these privacy-invasive regimes may lead some to wonder whether privacy, or anonymity, is, to put it bluntly, dying. That’s exactly what one audience member asked during the question and answer section of the chat. “I don’t think it’s inevitable,” said Guariglia. “There is a looming backlash of people who have had quite enough.” Hancock added that optimism is both realistic and required: “No technology makes you a ghost online—none of it, even the most secure, anonymous-driven tools out there. And I don’t think that it comes down to your own personal burden...There is actually a more collective unit now that are noticing that this burden is not yours to bear...It’s going to take firing on all cylinders, with activism, technology, and legislation. But there are people fighting for you out there. Once you start looking, you’ll find them.”

If you look for darkness, that’s all you’ll ever see. But if you look for lightness, you will find it.

“So many people care,” Snowden said. “But they feel like they can’t do anything….Does it have to be that way?...Governments live in a permissionless world, but we don’t. Does it have to be that way?” If you’re looking for a lever to pull—look at the presumptions these mass data collection systems make, and what happens if they fail: “They do it because mass surveillance is cheap...could we make these systems unlawful for corporations, and costly [for others]? I think in all cases, the answer is yes.”

Video embed (content served from youtube.com): https://www.youtube.com/embed/EaeKVAbMO6s

Democracy, social movements, our relationships, and your own well-being all require private space to thrive. If you missed this chat, please take an hour to watch it—whether you’re a privacy activist or an ordinary person, it’s critical for the safety of our society that we push back on all forms of surveillance, and protect our ability to communicate, congregate, and coordinate without fear of reprisal. We deeply appreciate Edward Snowden joining us for this EFF30 Fireside Chat and discussing how we can fight back against surveillance, as difficult as it may seem. As Hancock said (yes, quoting the anime The Last Airbender): “If you look for darkness, that’s all you’ll ever see. But if you look for lightness, you will find it.”

___________________________

Check out additional recaps of EFF's 30th anniversary conversation series, and don't miss our next program where we'll tackle digital access and the open web with Gigi Sohn on June 3, 2021—EFF30 Fireside Chat: Free the Internet.

Related Cases: Jewel v. NSA
Jason Kelley

Newly Released Records Show How Trump Tried to Retaliate Against Social Media For Fact-Checking

2 weeks 1 day ago

A year ago today, President Trump issued an Executive Order that deputized federal agencies to retaliate against online social media services on his behalf, a disturbing and unconstitutional attack on internet free expression.

To mark this ignoble anniversary, EFF and the Center for Democracy & Technology are making records from their Freedom of Information Act lawsuit over the Executive Order public. The records show how Trump planned to leverage more than $117 million worth of government online advertising to stop platforms from fact-checking or otherwise moderating his speech.

Although the documents released thus far do not disclose whether government officials cut federal advertising as the Executive Order directed, they do show that the agencies’ massive online advertising budgets could easily be manipulated to coerce private platforms into adopting the president’s or the government’s preferred political views.

President Trump’s Executive Order was as unconstitutional as it was far-reaching. It directed independent agencies like the FCC to start a rulemaking to undermine legal protections for users’ speech online. It also ordered the Department of Justice to review online advertising spending by all federal agencies to consider whether certain platforms receiving that money were “problematic vehicles for government speech.”

President Biden rescinded the order earlier this month and directed federal agencies to stop working on it. President Biden’s action came after several groups challenging the Executive Order in court called on the president to withdraw the order. (Cooley LLP, Protect Democracy, and EFF represent the plaintiffs in one of those challenges, Rock The Vote v. Biden). EFF applauds President Biden for revoking President Trump’s illegal order, and we hope to be able to say more soon about what impact the rescission will have on Rock The Vote, which is pending before the U.S. Court of Appeals for the Ninth Circuit.

Trump sought to punish online platforms

Despite President Biden’s rescission, the order remains an unprecedented effort by a sitting president to use the federal government’s spending powers to punish private services for countering President Trump’s lies or expressing different views. One section of the order directed the Office of Management and Budget to collect reports from all federal agencies documenting their online advertising spending. The DOJ, according to the order, was then to review that spending and consider each platform’s “viewpoint-based speech restrictions,” and implicitly, recommend cutting advertising on platforms it determined to be problematic.

EFF and CDT filed their FOIA lawsuit against OMB and the DOJ in September of last year so that the public could understand whether Trump followed through on his threats to cut federal advertising on services he did not like. Here’s what we’ve learned so far.

Documents released by OMB show that federal agencies spent $117,363,000 to advertise on online services during the 2019 fiscal year. The vast majority of the government’s spending went to two companies: Google received $55,364,000 and Facebook received $46,827,000.

In contrast, federal agencies spent $7,745,000 on Twitter in 2019, despite the service being the target of Trump’s ire for appending fact-checking information to his tweets spreading lies about mail-in voting.

The documents also show which agencies reported the most online advertising spending. The Department of Defense spent $36,814,000 in 2019, with the Departments of Health and Human Services and Homeland Security spending $16,649,000 and $12,359,000, respectively. The Peace Corps reported spending $449,000 during the same period.

The documents also show that federal agencies paid online services for a variety of purposes. The FBI spent $199,000 on LinkedIn, likely on recruiting and hiring, with the federal government spending a total of $4,840,000 on the platform. And the Department of Education spent $534,000 on advertising in 2019 as part of campaigns around federal student aid and loan forgiveness.

Finally, the Department of Agriculture paid SnapChat $117 in 2018 to create a custom filter for those attending its annual Future Farmers of America convention. We think the filter looked like this.

Withholding government ad spending can be unconstitutional

Stepping back, it’s important to remember that the millions of dollars in federal advertising spent at these platforms gives the government a lot of power. The government has wide leeway to decide where it wants to advertise. But when officials cut, or merely threaten to cut, advertising with a service based on its supposed political views, that is coercive, retaliatory, and unconstitutional.

The government’s potential misuse of its spending powers was one reason EFF joined the Rock The Vote legal team. Federal officials may try to recycle Trump’s tactics in the future to push platforms to relax or change their content moderation policies, so it’s important that courts rule that such ploys are illegal.

It is also why EFF and CDT are pushing for the release of more records from OMB and the DOJ, so the public can fully understand President Trump’s unconstitutional actions. The DOJ, which was charged with reviewing the advertising spending and recommending changes, has only released a few dozen pages of emails thus far. And those communications provide little insight into how far the DOJ went in implementing the order, on top of redacting information the agency cannot withhold under FOIA.

We look forward to publishing more documents in the future so that the public can understand the full extent of Trump’s unconstitutional effort to retaliate against online services.

Related Cases: Rock the Vote v. Trump
Aaron Mackey

Chile’s New “Who Defends Your Data?” Report Shows ISPs’ Race to Champion User Privacy

2 weeks 2 days ago

Derechos Digitales’ fourth ¿Quien Defiende Tus Datos? (Who Defends Your Data?) report on Chilean ISPs' data privacy practices launched today, showing that companies must keep improving their commitments to user rights if they want to hold their leading positions. Although Claro (América Móvil) remains at the forefront, as in the 2019 report, Movistar (Telefónica) and GTD have made progress in all the evaluated categories. WOM lost points and ended in a tie with Entel for second position, while VTR lagged behind.

Over the last four years, certain transparency practices that once seemed unusual in Latin America have become increasingly more common. In Chile, they have even become a default. This year, all companies evaluated except for VTR received credit for adopting three important industry-accepted best practices: publishing law enforcement guidelines, which help provide a glimpse into the process and standard companies use for analyzing government requests for user data; disclosing personal data processing practices in contracts and policies; and releasing transparency reports.

Overall, the publishing of transparency reports has also become more common. These are critical for understanding a company’s practices for managing user data and its handling of government data requests. VTR is the only company that has not updated its transparency report recently, having last done so in May 2019. After the last edition, GTD published its first transparency report and law enforcement guidelines. Similarly, for the first time, Movistar has released specific guidelines for authorities requesting access to users’ data in Chile, and it received credit for denying legally controversial government requests for users’ data.

Most of the companies also have policies stating their right to provide user notification when there is no secrecy obligation in place or its term has expired. But as in the previous edition, earning a full star in this category requires more than that. Companies have to clearly set up a notification procedure or make concrete efforts to put one in place. Derechos Digitales also urged providers to engage in legislative discussions regarding Chile’s cybercrime bill, in favor of stronger safeguards for user notification. Claro has upheld the right to notification within the country's data protection law reform and has raised concerns against attempts to increase the data retention period for communications metadata in the cybercrime bill.

Responding to concerns over the government’s use of location data in the context of the COVID pandemic, the new report also sheds light on whether ISPs have made public commitments not to disclose user location data without a prior judicial order, unless the data is anonymized and aggregated. While the pandemic has changed society in many ways, it has not reduced the need for privacy when it comes to sensitive personal data. Companies’ policies should also push back against sensitive personal data requests that seek to target groups rather than individuals. In addition, the study aimed to spot which providers went public about their anonymized and aggregated location data-sharing agreements with private and public institutions. Movistar is the only company that has disclosed such agreements.

Together, the six researched companies account for 88.3% of fixed Internet users and 99.2% of mobile connections in Chile.

This year's report rates providers in five criteria overall: data protection policies, law enforcement guidelines, defending users in courts or Congress, transparency reports, and user notification. The full report is available in Spanish, and here we highlight the main findings.

Main results

Data Protection Policies and ARCO Rights

Compared to the 2019 edition, Movistar and GTD improved their marks on data protection policies. Companies should not only publish those policies, but commit to supporting user-centric data protection principles inspired by the bill reforming the data protection law, under discussion in the Chilean Congress. GTD has overcome its poor score from 2019 and earned a full star in this category this year. Movistar received a partial score for failing to commit to the complete set of principles. On the upside, the ISP has devised a specific page to inform users about their ARCO rights (access, rectification, cancellation, and opposition). The report also highlights WOM, Claro, and Entel for providing a specific point of contact for users to exercise these rights. WOM went above and beyond, making it easier for users to unsubscribe from the provider’s targeted ads database.

Transparency Reports and Law Enforcement Guidelines

Both transparency reports and law enforcement guidelines have become an industry norm among Chile’s main ISPs. All featured companies have published them, although VTR has failed to disclose an updated transparency report since the 2019 study. Amid many advances since the last edition, GTD disclosed its first transparency report, covering government data requests during 2019. The company earned a partial score in this category for not releasing new statistical data about 2020’s requests.

As for law enforcement guidelines, not all companies clearly state the need for a judicial order to hand over different kinds of communication metadata to authorities. Claro, Entel, and GTD have more explicit commitments in this sense. VTR requests a judicial order before carrying out interception measures or handing call records to authorities. However, the ISP does not mention this requirement for other metadata, such as IP addresses. Movistar’s guidelines are detailed about the types of user data that the government can ask for, but it refers to judicial authorization only when addressing the interception of communications.

Finally, WOM’s 2021 guidelines explicitly require a warrant before handing over phone and tower traffic data, as well as geolocation data. As the report points out, in early 2020 WOM was featured in the news as the only ISP to comply with a direct and massive location data request made by prosecutors, which the company denied. We’ve written about this case as an example of worrisome reverse searches, which target all users in a particular area instead of specific individuals. Directly related to this concern, this year’s report underscores Claro’s and Entel’s commitment to comply only with individualized personal data requests.

Pushing for User Notification about Data Requests

Claro remains in the lead when it comes to user notification. Beyond stating in its policy that it has a right to notify users when notification is not prohibited by law (as the other companies do, except for Movistar), Claro’s policies also describe the user notice procedure for data requests in civil, labor, and family judicial cases. Derechos Digitales points out that the ISP has also explored with the Public Prosecutor’s Office ways to implement such notification in criminal cases, once the secrecy obligation has expired. WOM’s transparency report mentions similar efforts, urging authorities to collaborate in providing information to ISPs about the status of investigations and legal cases, so they are aware when a secrecy obligation is no longer in effect. As the company says:

“Achieving advances in this area would allow the various stakeholders to continue to comply with their legal duties and at the same time make progress in terms of transparency and safeguarding users' rights.”

Having Users' Backs Before Disproportionate Data Requests and Legislative Proposals

Companies can also stand with their users by challenging disproportionate data requests or defending users’ privacy in Congress. WOM and Claro have specific sections on their websites listing some of their work on this front (see, respectively, the tabs “protocolo de entrega de información a la autoridad” and “relación con la autoridad”). Such reports include Claro’s meetings with Chilean senators who take part in the commission discussing the cybercrime bill. The ISP reports having emphasized concerns about the expansion of the mandatory retention period for metadata, as well as suggesting that the reform of the country’s data protection law should explicitly authorize telecom operators to notify users about surveillance measures.

Entel and Movistar have received equally high scores in this category. Entel, in particular, has kept up its fight against a disproportionate request made by Chile's telecommunications regulator (Subtel) for subscriber data. In 2018, the regulator asked for personal information pertaining to the totality of Entel's customer base in order to share it with private research companies carrying out satisfaction surveys. Other Chilean ISPs received the same request, but only Entel challenged the legal grounds of Subtel’s authority to make such a demand. The case, first reported in this category in the last edition, had a new development in late 2019, when the Supreme Court confirmed the sanctions against Entel for not delivering the data but reduced the company’s fine. Civil society groups Derechos Digitales, Fundación Datos Protegidos, and Fundación Abriendo Datos have recently released a statement stressing how Subtel's request conflicts with data protection principles, particularly purpose limitation, proportionality, and data security.

Movistar's credit in this category also relates to a Subtel request for subscriber data, this one in 2019. The ISP denied the demand, pointing out a legal tension between the agency’s oversight authority to request customer personal data without user consent and privacy safeguards provided by Chile’s Constitution and data protection law that set limits on personal data-sharing.

***

Since its first edition in 2017, Chile’s reports have shown solid and continuous progress, fostering ISP competition toward stronger standards and commitments in favor of users’ privacy and transparency. Derechos Digitales’ work is part of a series of reports across Latin America and Spain adapted from EFF’s Who Has Your Back? report, which for nearly a decade has evaluated the practices of major global tech companies.

Veridiana Alimonti

European Court on Human Rights Bought Spy Agencies’ Spin on Mass Surveillance

2 weeks 3 days ago

The European Court of Human Rights (ECHR) Grand Chamber this week affirmed what we’ve long known, that the United Kingdom’s mass surveillance regime, which involved the indiscriminate and suspicionless interception of people’s communications, violated basic human rights to privacy and free expression. We applaud the Strasbourg-based Grand Chamber, the highest judicial body of the Council of Europe, for the ruling and for its strong stance demanding new safeguards to prevent privacy abuses, beyond those required by a lower court in 2018.  

Yet the landmark decision, while powerful in declaring unlawful the UK’s mass interception powers, which failed to protect journalists and lacked legal safeguards to ensure that British spy agency GCHQ wasn’t abusing its power, imprudently bought into spy agency propaganda that suspicionless interception powers must be granted to ensure national security. The Grand Chamber rejected the argument that mass surveillance is an inherently disproportionate measure, believing instead that any potential privacy abuses can be mitigated by “minimization and targeting” within the mass spying process. We know this doesn’t work. The Grand Chamber refused to insist that governments stop bulk interception, a mistake recognized by ECHR Judge Paulo Pinto de Albuquerque, who said in a dissenting opinion:

For good or ill, and I believe for ill more than for good, with the present judgment the Strasbourg Court has just opened the gates for an electronic “Big Brother” in Europe.

The case at issue, Big Brother Watch and Others v. The United Kingdom, was brought in the wake of disclosures by whistleblower Edward Snowden, who confirmed that the NSA and GCHQ were routinely spying on hundreds of millions of innocent people around the globe. A group of more than 15 human rights organizations filed a complaint against portions of the UK's mass surveillance regime before the ECHR. In a decision in 2018, the court rejected the UK’s spying programs for violating the right to privacy and freedom of expression, but it failed to say that the UK's indiscriminate and suspicionless interception regime was inherently incompatible with the European Convention on Human Rights. EFF filed a Declaration as part of this proceeding. The court, however, acknowledged the lack of robust safeguards needed to provide adequate guarantees against abuse. The Grand Chamber’s decision this week came in an appeal to the 2018 ruling. 

The new ruling goes beyond the initial 2018 decision by requiring prior independent authorization for the mass interception of communications, which must include meaningful “end-to-end safeguards.” The Grand Chamber emphasized that there is considerable potential for mass interception powers to be abused, adversely affecting people’s rights. It warns that these powers should be subject to ongoing assessments of their necessity and proportionality at every stage of the process, to independent authorization at the outset, and to ex post facto oversight robust enough to keep the “interference” with people's rights to only what is “necessary” in a democratic society. Under the powers given to UK security services in 2000, interception needed only the authorization of the Secretary of State (Home Office). The Grand Chamber ruled that, in lacking adequate safeguards like independent oversight, UK surveillance law did not meet the required “quality of law” standard and was incapable of keeping the “interference” to what was necessary.

In its ruling, the Grand Chamber assessed the quality of the UK's bulk interception law and developed an eight-part test that the legal framework of new surveillance laws must meet to justify authorization of bulk interception. The legal framework must make clear and consider the following:

1. the grounds on which bulk interception may be authorized;
2. the circumstances in which an individual’s communications may be intercepted;
3. the procedure to be followed for granting authorization;
4. the procedures to be followed for selecting, examining, and using intercept material;
5. the precautions to be taken when communicating the material to other parties;
6. the limits on the duration of interception, the storage of intercept material, and the circumstances in which such material must be erased and destroyed;
7. the procedures and modalities for supervision by an independent authority of compliance with the above safeguards, and its powers to address non-compliance;
8. the procedures for independent ex post facto review of such compliance, and the powers vested in the competent body in addressing instances of non-compliance.

These are welcome safeguards against abuse. But the opinion doesn’t contain all good news. We are disappointed that the Grand Chamber found that the UK's practice of requesting intercepted material from foreign governments and intelligence agencies, rather than intercepting and collecting them directly, was not a violation of the right to privacy and free expression. Our friends at ARTICLE19 and others argued this, and it also reflects our views: Only truly targeted surveillance constitutes a legitimate restriction on free expression and privacy, and any surveillance measure should only be authorized by a competent judicial authority that is independent and impartial.

Back on the bright side, we were happy that the Grand Chamber once again rejected the UK government’s contention (akin to the U.S. government’s) that privacy invasions only occur once a human being looks at intercepted communications. The Grand Chamber confirmed that the legally significant “interference” with privacy begins as soon as communications are first intercepted—becoming more and more severe as they are stored and later used by government agents. The steps include interception and initial retention of communications data; application of specific selectors to the retained data;  the examination of selected data by analysts; and the subsequent retention of data and use of the “final product”, including the sharing of data with third parties. The Grand Chamber correctly applied its analysis to every step of the way, something U.S. Courts have yet to do. 

The Grand Chamber also found that the government had failed to subject its targeting practices to adequate authorization and oversight. Bulk communications may be analyzed (by machines or by people) using “selectors”—that is, search terms such as account names or device addresses—and the government apparently did not specify how these selectors would be chosen or what kinds of selectors it might use in the course of surveillance. The government required analysts performing searches on people’s communications to document why they searched for terms connected to particular people’s identities, but no one other than the individual analyst reviewed whether those search terms were appropriate.

The Grand Chamber ruled that acquiring communications metadata through mass interception powers is just as intrusive as intercepting communications content, and held that the interception, retention, and searching of communications data should be analyzed under the same safeguards as those applicable to content. However, the Grand Chamber decided that while the interception of communications data and content will normally be authorized at the same time, once obtained the two may be treated differently. The Court explained:

In view of the different character of related communications data and the different ways in which they are used by the intelligence services, as long as the aforementioned safeguards are in place, the Court is of the opinion that the legal provisions governing their treatment may not necessarily have to be identical in every respect to those governing the treatment of content.

On concerns raised about the impact of surveillance on journalists and their sources, the Grand Chamber agreed that the UK was substantially deficient in not having proactive independent oversight of surveillance of journalists’ communications, whereby “a judge or other independent and impartial decision-making body” would have applied a higher level of scrutiny to this surveillance.

Overall, while the Grand Chamber decision contains some good safeguards, it falls below the standards set by the Court of Justice of the European Union (the highest court of the European Union in matters of EU law). For instance, the Luxembourg court’s judgment in Schrems v. Data Protection Commissioner (Schrems I) made clear that legal frameworks granting public authorities access to data on a generalized basis compromise "the essence of the fundamental right to private life," as guaranteed by Article 7 of the European Union Charter of Fundamental Rights. In other words, any law that compromises the essence of the right to private life can never be necessary or proportionate.

While we would like more, this decision still puts the Grand Chamber well ahead of U.S. courts deciding cases challenging bulk surveillance. Courts in the U.S. have tied themselves in knots trying to accommodate the U.S. government’s overbroad secrecy claims and the demands of U.S. standing doctrine. In Europe, the UK did not claim that the case could not be decided due to secrecy. More importantly, the Grand Chamber was able to reach a decision on the merits without endangering the national security of the U.K.

U.S. courts should take heed: the sky will not fall if you allow full consideration of the legality of mass surveillance in regular courts, rather than the truncated, rubber-stamp review currently done in secret by the Foreign Intelligence Surveillance Court (FISC). Americans, just like Europeans, deserve to communicate without being subject to bulk surveillance. While it contains a serious flaw, the Grand Chamber ruling demonstrates that the legality of mass surveillance programs can and should be subject to thoughtful, balanced, and public scrutiny by an impartial body, independent from the executive branch, that isn’t just taking the government’s word for it but applying laws that guarantee privacy, freedom of expression, and other human rights.

Related Cases: Jewel v. NSA
Katitza Rodriguez

Amid Systemic Censorship of Palestinian Voices, Facebook Owes Users Transparency

2 weeks 4 days ago

Over the past few weeks, as protests in—and in solidarity with—Palestine have grown, so too have violations of the freedom of expression of Palestinians and their allies by major social media companies. From posts incorrectly flagged by Facebook as incitement to violence, to financial censorship of relief payments made on Venmo, to the removal of Instagram Stories (which also heavily affected activists in Colombia, Canada, and Brazil), Palestinians are experiencing an unprecedented level of censorship at a time when digital communications are absolutely critical.

The vitality of social media at a time like this cannot be overstated. Journalistic coverage from the ground is minimal—owing to a number of factors, including restrictions on movement by Israeli authorities—while, as the New York Times reported, misinformation is rife and has been repeated by otherwise reliable media sources. Israeli officials have even been caught spreading misinformation on social media.

Palestinian digital rights organization 7amleh has spent the past few weeks documenting content removals, and a coalition of more than twenty organizations, including EFF, has reached out to social media companies, including Facebook and Twitter. The demands include that the companies immediately stop censoring—and reinstate—the accounts and content of Palestinian voices, open an investigation into the takedowns, and transparently and publicly share the results of those investigations.

A brief history

Palestinians face a number of obstacles when it comes to online expression. Depending on where they reside, they may be subject to differing legal regimes, and face censorship from both Israeli and Palestinian authorities. Most Silicon Valley tech companies have offices in Israel (but not Palestine), while some—such as Facebook—have struck specific agreements with the Israeli government to address incitement. While incitement to violence is indeed against the company’s community standards, groups like 7amleh say that this agreement results in inconsistent application of the rules, with incitement against Palestinians often allowed to remain on the platform.

Additionally, the presence of Hamas—which is the democratically-elected government of Gaza, but is also listed as a terrorist organization by the United States and the European Union—complicates things for Palestinians, as any mention of the group (including, at times, something as simple as the group’s flag flying in the background of an image) can result in content removals.

And it isn’t just Hamas—last week, Buzzfeed documented an instance where references to Jerusalem’s Al Aqsa mosque, one of the holiest sites in Islam, were removed because “Al Aqsa” also appears in the name of a designated group, the Al Aqsa Martyrs’ Brigade. Although Facebook apologized for the error, this kind of mistake has become all too common, particularly as reliance on automated moderation has increased amid the pandemic.

“Dangerous Individuals and Organizations”

Facebook’s Community Standard on Dangerous Individuals and Organizations gained a fair bit of attention a few weeks back when the Facebook Oversight Board affirmed that President Trump violated the standard with several of his January 6 posts. But the standard is also regularly used as justification for the widespread removal of content by Facebook pertaining to Palestine, as well as other countries like Lebanon. And it isn’t just Facebook—last Fall, Zoom came under scrutiny for banning an academic event at San Francisco State University (SFSU) at which Palestinian figure Leila Khaled, alleged to belong to another US-listed terrorist organization, was to speak.

SFSU fell victim to censorship again in April of this year, when its Arab and Muslim Ethnicities and Diasporas (AMED) Studies Program discovered that its Facebook event “Whose Narratives? What Free Speech for Palestine?,” scheduled for April 23, had been taken down for violating Facebook Community Standards. Shortly thereafter, the program’s entire page, “AMED STUDIES at SFSU,” was deleted, along with years of archival material on classes, syllabi, webinars, and vital discussions not only on Palestine but on Black, Indigenous, Asian, and Latinx liberation, gender and sexual justice, and a variety of Jewish voices and perspectives, including opposition to Zionism. Although no specific violation was noted, Facebook has since confirmed that the post and the page were removed for violating the Dangerous Individuals and Organizations standard. This was in addition to cancellations by other platforms, including Google, Zoom, and Eventbrite.

Given the frequency and the high-profile contexts in which Facebook’s Dangerous Individuals and Organizations Standard is applied, the company should take extra care to make sure the standard reflects freedom of expression and other human rights values. But to the contrary, the standard is a mess of vagueness and overall lack of clarity—a point that the Oversight Board has emphasized.

Facebook has said that the purpose of this community standard is to “prevent and disrupt real-world harm.” In the Trump ruling, the Oversight Board found that President Trump’s January 6 posts readily violated the Standard. “The user praised and supported people involved in a continuing riot where people died, lawmakers were put at serious risk of harm, and a key democratic process was disrupted. Moreover, at the time when these restrictions were extended on January 7, the situation was fluid and serious safety concerns remained.”

But in two previous decisions, the Oversight Board criticized the standard. In a decision overturning Facebook’s removal of a post featuring a quotation misattributed to Joseph Goebbels, the Oversight Board admonished Facebook for not including all aspects of its policy on dangerous individuals and organizations in the community standard.

Facebook apparently has self-designated lists of individuals and organizations subject to the policy that it does not share with users, and treats any quoting of such persons as an “expression of support” unless the user provides additional context to make their benign intent explicit, a condition also not disclosed to users. Facebook's lists evidently include US-designated foreign terrorist organizations, but appear to go beyond those designations as well.

As the Oversight Board concluded, “this results in speech being suppressed which poses no risk of harm” and found that the standard fell short of international human rights standards: “the policy lacks clear examples that explain the application of ‘support,’ ‘praise’ and ‘representation,’ making it difficult for users to understand this Community Standard. This adds to concerns around legality and may create a perception of arbitrary enforcement among users.” Moreover, “the policy fails to explain how it ascertains a user’s intent, making it hard for users to foresee how and when the policy will apply and conduct themselves accordingly.”

The Oversight Board recommended that Facebook explain and provide examples of the application of key terms used in the policy, including the meanings of “praise,” “support,” and “representation.” The Board also recommended that the community standard provide clearer guidance to users on making their intent apparent when discussing such groups, and that a public list of “dangerous” organizations and individuals be provided to users.

The United Nations Special Rapporteur on Freedom of Expression also expressed concern that the standard, and specifically the language of “praise” and “support,” was “excessively vague.”

Recommendations

Policies such as Facebook’s that restrict references to designated terrorist organizations may be well-intentioned, but in their blunt application, they can have serious consequences for documentation of crimes—including war crimes—as well as vital expression, including counterspeech, satire, and artistic expression, as we’ve previously documented. While companies, including Facebook, have regularly claimed that they are required to remove such content by law, it is unclear to what extent this is true. The legal obligations are murky at best. Regardless, Facebook should be transparent about the composition of its "Dangerous Individuals and Organizations" list so that users can make informed decisions about what they post.

But while some content may require removal under certain jurisdictions, it is clear that other decisions are made on the basis of internal policies and external pressure—and are often not in the best interest of the individuals that they claim to serve. This is why it is vital that companies include vulnerable communities—in this case, Palestinians—in policy conversations.

Finally, transparency and appropriate notice to users would go a long way toward mitigating the harm of such takedowns—as would ensuring that every user has the opportunity to appeal content decisions in every circumstance. The Santa Clara Principles on Transparency and Accountability in Content Moderation offer a baseline for companies.

Jillian C. York

Activists Mobilize to Fight Censorship and Save Open Science

2 weeks 5 days ago

Major publishers want to censor research-sharing resource Sci-Hub from the internet, but archivists are quickly responding to make that impossible. 

More than half of academic publishing is controlled by only five publishers. This position is built on the premise that users should pay for access to scientific research, to compensate publishers for their investment in editing, curating, and publishing it. In reality, research is typically submitted and evaluated by scholars without compensation from the publisher. In practice, the model profits by restricting access to articles behind burdensome paywalls. One project in particular, Sci-Hub, has threatened to break down this barrier by sharing articles without restriction. As a result, publishers are going to every corner of the map to destroy the project and wipe it from the internet. Continuing the long tradition of internet hacktivism, however, redditors are mobilizing to create an uncensorable backup of Sci-Hub.

Paywalls: More Inequity and Less Progress

It’s an open secret at this point that the paywall model used by major publishers, where one must pay to read published articles, is at odds with the way science works, which is one reason researchers regularly undermine it by sharing PDFs of their work directly. The primary functions paywalls serve now are to drive up contract prices with universities and to ensure current research is only available to the most affluent or well-connected. The cost of access has gotten so out of control that even a $35 billion institution like Harvard has warned that contract costs are becoming untenable. If this is the case for Harvard, it’s hard to see how smaller entities can manage these costs, particularly those in the global south. As a result, crucial and potentially life-saving knowledge is locked away from those who need it most. That’s why the fight for open access is a fight for human rights.

Indeed, the past year has shown us the incredible power of open access after publishers made COVID-19 research immediately available at no cost. This temporary move towards open access helped support the unprecedented global public health effort that spurred the rapid development of vaccines, treatments, and better informed public health policies. This kind of support for scientific progress should not be reserved for a global crisis; instead, it should be the standard across all areas of research.

Sci-Hub and the Fight for Access

Sci-Hub is a crucial piece of the movement toward open access. The project was started over 10 years ago by a researcher in Kazakhstan, Alexandra Elbakyan, with the goal “to remove all barriers in the way of science.” The result has been a growing library of millions of articles made freely accessible, running only on donations. Within six years it became the largest Open Access academic resource in the world, and it has only grown since, bringing cutting-edge research to rich and poor countries alike.

But that invaluable resource has come at a cost. Since its inception, Sci-Hub has faced numerous legal challenges and investigations. Some of these challenges have led to dangerously broad court orders. One such challenge is playing out in India, where publishers Elsevier, Wiley, and the American Chemical Society have asked the courts to block access to the site. The courts have been hesitant, however, as the site has clear public importance, and local experts have argued that Sci-Hub is the only way for many in the country to access research. In any event, one inevitable truth cannot be avoided: researchers want to share their work, not make publishers rich.

Archivists Rush to Defend Sci-Hub

With these challenges ongoing, Sci-Hub’s Twitter account was permanently suspended under Twitter’s “counterfeit policy.” Given the timing of this suspension, Elbakyan and other academic activists believe it was directly related to the legal action in India. A few months later, Elbakyan shared on her personal Twitter account that Apple had granted the FBI access to her account data after a request in early 2019.

Responding to these attacks last week, redditors on the archivist subreddit r/DataHoarder have (once again) rallied to support the site. In a post two weeks ago, users appealed to the legacy of reddit co-founder Aaron Swartz and called for anyone with hard drive space and a VPN to defend ‘free science’ by downloading and seeding 850 torrents containing Sci-Hub’s 77 TB library. The ultimate goal of these activists is to then use these torrents, containing 85 million scientific articles, to make a fully decentralized and uncensorable iteration of Sci-Hub.

This project may sound utopian to anyone who values access to scientific knowledge, a goal publishers and the DOJ have taken great pains to obstruct with legal obstacles. A fully decentralized, uncensorable, and globally accessible database of scientific work is a potential engine for greater research equity. The only potential losers are the old gatekeepers who rely on an artificial scarcity of scientific knowledge, and increasingly on tools of surveillance, to extract exorbitant profit margins from the labor of scientists.

It’s Time to Fight for Open Access

Journal publishers must do their part to make research immediately available to all, freely and without privacy-invasive practices. There is no need for a resource as valuable as Sci-Hub to live in the shadows of copyright litigation. While we hope publishers make this change willingly, there are other common-sense initiatives that could help. For example, federal bills like the Fair Access to Science and Technology Research Act (FASTR) and state bills such as California’s A.B. 2192 can require government-funded research to be made freely available. The principle behind these bills is simple: if the public funded the research, the public shouldn’t have to pay again to access it.

In addition to supporting legislation, students and academics can also advocate for Open Access on campus. Colleges can not only provide a financial incentive by breaking contracts with publishers, but also support researchers in the process of making their own work Open Access. The UC system, for example, has required since 2013 that all research from its 10 campuses be made open access, a policy more public institutions can and should adopt. Even talking about open access with peers on campus can stir interest in local organizing, and when it does, our EFA local organizing toolkit and organizing team (organizing@eff.org) can help support these local efforts.

We need to lift these artificial restraints on science imposed by major publishers and take advantage of 21st-century technology. Initiatives taken by archivist activists such as those supporting Sci-Hub shouldn’t be caught in a game of cat and mouse but supported by policy and business models which allow such projects to thrive and promote equity.

Rory Mir

EFF Sues Police Standards Agency to Obtain Use of Force Training Materials

3 weeks 1 day ago
Police Group Abusing Copyright Law to Withhold Documents, Violate Public Records Act

Woodland, California—The Electronic Frontier Foundation (EFF) sued the California Commission on Peace Officer Standards and Training (POST) to obtain materials showing how police are trained in the use of force, after the organization cited third-party copyright interests to illegally withhold them from the public.

The lawsuit, filed under California’s Public Records Act (PRA), seeks a court order forcing POST to make public unredacted copies of outlines for a number of police training courses, including training on use of force. As the country struggles to process the many painful-to-watch examples of excessive and deadly use of force by police, Californians have a right to know what officers are being trained to do, and how they are being trained. The complaint was filed yesterday in the Superior Court of California, Yolo County.

California lawmakers recognized the need for more transparency in law enforcement by passing SB 978, which took effect last year. The law requires POST and local law enforcement agencies to publish, in a conspicuous space on their websites, training manuals and other materials about policies and practices.

“POST is unlawfully hiding this material,” said EFF Staff Attorney Cara Gagliano. “SB 978 is clear—police must allow the public to see its training manuals. Doing so helps educate the community about what to expect and how to behave during police encounters, and helps to hold police accountable when they don’t comply with their training.”

As part of a 2020 review of POST’s compliance with the law, EFF discovered that the use of force training materials were not on its website. EFF requested the documents under the PRA and was sent copies of documents listing use of force training providers and certification dates. The only substantive documents it received were heavily redacted copies of the course outlines, with just the subject headings visible.

POST said it would not make public the material because the California Peace Officers Association (CPOA), which created the training manuals, had made a copyright claim over the materials and requested they not be published on a public website. POST agreed, citing compliance with federal copyright law.

But SB 978 mandates that POST must publish training manuals if the materials would be available to the public under the PRA, which does not contain any exception for copyrighted material. What’s more, the PRA says state agencies can’t allow “other parties” to control whether information subject to the law can be disclosed.

“Copyright law is not a valid excuse for POST to evade its obligation under the law to make training materials public,” said Gagliano. “Police and the organizations that create their training manuals are not above the law."

For the complaint:
https://www.eff.org/document/eff-v-post-complaint

For more on digital rights and the Black-led movement against police violence:
https://www.eff.org/issues/digital-rights-and-black-led-movement-against-police-violence

Contact: Cara Gagliano, Staff Attorney, cara@eff.org
Karen Gullo

Washington State Has Sued a Patent Troll For Violating Consumer Protection Laws

3 weeks 1 day ago

Landmark Technology, a patent troll that has spent 20 years threatening and suing small businesses over bogus patents, and received EFF’s Stupid Patent of the Month award in 2019, has been sued by the State of Washington.

Washington Attorney General Bob Ferguson has filed a lawsuit claiming that Landmark Technology has violated the state’s Patent Troll Protection Act, which bans “bad faith” assertions of patent infringement. Following a widespread campaign of patent demand letters, more than 30 states passed some kind of law placing limits on bad-faith patent assertions.

These laws face an uphill battle to be enforced. First of all, the Constitution places important limits on the government’s ability to penalize the act of seeking legal redress. Second, the Federal Circuit has specifically held that a high bar of bad faith must be established for laws that would penalize patent assertion.

Washington’s case against Landmark could be a major test of state anti-troll laws, and of whether state anti-troll and consumer protection laws can deter some of the worst-of-the-worst patent troll behavior.

The lawsuit is filed against “Landmark Technology A,” a recently created LLC that appears to be largely identical to the now-defunct “Landmark Technology.” The new company asserts the same patent against the same type of targets. The patent’s inventor is Landmark Technology owner Lawrence Lockwood.

Over 1,000 Demand Letters

Landmark threatens and sues small businesses over U.S. Patent No. 7,010,508, which was issued to Lockwood in 2006 and claims rights to “automated multimedia data processing network for processing business and financial transactions between entities from remote sites.”

The Washington case reveals just how widespread Landmark’s threats are. From January 2019 to July 2020, Landmark sent identical demand letters to 1,176 small businesses all across the country. Those letters threaten to sue unless Landmark gets paid a $65,000 licensing fee. 

Landmark essentially insists that if you use a website for e-commerce, you infringe this patent. In recent years, it’s filed suit against candy companies, an educational toy maker, an organic farm, and a Seattle bottle maker, just to name a few. 

Or as the Washington State Attorney General put it:

[T]he company broadly and aggressively misuses the patent claims, targeting virtually any small business with a website, seemingly at random. Landmark claims that common, near-ubiquitous business webpages infringe on its patent rights — such as small business home pages, customer login pages, new customer registration and product-ordering pages.

“Landmark extorts small businesses, demanding payment for webpages that are essential for running a business,” Washington Attorney General Ferguson said. “It backs them into a corner — pay up now, or get buried in legal fees. I’m putting patent trolls on notice: Bully businesses with unreasonable patent assertions, and you’ll see us in court.”

According to the AG’s press release, four Washington companies settled for between $15,000 and $20,000 each to avoid litigation costs. The lawsuit seeks restitution for those companies.

The patents created by Landmark owner Lawrence Lockwood have been used in well over 150 lawsuits filed by Landmark Technology and Landmark Technology A, as well as at least 40 cases filed by his earlier company PanIP, which had sued dozens of early e-commerce websites by 2003. Given what we now know about the more than 1,000 letters sent just in 2019 and 2020, the litigation record seems like just the tip of the iceberg.

The U.S. Patent and Trademark Office found in a 2014 review that the ’508 patent was likely to be invalid because it didn’t actually explain how to do the things it claimed. However, that case settled before the patent could be invalidated.

The USPTO is an office that labors under industry capture. Its fees are paid by patent owners, and in practice it works for patent owners far too often—not users or small business owners. While review processes like inter partes review (IPR) are useful in restoring some balance to the system, it’s critical that the worst abusers of the patent system be treated as a serious consumer protection problem. It’s certainly worthwhile for states to experiment and try to find ways to deter abuse, within the bounds of due process.

Patent owners who demand licensing fees from hundreds or thousands of individuals based on a patent that clearly should be found invalid, for broadly used web technology, are essentially engaging in widespread extortion, as AG Ferguson states. When patent owners won’t let users set up even a basic, out-of-the-box website without facing a demand letter, it’s not just an economic problem—it’s a threat to free expression.

Joe Mullin

Fighting Disciplinary Technologies

3 weeks 2 days ago

An expanding category of software, apps, and devices is normalizing cradle-to-grave surveillance in more and more aspects of everyday life. At EFF we call them “disciplinary technologies.” They typically show up in the areas of life where surveillance is most accepted and where power imbalances are the norm: in our workplaces, our schools, and in our homes.

At work, employee-monitoring “bossware” puts workers’ privacy and security at risk with invasive time-tracking and “productivity” features that go far beyond what is necessary and proportionate to manage a workforce. At school, programs like remote proctoring and social media monitoring follow students home and into other parts of their online lives. And at home, stalkerware, parental monitoring “kidware” apps, home monitoring systems, and other consumer tech monitor and control intimate partners, household members, and even neighbors. In all of these settings, subjects and victims often do not know they are being surveilled, or are coerced into it by bosses, administrators, partners, or others with power over them.

Disciplinary technologies are often marketed for benign purposes: monitoring performance, confirming compliance with policy and expectations, or ensuring safety. But in practice, these technologies are non-consensual violations of a subject’s autonomy and privacy, usually with only a vague connection to their stated goals (and with no evidence they could ever actually achieve them). Together, they capture different aspects of the same broader trend: the appearance of off-the-shelf technology that makes it easier than ever for regular people to track, control, and punish others without their consent.

The application of disciplinary technologies does not meet standards for informed, voluntary, meaningful consent. In workplaces and schools, subjects might face firing, suspension, or other severe punishment if they refuse to use or install certain software—and a choice between invasive monitoring and losing one’s job or education is not a choice at all. Whether the surveillance is happening on a workplace- or school-owned device versus a personal one is immaterial to how we think of disciplinary technology: privacy is a human right, and egregious surveillance violates it regardless of whose device or network it’s happening on.

And even when its victims might have enough power to say no, disciplinary technology seeks a way to bypass consent. Too often, monitoring software is deliberately designed to fool the end-user into thinking they are not being watched, and to thwart them if they take steps to remove it. Nowhere is this more true than with stalkerware and kidware—which, more often than not, are the exact same apps used in different ways.

There is nothing new about disciplinary technology. Use of monitoring software in workplaces and educational technology in schools, for example, has been on the rise for years. But the pandemic has turbo-charged the use of disciplinary technology on the premise that, if in-person monitoring is not possible, ever-more invasive remote surveillance must take its place. This group of technologies and the norms it reinforces are becoming more and more mainstream, and we must address them as a whole.

To determine the extent to which certain software, apps, and devices fit under this umbrella, we look at a few key areas:

The surveillance is the point. Disciplinary technologies share similar goals. The privacy invasions from disciplinary tech are not accidents or externalities: the ability to monitor others without consent, catch them in the act, and punish them is a selling point of the system. In particular, disciplinary technologies tend to create targets and opportunities to punish them where none existed before.

This distinction is particularly salient in schools. Some educational technology, while inviting in third parties and collecting student data in the background, still serves clear classroom or educational purposes. But when the stated goal is affirmative surveillance of students—via face recognition, keylogging, location tracking, device monitoring, social media monitoring, and more—we look at that as a disciplinary technology.

Consumer and enterprise audiences. Disciplinary technologies are typically marketed to and used by consumers and enterprise entities in a private capacity, rather than the police, the military, or other groups we traditionally associate with state-mandated surveillance or punishment. This is not to say that law enforcement and the state do not use technology for the sole purpose of monitoring and discipline, or that they always use it for acceptable purposes. What disciplinary technologies do is extend that misuse.

With the wider promotion and acceptance of these intrusive tools, ordinary citizens and the private institutions they rely on increasingly deputize themselves to enforce norms and punish deviations. Our workplaces, schools, homes, and neighborhoods are filled with cameras and microphones. Our personal devices are locked down to prevent us from countermanding the instructions that others have inserted into them. Citizens are urged to become police, in a digital world increasingly outfitted for the needs of a future police state.

Discriminatory impact. Disciplinary technologies disproportionately hurt marginalized groups. In the workplace, the most dystopian surveillance is used on the workers with the least power. In schools, programs like remote proctoring disadvantage disabled students, Black and brown students, and students without access to a stable internet connection or a dedicated room for test-taking. Now, as schools receive COVID relief funding, surveillance vendors are pushing expensive tools that will disproportionately discriminate against the students already most likely to be hardest hit by the pandemic. And in the home, it is most often (but certainly not exclusively) women, children, and the elderly who are subject to the most abusive non-consensual surveillance and monitoring.

And in the end, it’s not clear that disciplinary technologies even work for their advertised uses. Bossware does not conclusively improve business outcomes, and instead negatively affects employees’ job satisfaction and commitment. Similarly, test proctoring software fails to accurately detect or prevent cheating, instead producing rampant false positives and overflagging. And there’s little to no independent evidence that school surveillance is an effective safety measure, but plenty of evidence that monitoring students and children decreases perceptions of safety, equity, and support, negatively affects academic outcomes, and can have a chilling effect on development that disproportionately harms minoritized groups and young women. If the goal is simply to use surveillance to give authority figures even more power, then disciplinary technology could be said to “work”—but at great expense to its unwilling targets, and to society as a whole.

The Way Forward

Fighting just one disciplinary technology at a time will not work. Each use case is another head of the same Hydra that reflects the same impulses and surveillance trends. If we narrowly fight stalkerware apps but leave kidware and bossware in place, the fundamental technology will still be available to those who wish to abuse it with impunity. And fighting student surveillance alone is untenable when scholarly bossware can still leak into school and academic environments.

The typical rallying cries around user choice, transparency, and strict privacy and security standards are not complete remedies when the surveillance itself is the selling point. Fixing the spread of disciplinary technology needs stronger medicine. We need to combat the growing belief, funded by disciplinary technology’s makers, that spying on your colleagues, students, friends, family, and neighbors through subterfuge, coercion, and force is somehow acceptable behavior for a person or organization. We need to show how flimsy disciplinary technologies’ promises are; how damaging their implementations can be; and how, for every supposedly reasonable scenario their glossy advertising depicts, the reality is that misuse is the rule, not the exception.

We’re working at EFF to craft solutions to the problems of disciplinary technology, from demanding that anti-virus companies and app stores recognize spyware more explicitly, to pushing companies to design for abuse cases, to exposing the misuse of surveillance technology in our schools and in our streets. Tools that put machines in power over ordinary people are a sickening reversal of how technology should work. It will take technologists, consumers, activists, and the law to put it right.

Gennie Gebhart

#ParoNacionalColombia and Digital Security Considerations for Police Brutality Protests

3 weeks 3 days ago

In the wake of Colombia’s tax reform proposal, which came as more Colombians fell into poverty as a result of the pandemic, demonstrations spread across the country in late April, reviving the social unrest and socio-economic demands that led people to the streets in 2019. The government's attempts to reduce public outcry by withdrawing the tax proposal to draft a new text did not work. Protests continue online and offline. Violent repression on the ground by police, and the military presence in Colombian cities, have raised concerns among national and international groups—from civil organizations across the globe to human rights bodies calling on the government to respect people’s constitutional rights to assemble and to allow free expression on the Internet and in the streets. Media outlets have reported on government crackdowns against the protesters, including physical violence, missing persons and deaths, the seizing of phones and other equipment used to document protests, internet disruptions, and content restrictions or takedowns by online platforms.

As the turmoil and demonstrations continue, we’ve put together some useful resources from EFF and allies we hope can help those attending protests and using technology and the Internet to speak up, report, and organize. Please note that the authors of this post come from primarily U.S.- and Brazil-based experiences. The post is by no means comprehensive. We urge readers to be aware that protest circumstances change quickly; digital security risks, and their mitigation, can vary depending on your location and other contexts. 

This post has two sections covering resources for navigating protests and resources for navigating networks.

Resources for Navigating Protests

Resources for Navigating Network Issues

Resources for Navigating Protests

To attend protests safely, demonstrators need to consider many factors and threats: these range from protecting themselves from harassment and their own devices’ location tracking capabilities, to balancing the need to use technologies for documenting law enforcement brutality and disseminating information. Another consideration is using encryption to protect data and messages from unintended readers. Some resources that may be helpful are:

  • For Protestors (Colombia)
  • For Bringing Devices to Protests
  • For Using Videos and Photos to Document Police Brutality, Protect Protesters’ Faces, and Scrub Metadata (a minimal metadata-scrubbing sketch follows this list)
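The last resource above touches on scrubbing metadata from photos before sharing them. As a rough illustration of what that means, here is a minimal sketch, assuming Python with the Pillow imaging library is available (our own example, not taken from the linked guides), that re-saves a photo without the EXIF metadata the original file may carry, such as GPS coordinates and device identifiers. The dedicated tools in the guides above also handle blurring faces, which this sketch does not do.

```python
from PIL import Image  # Pillow: pip install Pillow

# Copy only the pixel data into a fresh image, leaving behind EXIF
# metadata (GPS coordinates, timestamps, camera identifiers) that the
# original JPEG may contain. File names here are placeholders.
original = Image.open("protest_photo.jpg")
scrubbed = Image.new(original.mode, original.size)
scrubbed.putdata(list(original.getdata()))
scrubbed.save("protest_photo_scrubbed.jpg")
```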

Resources for Navigating Network Issues

What happens if the Internet is really slow, down altogether, or there’s some other problem keeping people from connecting online? What if social media networks remove or block content from being widely seen, and each platform has a different policy for addressing content issues? We’ve included some resources for understanding hindrances to sending messages and posts or connecting online. 

  • For Network and Platform Blockages (Colombia)
  • For Network Censorship
  • For Selecting a Circumvention Tool

If circumvention (not anonymity) is your primary goal for accessing and sending material online, the following resources might be helpful. Keep in mind that Internet Service Providers (ISPs) are still able to see that you are using one of these tools (e.g. that you’re on a Virtual Private Network (VPN) or that you’re using Tor), but not where you’re browsing, nor the content of what you are accessing. 

VPNs

A few diagrams, from the Understanding and Circumventing Network Censorship SSD guide, are included below, showing the difference between default connectivity through your ISP, connecting through a VPN, and connecting through Tor.

Your computer tries to connect to https://eff.org, which is at a listed IP address (the numbered sequence beside the server associated with EFF’s website). The request for that website is made and passed along to various devices, such as your home network router and your ISP, before reaching the intended IP address of https://eff.org. The website successfully loads for your computer.
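As a rough, purely illustrative companion to that diagram (our own sketch, not part of the SSD guide), the snippet below resolves eff.org to its listed IP addresses the way a browser would before connecting; without a circumvention tool, both this lookup and the connection that follows pass through your router and your ISP.

```python
import socket

# Look up the IP addresses behind eff.org, as a browser would before
# opening a connection on port 443 (HTTPS). On a default connection,
# your local network and your ISP can observe this lookup and the
# traffic that follows it.
for *_, sockaddr in socket.getaddrinfo("eff.org", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])
```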

In this diagram, the computer uses a VPN, which encrypts its traffic and connects to eff.org. The network router and ISP might see that the computer is using a VPN, but the data is encrypted. The ISP routes the connection to the VPN server in another country. This VPN then connects to the eff.org website.

Tor 

Digital security guide on using Tor Browser, which uses the volunteer-run Tor network, from Surveillance Self-Defense (EFF): How to: Use Tor on macOS (English), How to: Use Tor for Windows (English), How to: Use Tor for Linux (English), Cómo utilizar Tor en macOS (Español), Cómo Usar Tor en Windows (Español), Como usar Tor en Linux (Español)

The computer uses Tor to connect to eff.org. Tor routes the connection through several “relays,” which can be run by different individuals or organizations all over the world. The final “exit relay” connects to eff.org. The ISP can see that you’re using Tor, but cannot easily see what site you are visiting. The owner of eff.org, similarly, can tell that someone using Tor has connected to its site, but does not know where that user is coming from.
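For readers who want to see this routing in practice from a script rather than from Tor Browser, here is a minimal sketch. It assumes a local Tor client is already running with its default SOCKS proxy on port 9050 and that the Python requests library is installed with SOCKS support (pip install requests[socks]); it illustrates the idea and is not a substitute for Tor Browser, which offers protections a bare proxy does not.

```python
import requests

# Send a request through a locally running Tor client. The "socks5h"
# scheme makes the proxy resolve the hostname inside the Tor network,
# so your ISP sees neither the DNS lookup nor the destination site.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}
response = requests.get("https://www.eff.org", proxies=proxies, timeout=60)
print(response.status_code)
```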

For Peer-to-Peer Resources

Peer-to-Peer alternatives can be helpful during a shutdown or during network disruptions and include tools like the Briar App, as well as other creative uses such as Hong Kong protesters’ use of AirDrop on iOS devices.

For Platform Censorship and Content Takedowns

If your content is taken down from services like social media platforms, this guide may be helpful for understanding what might have happened, and making an appeal (Silenced Online): How to Appeal (English)

For Identifying Disinformation

Verifying the authenticity of information (like determining if the poster is part of a bot campaign, or if the information itself is part of a propaganda campaign) is tremendously difficult. Data & Society’s reports on the topic (English), and Derechos Digitales’ thread (Español) on what to pay attention to and how to check information might be helpful as a starting point. 

Need More Help?

For those on the ground who need digital security assistance, Access Now has a 24/7 Helpline for human rights defenders and folks at risk, which is available in English, Spanish, French, German, Portuguese, Russian, Tagalog, Arabic, and Italian. You can contact their helpline at https://www.accessnow.org/help/

Thanks to former EFF fellow Ana Maria Acosta for her contributions to this piece.

Shirin Mori

Community Control of Police Spy Tech

3 weeks 3 days ago

All too often, police and other government agencies unleash invasive surveillance technologies on the streets of our communities, based on the unilateral and secret decisions of agency executives, after hearing from no one except corporate sales agents. This spy tech causes false arrests, disparately burdens BIPOC and immigrants, invades our privacy, and deters our free speech.

Many communities have found Community Control of Police Surveillance (CCOPS) laws to be an effective step on the path to systemic change. CCOPS laws empower the people of a community, through their legislators, to decide whether or not city agencies may acquire or use surveillance technology. Communities can say “no,” full stop. That will often be the best answer, given the threats posed by many of these technologies, such as face surveillance or predictive policing. If the community chooses to say “yes,” CCOPS laws require the adoption of use policies that secure civil rights and civil liberties, and ongoing transparency over how these technologies are used.

The CCOPS movement began in 2014 with the development of a model local surveillance ordinance and the launch of a statewide surveillance campaign by the ACLU affiliates in California. By 2016, a broad coalition including EFF, the ACLU of Northern California, CAIR San Francisco-Bay Area, Electronic Frontier Alliance (EFA) member Oakland Privacy, and many others passed the first ordinance of its kind in Santa Clara County, California. EFF has worked to enact these laws across the country, and so far, 18 communities have done so. A map of where they are is embedded below.

[Embedded Google map of communities with CCOPS laws: https://www.google.com/maps/d/embed?mid=1GN7RoV6w8hGOcKbu5qnycbwznnwWFpWu. Privacy info: this embed serves content from google.com.]

 

These CCOPS laws generally share some common features. If an agency wants to acquire or use surveillance technology (broadly defined), it must publish an impact statement and a proposed use policy. The public must be notified and given an opportunity to comment. The agency cannot use or acquire this spy tech unless the city council grants permission and approves the use policy. The city council can require improvements to the use policy. If a surveillance technology is approved, the agency must publish annual reports regarding its use of the technology and compliance with the approved policies. There are also important differences among these CCOPS laws. This post will identify the best features of the first 18 CCOPS laws, to show authors of the next round how best to protect their communities. Specifically:

  • The city council must not approve a proposed surveillance technology unless it finds that the benefits outweigh the cost, and that the use policy will effectively protect human rights.
  • The city council needs a reviewing body, with expertise regarding surveillance technology, to advise it in making these decisions.
  • Members of the public need ample time, after notice of a proposed surveillance technology, to make their voices heard.
  • The city council must review not just the spy tech proposed by agencies after the CCOPS ordinance is enacted, but also any spy tech previously adopted by agencies. If the council does not approve it, use must cease.
  • The city council must annually review its approvals, and decide whether to modify or withdraw these approvals.
  • Any emergency exemption from ordinary democratic control must be written narrowly, to ensure the exception will not swallow the rule.
  • Members of the public must have a private right of action so they can go to court to enforce both the CCOPS ordinance and any resulting use policies.

Authors of CCOPS legislation would benefit from reviewing the model bill from the ACLU. Also informative are the recent reports on enacted CCOPS laws from Berkeley Law’s Samuelson Clinic and from EFA member Surveillance Technology Oversight Project, as well as Oakland Privacy and the ACLU of Northern California’s toolkit for fighting local surveillance.

Strict Standard of Approval

There is a risk that legislative bodies become mere rubber stamps, providing a veneer of democracy over bureaucratic theater. As with any legislation, the power, or the fault, is in the details.

Oakland’s ordinance accomplishes this by making it clear that legislative approval should not be the default. It is not the city council’s responsibility, or the community’s, to find a way for agency leaders to live out their sci-fi dreams. Lawmakers must not approve the acquisition or use of a surveillance technology unless, after careful deliberation and community consultation, they find that the benefits outweigh the costs, that the proposal effectively safeguards privacy and civil rights, and that no alternative could accomplish the agency’s goals with lesser costs—economically or to civil liberties.

A Reviewing Body to Assist the City Council

Many elected officials do not have the technological proficiency to make these decisions unassisted.  So the best CCOPS ordinances designate a reviewing body responsible for providing council members the guidance needed to ask the right questions and get the necessary answers. A reviewing body builds upon the core CCOPS safeguards: public notice and comment, and council approval. Agencies that want surveillance technology must first seek a recommendation from the reviewing body, which acts as the city’s informed voice on technology and its privacy and civil rights impacts.

When Oakland passed its ordinance, the city already had a successful model to draw from. Coming out of the battle between police and local advocates who had successfully organized to stop the Port of Oakland’s Domain Awareness Center, the city had a successful Privacy Advisory Commission (PAC). So Oakland’s CCOPS law tasked the PAC with providing advice to the city council on surveillance proposals.

While Oakland’s PAC is made up exclusively of volunteer community members with a demonstrated interest in privacy rights, San Francisco took a different approach. That city already had a forum for city leadership to coordinate and collaborate on technology solutions. Its fifteen-member Committee on Information Technology (COIT) is composed of thirteen department heads—including the President of the Board of Supervisors—and two members of the public.

There is no clear rule-of-thumb on which model of CCOPS reviewing body is best. Some communities may question whether appointed city leaders might be apprehensive about turning down a request from an allied city agency, instead of centering residents' civil rights and personal freedoms. Other communities may value the perspective and attention that paid officials can offer to carefully consider all proposed surveillance technology and privacy policies before they may be submitted for consideration by the local legislative body. Like the lawmaking body itself, these reviewing bodies’ proceedings should be open to the public, and sufficiently noticed to invite public engagement before the body issues its recommendation to adopt, modify, or deny a proposed policy.

Public Notice and Opportunity to Speak Up

Public notice and engagement are essential. For that participation to be informed and well-considered, residents must first know what is being proposed, and have the time to consult with unbiased experts or otherwise educate themselves about the capabilities and potential undesired consequences of a given technology. This also allows time to organize their neighbors to speak out. Further, residents must have sufficient time to do so. For example, Davis, California, requires a 30-day interval between publication of a proposed privacy policy and impact report, and the city council’s subsequent hearing regarding the proposed surveillance technology.

New York City’s Public Oversight of Surveillance (POST) Act is high on transparency, but wanting on democratic power. On the positive side, it provides residents with a full 45 days to submit comments to the NYPD commissioner. Other cities would do well to provide such meaningful notice. However, due to structural limits on city council control of the NYPD, the POST Act does not accomplish some of the most critical duties of this model of surveillance ordinance—placing the power and responsibility to hear and address public concerns with the local legislative body, and empowering that body to prohibit harmful surveillance technology.

Regular Review of Technology Already in Use

The movement against surveillance equipment is often a response to the concerning ways that invasive surveillance has already harmed our communities. Thus, it is critical that any CCOPS ordinance apply not just to proposals for new surveillance tech, but also to the continued use of existing surveillance tech.

For example, city agencies in Davis that possessed or used surveillance technology when that city’s CCOPS ordinance went into effect had a four-month deadline to submit a proposed privacy policy. If the city council did not approve it within four regular meetings, then the agency had to stop using it. Existing technology must be subject to at least the same level of scrutiny as newer technology. Indeed, the bar should arguably be higher for existing technologies, considering the likely existence of a greater body of data showing their capabilities or lack thereof, and any prior harm to the community.

Moving forward, CCOPS ordinances must also require that each agency using surveillance technology issue reports about it on at least an annual basis. This allows the city council and public to monitor the use and deployment of approved surveillance technologies. Likewise, CCOPS ordinances must require the city council, at least annually, to revisit its decision to approve a surveillance technology. This is an opportunity to modify the use policies, or end the program altogether, when it becomes clear that the adopted protections have not been sufficient to protect rights and liberties.

In Yellow Springs, Ohio, village agencies must facilitate public engagement by submitting annual reports to the village council, and making them publicly available on their websites. Within 60 days, the village council must hold a public hearing about the report with opportunity for public comment. Then the village council must determine whether each surveillance technology has met its standards for approval. If not, the village council must discontinue the technology, or modify the privacy policy to resolve the failures.

Emergency Exceptions

Many CCOPS ordinances allow police to use surveillance technology without prior democratic approval, in an emergency. Such exceptions can easily swallow the rule, and so they must be tightly drafted.

First, the term “emergency” must be defined narrowly, to cover only imminent danger of death or serious bodily injury to a person. This is the approach, for example, in San Francisco. Unfortunately, some cities extend this exemption to also cover property damage. But police facing large protests can always make ill-considered claims that property is at risk.

Second, the city manager alone must have the power to allow agencies to make emergency use of surveillance technology, as in Berkeley. Suspension of democratic control over surveillance technology is a momentous decision, and thus should come only from the top.

Third, emergency use of surveillance technology must have tight time limits. This means days, not weeks or months. Further, the legislative body must be quickly notified, so it can independently and timely assess the departure from legislative control. Yellow Springs has the best schedule: emergency use must end after four days, and notification must occur within ten days.

Fourth, CCOPS ordinances must strictly limit retention and sharing of personal information collected by surveillance technology on an emergency basis. Such technology can quickly collect massive quantities of personal information, which then can be stolen, abused by staff, or shared with ICE. Thus, Oakland’s staff may not retain such data, unless it is related to the emergency or is relevant to an ongoing investigation. Likewise, San Francisco’s staff cannot share such data, except based on a court’s finding that the data is evidence of a crime, or as otherwise required by law.

Enforcement

It is not enough to enact an ordinance that requires democratic control over surveillance technology. It is also necessary to enforce it. The best way is to empower community members to file their own enforcement lawsuits. These are often called a private right of action. EFF has filed such surveillance regulation enforcement litigation, as have other advocates like Oakland Privacy and the ACLU of Northern California.

The best private rights of action broadly define who can sue. In Boston, for example, “Any violation of this ordinance constitutes an injury and any person may institute proceedings.” It is a mistake to limit enforcement just to a person who can show they have been surveilled. With many surveillance tools capturing information in covert dragnets, it can be exceedingly difficult to identify such people, or prove that you have been personally impacted, despite a brazen violation of the ordinance. In real and immutable ways, the entire community is harmed by unauthorized surveillance technology, including through the chilling of protest in public spaces.

Some ordinances require a would-be plaintiff, before suing, to notify the government of the ordinance violation, and allow the government to avoid a suit by ending the violation. But this incentivizes city agencies to ignore the ordinance, and wait to see whether anyone threatens suit. Oakland’s ordinance properly eschews this kind of notice-and-cure clause.

Private enforcement requires a full arsenal of remedies. First, a judge must have the power to order a city to comply with the ordinance. Second, there should be damages for a person who was unlawfully subjected to surveillance technology. Oakland provides this remedy. Third, a prevailing plaintiff should have their reasonable attorney fees paid by the law-breaking agency. This ensures access to the courts for everyone, and not just wealthy people who can afford to hire a lawyer. Davis properly allows full recovery of all reasonable fees. Unfortunately, some cities cap fee-shifting at far less than the actual cost of litigating an enforcement suit.

Other enforcement tools are also important. Evidence collected in violation of the ordinance must be excluded from court proceedings, as in Somerville, Massachusetts. Also, employees who violate the ordinance should be subject to workplace discipline, as in Lawrence, Massachusetts.

Next Steps

The movement to ensure community control of government surveillance technology is gaining steam. If we can do it in cities across the country, large and small, we can do it in your hometown, too. The CCOPS laws already on the books have much to teach us about how to write the CCOPS laws of the future.

Please join us in the fight to ensure that police cannot decide by themselves to deploy dangerous and invasive spy tech onto our streets. Communities, through their legislative leaders, must have the power to decide—and often they should say “no.”

 

Related Cases: Williams v. San Francisco
Nathan Sheard