Brazil’s Fake News Bill: Congress Must Stand Firm on Repealing Dangerous and Disproportionate Surveillance Measures

This post is the first of two analyzing the risks of approving dangerous and disproportionate surveillance obligations in the Brazilian Fake News bill. You can read our second article here.

The revised text of Brazil’s so-called Fake News bill (draft bill 2630), aimed at countering disinformation online, contains both good and bad news for user privacy compared to previous versions. In a report released by Congressman Orlando Silva in late October, following a series of public hearings in the Chamber of Deputies, the most recent text seeks to address civil society’s claims against provisions harmful to privacy.

Regarding serious flaws EFF previously pointed out: first, the bill no longer sets a general regime for users' legal identification. Second, it does not require social media and messaging companies to give their staff in Brazil remote access to user logs and databases, a provision that would bypass international cooperation safeguards and create privacy and security risks. Most importantly, it drops the traceability mandate for instant messaging applications, under which forwarding information would have been tracked. We hope Members of Congress preserve all these positive and critical changes in the upcoming debates.

However, the text of the bill also has significant downsides for privacy. Among them, Article 18 of the draft legislation would expose some users' IDs, requiring providers to make publicly available, by default, the national ID number of natural persons paying for content that mentions political parties or candidates, as well as the name of the person who authorized the ad message. Besides the potential for harassment and retaliation based on users' political leanings, the provision creates a trove of personal data ripe for political profiling via a unique national ID number.

These records can be cross-referenced with several other government and corporate databases. Even though Brazil's data protection law sets safeguards for the use of sensitive and publicly available personal data, the stakes here are high. The government "antifascist" dossiers and lists in Brazil, including the names of public officials, influencers, journalists, and university teachers, revealed by the press over the last couple of years, are a serious demonstration of the problem.

The bill also establishes that internet platforms can have their activities prohibited or temporarily suspended as part of the penalties for noncompliance. According to Article 2, the draft law applies to social networks, search engines, and instant messaging service providers with over two million users registered in Brazil. Blocking entire websites and internet platforms raises many technical and fundamental rights concerns. International freedom of expression standards emphasize that even blocking specific content is only permissible in exceptional cases of clearly illegal content or speech not covered by freedom of expression safeguards, such as direct and public incitement to genocide.

Yet, blocking entire internet applications as a penalty for noncompliance with the law clashes with those standards. As stressed by the UN Human Rights Committee, generic bans on the operation of certain sites and systems are not compatible with Article 19, paragraph 3 of the UN's International Covenant on Civil and Political Rights (ICCPR). Similarly, the Council of Europe has recommended that public authorities should not, through general blocking measures, deny access by the public to information on the internet, regardless of frontiers.

Blocking applications to force their compliance with the law is also prone to abuse, even when ordered by courts. We have seen how this can lead to abusive enforcement, for instance, with judicial orders requiring encrypted applications to hand over the content of communications or encryption keys to authorities (like WhatsApp in Brazil or Telegram in Russia). In a constitutional challenge to WhatsApp being blocked in Brazil, Justice Rosa Weber examined the legal provision allegedly authorizing the blocking and rejected any interpretation that entailed punishing providers for failure to comply with a court order to turn over the content of communications “that can only be obtained by deliberately weakening privacy protection mechanisms embedded in the application’s architecture.”

Trial proceedings have been halted since May 2020. In the meantime, provisions in the draft reform of Brazil’s Criminal Procedure Code pose similar threats through assistance obligations. Justice Weber had it right: refusing to deliberately weaken the privacy protections built into their products should not be grounds for blocking internet platforms. Blocking deprives users of the fundamental right to free expression and should be approached with extreme caution.

The provision of disproportionate surveillance measures in the Fake News bill raises related red flags. The remainder of this post delves into why Brazilian legislators must align with the bill's rapporteur and reject the traceability rule. Our second article goes deeper into the perils and flaws of expanding existing data retention mandates in Brazilian law. Following the Brazilian Congress’ recent approval of an amendment explicitly including data protection as a fundamental right under the country's Federal Constitution, legislators should embrace the principle of data minimization and proportionate personal data processing when assessing and voting on the Fake News bill.

Twists and Turns in the Traceability Rule, and Why It Should Stay Out

The dangerous traceability rule, approved in the Senate but wisely dropped from the current version of the bill, would have forced messaging applications to retain information about who shared communications that had been “massively forwarded.” The provision required three months of stored data showing the complete chain of forwarded communications, including the date and time of each forwarding and the total number of users who received the message. Although these obligations were conditioned on virality thresholds, the service provider was expected to temporarily retain this data for all forwarded messages during a 15-day period to determine whether or not the virality threshold was met.

As we have stressed, this provision undermines users’ expectations of privacy and security in messaging services like WhatsApp and iMessage, and raises serious due process concerns. It reverses the burden of proof against users in two dimensions. First, merely sharing a viral message may put a user under suspicion, placing the burden on the user to demonstrate that they did not have malicious intent when forwarding it. Second, the rule blurs the lines between the originator of a communication chain in a messaging application and the actual creator of the message’s content. Although the former does not equate to the latter, as previously explained, it would be up to the originator of the communications chain to prove they are not the content’s author.

An informal alternative version of the bill with changes in the traceability rule was circulated a few days before the official submission of Congressman Silva’s new report. The language of the informal version generically required messaging services to have the ability to identify the original sender of massively disseminated messages (resembling India’s troublesome traceability rules). Even worse than the previous version of the bill, this article did not define what a massive message meant nor specify time limits for data retention. But just like the previous traceability mandate, it is designed to lead messaging services to collect information about all forwarded messages, regardless of whether they were maliciously shared and even before the message content is deemed a problem.

All in all, both versions are designed to push companies away from strong encryption safeguards aimed at ensuring that an adversary can neither confirm nor disconfirm guesses about a message’s content. On WhatsApp, for example, forwarding information is protected and remains encrypted for the messaging provider (even though users can see on their devices that a message was forwarded, this information is encrypted on the company's server side).

Fortunately, the traceability mandate was dropped from the proposal officially presented. The new text sets out a metadata preservation order.

The Proposed Metadata Preservation Order is a More Proportionate Call

According to Article 13 of the current official bill, a court order can direct messaging applications to prospectively preserve and make available interaction records pertaining to specific users for a period of no longer than 15 days, renewable up to a maximum of 60 days. The rule is subject to the same high-level requirements applied to the interception of communications content set out in Law 9.296/1996 (Brazil's Telephone Interception Law). In this case, the preservation of such records occurs after a request relating to specific users, not by default for users in general.

Interaction records capture the date and time that specific users have sent and received messages and audio calls. There is currently no law directly authorizing judges to order companies to preserve such interaction records. The article prohibits linking the data to the content of communications and voids generic requests. It also disallows orders that exceed the scope and technical limits of the service, therefore preserving secure end-to-end encryption and other privacy-by-design implementations that protect users' data and communications.

Metadata preservation orders in criminal investigations that respect the security and privacy features built into applications can provide relevant information about individuals suspected of involvement in serious crimes. And they do so without jeopardizing strong encryption or encouraging the mass retention of data associated with the communications of millions of users. As data protection scholar Danilo Doneda argued, this is a much more proportionate proposition.

Nonetheless, the language of the last paragraph of the provision, which allows the judge to require additional information relating to a specific user, still needs tweaking to make explicit that such requests complement the metadata preservation order and, therefore, follow the same privacy-protective requirements.

In line with Congress’ approval of data protection as a fundamental right in the country’s Constitution, and following the safeguards of Brazil’s data protection law, legislators must align with the bill’s rapporteur and reject the traceability mandate in the Fake News bill.     

Veridiana Alimonti

Senators Push To Study “Unstable” Patent Law, While Patent Trolls Cheer Them On

Led by a group of senators that spent much of 2019 trying to change U.S. patent law for the worse, the U.S. Patent and Trademark Office (PTO) has agreed to study the “current state of patent jurisprudence.” The details of the study make it clear that its proposers believe in a narrative created by patent owners, that the 2014 Alice case has introduced “uncertainty” into patent law.

That would be Alice v. CLS Bank, the Supreme Court case that made it clear you can’t get a patent on an abstract idea just by adding in generic computer language.

As we’ve told Congress before, the only ones experiencing “uncertainty” because of the Alice precedent are people and companies using weak software patents to demand money from others.

For people who actually work with and on software, the Alice precedent has produced more certainty than ever before. Software innovation in the past seven years has been extraordinary. In the U.S., since Alice, the software industry is experiencing record profits and levels of employment. It certainly helps that more baseless patent lawsuits are being thrown out by courts. Truly innovative companies that build stuff—rather than rely on software patents—are thriving.

In 2020, the great majority of software-related appeals where patent eligibility was at issue ended up with the patents being found invalid. That’s happening because of Alice—we can’t and we won’t let that progress get rolled back.

In our “Saved by Alice” project, we’ve told some of the stories of small businesses faced with extortionate demands from patent assertion entities claiming to have patented basic aspects of doing business. These small companies—often with a sole proprietor or just a handful of employees—were threatened by patent owners claiming wide-ranging rights in things like online voting, package tracking, and online picture menus. Because of the Alice precedent, the patent trolls in these cases weren’t able to get away with it.

Despite the clear evidence that Alice is working, a few senators have decided this area of law is so “unstable” it requires a government study, but the only instability comes from patent maximalists resisting the clear rulings handed down by the Supreme Court. Last week, EFF filed comments explaining why the Alice framework for analyzing patents is working well. If the PTO wants to study patents, it would do well to study whether we should have software patents at all. There’s no evidence software patents have led to a net gain in innovation, and there’s growing evidence of their harm. 

EFF’s full comments to the PTO are available on our website. Comments from other individuals and organizations can be read under two different docket numbers: PTO-P-2021-0032-0004, as well as the earlier PTO-P-2021-0032-0002.

Joe Mullin

Face Recognition Is So Toxic, Facebook Is Dumping It

Facebook announced it is, for now, shutting down its face recognition program, which created face prints of users and automatically recognized them in uploaded photos. The decision to end the program comes at a time when face recognition technology is facing pushback, criticism, and legislative bans across the United States and the globe. Close to 20 U.S. cities, including San Francisco and Boston, have banned government use of face recognition. There is also growing momentum to legislate against biometric surveillance in the European Union and in New Zealand.

Facebook’s discontinuation of this program, including the reported deletion of over one billion face prints, makes it one of the largest face recognition programs to be ended since the technology was invented. As Facebook wrote in its statement, “This change will represent one of the largest shifts in facial recognition usage in the technology’s history. More than a third of Facebook’s daily active users have opted in to our Face Recognition setting and are able to be recognized, and its removal will result in the deletion of more than a billion people’s individual facial recognition templates.” An earlier version of Facebook’s program collected faceprints from its users without their consent, which violated the Illinois Biometric Information Privacy Act (BIPA). The company settled a BIPA lawsuit by paying its Illinois users $650 million.

Facebook says that it will maintain the use of face recognition in “services that help people gain access to a locked account, verify their identity in financial products or unlock a personal device.” Also, the company imagines a future in which the technology could be reintroduced to make the platform more accessible. But for now, it has weighed that use against the ongoing social harms of face recognition technology. 

We’ve long advocated for a complete ban on government use of face recognition—which is invasive, inaccurate, and disproportionately harmful to people of color. Commercial use of face recognition technology presents its own range of privacy and security concerns. While federal legislation has been introduced to mitigate the risks of private use of face recognition, in most parts of the country, residents remain largely unprotected. 

Companies will continue to feel the pressure of activists and concerned users so long as they employ invasive biometric technologies like face recognition. This is especially true for corporate systems that process users’ biometrics without their freely given opt-in consent, or that store the data in ways that are vulnerable to theft or easily accessible to law enforcement. Facebook’s step is just one very large domino in the continued fight against face recognition technology. 

Matthew Guariglia

PDX Privacy: Building Community Defenses in Difficult Times

The Electronic Frontier Alliance is made up of more than seventy groups of concerned community members, often including workers in the tech industry who see issues of the industry from the inside. One of the Alliance’s most active members is PDX Privacy, a Portland-based privacy group whose membership advocates for local and state privacy protections, and overlaps with Portland’s Techno-Activism 3rd Mondays, a year-round campaign of workshops, speakers, and panels.

Here, the EFF Organizing Team talks to three members of PDX Privacy about how they started, and what they’ve learned fighting for privacy through both advocacy and popular education.

What is PDX Privacy?

Chris: We’re a group of local residents who really care about privacy. We're trying to educate the community to advocate for privacy-centric and anti-surveillance policies.

AJ: We're all volunteers, and, in addition to advocating for public policy and changes in our community, a big part of what we do is also to educate people in our community about some of the local issues related to privacy, how people are surveilling us, why privacy is important. And, some specific things going on in our community that they can advocate for.

How did PDX Privacy start?

Chris: It started back in 2017. At that point it was me with a Twitter account. When the 2016 election happened, it meant that privacy was now on the shelf for a while. I didn't want that to happen. So, I wanted to keep working on the community control over police surveillance objective. I didn't really know how to go about that at first but I just started looking for people who also cared about it, and then there were a few, and then we got Michael and AJ in there.

AJ: I joined the group in the Summer of 2018. I think I just found it on kind of a local aggregator for tech related events, but I came to a meeting of the TA3M that Chris hosted and found out about this PDX Privacy group, and wanted to help out.

Michael: Around late 2017, I started to become really passionate about these privacy-related issues, and I happened to find a TA3M meetup. It's just the privacy happy hour that I met Chris at and we got to talking about privacy.

Before PDX Privacy, what made privacy an issue that was important to you?

Chris: TA3M is the gateway drug. I think that I've always cared about privacy and just finding that balance of what you share and what you don't. Things that I've shared that then came back to bite me. But then, there was Snowden. I’m an electrical engineer by trade. I was aware of a lot of the technologies, I just didn't realize the extent to which they were being used against the general population. I think that made me want to protect my data more. I thought unless I’m actually committing some crime I don't really think that I should have my every moment monitored and every single thing I say or do being recorded.

Michael: My concerns centered around overreach by government and corporations and the chilling effect that that can have on free speech when you're under mass surveillance. I think our most important work with PDX Privacy is bringing awareness to that and helping pump the brakes on some of those technologies until we can figure out a way that protects privacy and freedom of speech.

AJ: I spent a number of years in my career working in enterprise software in the data and analytics industry. So not the companies that are collecting our personal data, but the companies that build the software and figure out what to do with that data once it's collected. They also analyze all kinds of benign data, but I got to know perhaps a little too well what the internet knows about us. And it really freaked me out. And, you know, for a while I was concerned about it, but I felt kind of helpless, like there was nothing that I could really do to stop it, and that nobody really cared about these issues. A real turning point was when the Cambridge Analytica story broke. It wasn't a very surprising story, but what I noticed in that story was that people seemed to care in a way that I hadn't seen them care about other privacy issues before. That really made me feel like there was an interest to do something about it. So that meant, number one, I quit my full-time job and I founded a privacy-focused startup. And then I also found TA3M and PDX Privacy and got involved in local advocacy, to see how we can better educate people on what data is being collected and get them to push their legislators to take appropriate action.

What have been barriers that prevent people from becoming more engaged?

Chris: For me the biggest challenge is just figuring out how things work. Figuring out all the levels of government and who has power and how we can effect change. I think people feel overwhelmed and like there's nothing they can do about it. Our hope is to try to show people ways that we can change things, whether it's just a setting on your phone or it's changing some kind of policy locally or nationally.

Michael: One of the hardest things is making privacy concerns really concrete. It can be abstract.

You mentioned the difficulty in explaining the concepts and making them accessible. Have you found ways that have been more effective than others?

AJ: I presented at TA3M for an hour on why privacy matters. The more you say that, the more you refine it. It helps to take something people are familiar with and draw an analogy, and I think one of the great ones is about encryption: the envelope example, where you can see the address on the envelope but you don't know what's inside it, versus the postcard, where it's totally unencrypted. One of the things that resonates is that the data that's being collected about them is so detailed and granular that even though it's anonymous, it's not really anonymous, because it's so detailed that it can be used to say it's specifically you.

What are some of the surprises? What have you learned now that you're doing more advocacy?

Chris: Here in Portland, there's kind of a team effort in passing laws. Some city councilors or state senators have things that they want to accomplish and they're sometimes happy to have groups like ours supporting their efforts. In some cases, we have to educate them on how things work, but in other cases they're already on board, and they're happy that we're on the same page. So there’s been more of a collaborative effort than I had anticipated.

AJ: An interesting discovery for me is that there are so many issues that aren't necessarily privacy-centric, like police reform, but where privacy is a component. So, as opposed to our own group flying solo, it's about realizing that privacy is an important component but not the only component of a lot of these larger issues, and really seeing where there's momentum and interest. Right now there's a lot of that around police reform. And we know that a lot of these privacy issues most harmfully impact the most vulnerable communities.

Michael: We’ve seen that there is a lot of strength in the community, and a really strong community that comes together to support itself. We certainly have our own advocacy issues that we focus on, but getting involved in the community more broadly and strengthening those relationships is essential. Then when you are going to those other groups and saying ‘we would like some support for this initiative,’ you already have those relationships, it's easier to lean on them.

What do you see on the horizon for PDX privacy?

Chris: We're excited about some recent privacy progress and hope to build on that. Last year, Portland enacted two facial recognition bans, one of which was the first, and I believe is still the only, ban of facial recognition use by private entities. And Verizon had plans to build a drone-testing facility in North Portland. Because of our concerns about surveillance, we joined some other local groups working to set that land aside for community use instead, and Verizon canceled their plans. Portland is currently working on a surveillance ordinance for the city, and we hope to help make the proposed legislation as strong as possible so the public can have input into how and whether they're surveilled and also have transparency about use of surveillance systems. Additionally, we want to engage more with other local organizations to build relationships and work with them. As we grow, we also plan to expand to other parts of the metro area—Washington and Clackamas counties and the cities within them.

AJ: That's a good summary. I think, in addition to the regulatory path, a big priority for us is education about privacy. COVID has made some of those community events more challenging, and we are doing them remotely. Just having a large population of people who care about privacy and are knowledgeable about the issues helps them make decisions which emphasize privacy, but it also helps shift public policy, because they vote on the issues they care about, and if legislators see that their communities care about privacy, they have an incentive to better address those needs. So I think that's a big priority for us as well, in addition to helping shape public policy.

José Martín

Copyright Regulator Eases Restrictions on Research, Education, and Repair

The Digital Millennium Copyright Act (DMCA) has interfered with a staggering array of speech and innovation, from security research to accessibility for those with disabilities to remix and even repair. By forbidding unauthorized access to a copyrighted work—even for purposes that don’t infringe copyright—the DMCA effectively erased over a century of law that limits copyright to protect free expression. 

Every three years, the DMCA requires the Copyright Office and Librarian of Congress to consider the public’s requests for exemptions to this terrible, restrictive law. Building on our previous successes protecting security research, remix culture, jailbreaking, and more, we again participated this cycle.

The latest exemptions [PDF] are mostly an improvement over previous exemptions and represent a victory for security research, accessibility, education, preservation, and repair. While the exemptions do continue to contain unnecessary and harmful limitations, we’re pleased with the additional freedom to operate that the Librarian granted in this rulemaking, including new exemptions to jailbreak streaming video devices like Apple TV or the Fire Stick; to jailbreak routers; and to circumvent in order to identify violations of free, libre, and open-source licensing terms. The latter two exemptions were championed by our friends at the Software Freedom Conservancy.

On the repair front, we achieved an important victory by expanding the scope of the exemption to cover all consumer electronics (with a couple of small carveouts for certain vehicle systems and parts of video game consoles). This means that manufacturers won’t be able to use the law to prevent independent repair. This was a joint effort between advocacy groups and repair organizations representing independent repair of everything from medical devices to boats.

The largest disappointment, however, is that the agency failed to protect the public’s ability to make non-infringing modifications of device software as we requested. We previously successfully advocated for people to be able to make lawful modifications of the software that controls vehicles like cars and tractors, and this kind of user innovation has been a profoundly important driver of advances in technology. It lets communities who are not served by a technology’s default functions customize it to their own needs, either adding new features or taking out unwanted spyware.

EFF submitted multiple examples of noninfringing modification during the rulemaking: improving digital camera software to enable new artistic options for photographers, making your smart litter box accept third-party cleaning cartridges, customizing a drone to operate on a wire instead of flying, improving the interface on a device to make it less distracting or more accessible (e.g. for colorblind users), and more. While the exemption for repair was extended to cover consumer electronics generally as we asked, these kinds of lawful modifications were left out without much discussion. The National Telecommunications and Information Administration (NTIA), which has input on the process, joined us in supporting an exemption for modifying the functionality of devices, but the Copyright Office and Librarian disagreed.

Unfortunately, the rulemaking process is inherently limited: while the Librarian can authorize people to engage in circumvention, they cannot create exemptions to the part of the law that prohibits the dissemination of the technology needed to achieve circumvention. In other words, while you can take advantage of these exemptions, it may still be unlawful to provide you a tool to help you do so. Congress needs to remove that restriction in order to give these legal rights to circumvent their full practical effect. Even better, Congress should do away with this unnecessary and harmful law altogether.

Related Cases: 2021 DMCA Rulemaking
Kit Walsh

Inequitable Access: An Anti-Competitive Scheme by Textbook Publishers

Update: An earlier version of this post described the UC Davis 'Equitable Access' program as it was implemented in Fall 2020. We have updated this post to clarify the changes made to the program in August 2021.

It goes by many names, but no matter how you cut it, the new "Inclusive Access" model for college course materials is a bad deal for students. 

Educators are moving increasingly towards digital textbooks, especially during the COVID-19 pandemic. This has left publishers scrambling to keep access limited and revenues high with paywalls, DRM, and expiring access. These options force students to choose between a rotten deal and gambling with their grade by skipping the purchase altogether.

Rather than challenge these artificial scarcity tactics by embracing Open Education, colleges are making a deal with publishers by creating "inclusive access" models—but this positive sounding name isn't inclusive at all. Under inclusive access, colleges simply charge students for digital textbooks and materials on their tuition bill—and their access often expires when the course is over. This automatic billing only serves to ensnare students. Exploding digital textbooks don't belong on your tuition bill when open licensing offers more equitable alternatives.

Publishers Keeping Up an Old Grift

The rising cost of college textbooks has been an absurd joke for decades. Publishers convince an instructor to use their book and gain potentially hundreds of obligatory student customers, with renewed demand every semester. While students’ pesky habits of sharing and reselling textbooks put some downward pressure on the price of new books, strategic releases of new editions have managed to keep those forces at bay.

However, with the rise in demand for digital course materials, which has accelerated during the pandemic, publishers have sought new ways to enforce an artificial scarcity. Digital goods have virtually no reproduction costs and can be easily remixed for new innovative purposes, but rather than pass those benefits along, publishers instead implement DRM to make using, sharing, and keeping these materials difficult or impossible. The other growing strategy is 'textbook as a service', where textbooks are replaced with paid access to online education platforms rife with privacy concerns and barriers to accessibility, and which similarly revoke access at the end of the term.

This is a terrible deal, and students know it. However, when robbed of the right to share or buy secondhand textbooks, they are only left with one alternative—gamble with their grade by skipping the purchase entirely. After being burned by a few courses in which assigned texts are never used, or better materials can be found online, this starts to look like a sensible strategy. However, this can also backfire when exams and assignments are tailored to a particular text. Often it’s the most vulnerable of students who are driven to take this gamble, and that perpetuates broader social inequities.

An "Inclusive" and "Equitable" Burden

Facing such a travesty, some schools are contracting directly with publishers and campus bookstores (often operated by major booksellers) to simply charge all students through their tuition bill after a brief opt-out or refund period.

What this means in practice is that the student will either be charged for materials they can't afford, or go through the opt-out process and still be at a disadvantage.

This system of "inclusive" access burdens students with switching costs when they choose to buy materials elsewhere. Not only do they need to navigate an opt-out process, but publishers also charge more through other sellers. If you would prefer to support a small local bookstore instead of a Barnes & Noble on campus, you need to jump through hoops, and ultimately pay more.

While administrators will often point to these programs as a way to prevent the gamble students make when foregoing their textbook purchase, what this means in practice is that the student will either be charged for materials they can't afford, or go through the opt-out process and still be at a disadvantage. Even worse, if they decide to make a purchase from a competitor after opting out, like when studying for a midterm or final, they ultimately pay a higher price.

Practically, this means students must make a purchasing decision earlier and with higher stakes, possibly earlier than the deadline to commit to the course itself. If this sounds exhausting—that's the point. The publishers and major bookstores are banking on students feeling overwhelmed and just incurring the material costs.

While "inclusive access" is often a per-course tuition charge, some schools such as UC Davis are exploring what is called an "equitable" access program. In the initial implementation of this program, students were charged a flat $199 per quarter to cover all of their digital purchases. For students who needed to opt out, the deadline was set almost three weeks before classes even began. This program was updated in 2021 to be a little less expensive ($169 per quarter) and to set the opt-out date to 20 days after the start of instruction.

While digital materials don't expire at the end of the semester in this case, restrictive DRM means they are only accessible through the third-party Bookshelf app, a product owned by a subsidiary of the Ingram Content Group (in turn owner of many publishers such as Baker & Taylor, Hachette, and Perseus).

These programs at over 30 institutions are not just brazenly anti-competitive but totally unnecessary. All of the purported benefits of these programs are covered more comprehensively and more equitably by Open Education initiatives.

Open Educational Resources (OER)

Open Education is the simple idea that the power of open licensing should be applied to educational materials. That means students have instant access to all digital materials at no cost, and even better, both they and their instructor are free to use and remix materials under Creative Commons and other open licensing. This opens up the possibility of tailoring these resources to be more relevant and responsive to students of a given school and class.

As an example of truly equitable access, Rice University launched the nonprofit technology initiative OpenStax in 2012, which publishes high-quality and peer-reviewed digital course content for free. This benefits not only their own students, but students at hundreds of universities and colleges across the world.

This isn’t just a good deal for students whose campuses and instructors adopt Open Education, either. Projects like OpenStax have ignited competition in the textbook market, and since 2017, textbook prices have held steady after 50 years of growth that outpaced inflation. Fortunately, they are not alone. There are many universities contributing to, curating, and maintaining a huge quantity of open educational resources which eliminate or drive down the cost of course materials for students.

So what is the hold-up if a library of high quality materials has been available long before the practice of automated textbook billing became widespread? One major barrier is that most instructors have still never heard of OER.

Pushing Back on Exclusionary Access Contracts

Fortunately, a broad coalition of groups defending free culture, coordinated by SPARC, has recently launched a site which offers talking points and information to help students and other members of a school community educate decision-makers. If you already have automated textbook billing on campus, SPARC's contract library will help you sift through the fine print of the deal.

Campus advocacy is an essential first step for defending against unfair or invasive school contracts with publishers and vendors when they pop up. Reaching out to librarians and administrators about how to support Open Education is the best first step. If your efforts convince just one instructor to adopt Open Education Resources—or release their own material under an open license—you can contribute to more equity for students and less time in course prep for instructors.

With enough momentum, you and fellow organizers can make use of EFF’s own organizing toolkits as well as resources from open textbook alliance and OpenStax. If you start meeting regularly on the issue, consider joining our grassroots information sharing network, the Electronic Frontier Alliance, for guidance from the EFF and fellow alliance members on any digital rights issues on campus.

EFF is proud to celebrate Open Access Week.

Rory Mir

The Internet Archive Transforms Access to Books in a Digital World

2 months 3 weeks ago

In honor of Open Access Week, and particularly this year’s theme of structural equity, we wanted to highlight a project from the Internet Archive that is doing extraordinary work promoting access to knowledge. The bad news: that project is also under legal threat. The good news: the Archive, with help from EFF and Durie Tangri, is fighting back.

The Archive is a nonprofit digital library that has had one guiding mission for almost 25 years: to provide universal access to all knowledge. Democratizing access to books is a central part of that mission. That’s why the Archive has been working with other libraries for almost a decade to digitize and lend books via Controlled Digital Lending (CDL).

This service has been especially crucial during the pandemic, but will be needed long afterwards.

CDL allows people to check out digital copies of books for two weeks or less, and only permits patrons to check out as many digital copies as the Archive and its partner libraries physically own. Lending happens on an “own to loan” basis—if a digital copy is checked out to a patron, the physical copy is unavailable to other patrons as well. CDL does use DRM to enforce that limited access, but it is still true that anyone with an Internet connection can read digital versions of the great works of human history.
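The "own to loan" constraint lends itself to a simple model. The sketch below is purely illustrative (the class, names, and loan period are invented for this post, not drawn from the Archive's actual software); it shows how a lender can guarantee that digital checkouts never exceed the physical copies it owns, and that each loan expires on schedule:

```python
from datetime import datetime, timedelta

# Illustrative sketch only -- not the Internet Archive's real system.
# An "own to loan" lending pool: the library never lends more digital
# copies of a title than the physical copies it owns, and each loan
# expires automatically.
class CDLTitle:
    LOAN_PERIOD = timedelta(days=14)  # "two weeks or less"

    def __init__(self, owned_copies: int):
        self.owned_copies = owned_copies
        self.loans = {}  # patron -> loan expiry time

    def _expire_loans(self, now: datetime) -> None:
        # Loans past their due date free up a copy for the next patron.
        self.loans = {p: due for p, due in self.loans.items() if due > now}

    def checkout(self, patron: str, now: datetime) -> bool:
        self._expire_loans(now)
        if patron in self.loans or len(self.loans) >= self.owned_copies:
            return False  # every owned copy is already checked out
        self.loans[patron] = now + self.LOAN_PERIOD
        return True

    def checkin(self, patron: str) -> None:
        self.loans.pop(patron, None)
```

With a single owned copy, a second patron's checkout simply fails until the first copy is checked back in or the loan lapses, mirroring how a physical copy is unavailable while it is on loan.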

This service has been especially crucial during the pandemic, but will be needed long afterwards. Many families cannot afford to buy all the books they and their kids want or need to access, and look to libraries to fill the gap. Researchers may locate books they need, but discover they are out of print. Others simply want access to knowledge. And all of these people may not be able to visit the physical library that houses the works they need. CDL helps to solve that problem, creating a lifeline to trusted information. It also fosters research and learning by keeping books in circulation when their publishers are unable or unwilling to do so.

But four giant publishers want to shut that service down. Last year, Hachette, HarperCollins, Wiley, and Penguin Random House sued the Archive, alleging that CDL has cost their companies millions of dollars and is a threat to their businesses. They are wrong. Libraries have paid publishers billions of dollars for the books in their print collections. They are investing enormous resources in digitization in order to preserve those texts. CDL simply helps libraries ensure the public can make full use of the books that libraries have already bought and paid for. Digitizing enables the preservation of physical books, increasing the likelihood that the books a library owns can be used by patrons. Digitizing and offering books online for borrowing unlocks them for communities with limited or no access.

Readers in the internet age need a comprehensive library that meets them where they are.

The Archive and the hundreds of libraries and archives that support it are not thieves. They are librarians, striving to serve their patrons online just as they have done for centuries in the brick-and-mortar world. Governments around the world have recognized the importance of that mission and enacted a host of rules to ensure that copyright law does not impede it. It's a shame that these publishers would rather spend money on lawyers than on fostering and improving access to books. What is worse, the publishers want the Archive to defend CDL with one arm tied behind its back.  They’ve claimed CDL hurts their bottom line, but are doing their level best to limit investigation into that supposed harm. For example, the publishers spoke often about CDL with a powerful industry trade association, which presumably included discussions of any such harm, but they are refusing to share those communications based on claims of privilege that just don’t pass the smell test. Meanwhile, members of Congress recently launched an investigation into e-book licensing practices that may shed light on the digital book ecosystem, and the onerous restrictions that impede libraries’ ability to serve their patrons.

Within that context, the Archive has made careful efforts to ensure its uses are lawful. The CDL program is sheltered by copyright’s fair use doctrine, buttressed by traditional library protections. Specifically, the project serves the public interest in preservation, access, and research—all classic fair use purposes. Every book in the collection has already been published and most are out of print. Patrons can borrow and read entire volumes, to be sure, but that is what it means to check a book out from a library. As for its effect on the market for the works in question, the books have already been bought and paid for by the libraries that own them or, in some instances, individuals who donate them. The public derives tremendous benefit from the program, and rightsholders will gain nothing if the public is deprived of this resource.

Readers in the internet age need a comprehensive library that meets them where they are—an online space that welcomes everyone to use its resources, while respecting readers’ privacy and dignity. EFF is proud to represent the Archive in this important fight.

EFF is proud to celebrate Open Access Week.


Related Cases: Hachette v. Internet Archive
Corynne McSherry

Europe's Digital Services Act: On a Collision Course With Human Rights

2 months 3 weeks ago

Last year, the EU introduced the Digital Services Act (DSA), an ambitious and thoughtful project to rein in the power of Big Tech and give European internet users more control over their digital lives. It was an exciting moment, as the world’s largest trading bloc seemed poised to end a string of ill-conceived technological regulations that were both ineffective and incompatible with fundamental human rights.

We were (cautiously) optimistic, but we didn’t kid ourselves: the same bad-idea-havers who convinced the EU to mandate over-blocking, under-performing, monopoly-preserving copyright filters would also try to turn the DSA into yet another excuse to subject Europeans’ speech to automated filtering.

We were right to worry.

The DSA is now steaming full speed ahead on a collision course with even more algorithmic filters - the decidedly unintelligent “AIs” that the 2019 Copyright Directive ultimately put in charge of 500 million people’s digital expression in the 27 European member states.

Copyright filters are already working their way into national law across the EU as each country implements the 2019 Copyright Directive. Years of experience have shown us that automated filters are terrible at spotting copyright infringement, both underblocking (permitting infringement to slip through) and overblocking (removing content that doesn’t infringe copyright) - and filters can be easily tricked by bad actors into blocking legitimate content, including (for example) members of the public who record their encounters with police officials.
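To make those two failure modes concrete, consider a toy filter. This is a deliberately naive, hypothetical sketch (no real platform's filter works this simply, and the blocked title and example posts are invented), but it exhibits the same overblocking and underblocking in miniature:

```python
# Toy illustration only -- no real platform's filter is this simple.
# A naive substring matcher meant to catch unauthorized copies of a song.
BLOCKLIST = ["shake it off"]  # hypothetical blocked work

def filter_post(text: str) -> bool:
    """Return True if the post would be removed."""
    return any(term in text.lower() for term in BLOCKLIST)

# Overblocking: lawful commentary is removed along with infringement.
filter_post("My review of the song Shake It Off")   # blocked -> True

# Underblocking: trivial evasion by a bad actor slips through.
filter_post("full album download: sh4ke it 0ff")    # missed -> False
```

Real filters use fingerprinting rather than substrings, but the structural problem is the same: a matcher cannot see context (criticism, parody, a police encounter caught on video), so it removes lawful speech while determined infringers route around it.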

But as bad as copyright filters are, the filters the DSA could require are far, far worse.

The Filternet, Made In Europe

Current proposals for the DSA, recently endorsed by an influential EU Parliament committee, would require online platforms to swiftly remove potentially illegal content. One proposal would automatically make any “active platform” potentially liable for the communications of its users. What’s an active platform? One that moderates, categorizes, promotes or otherwise processes its users’ content. Punishing services that moderate or classify illegal content is absurd - these are both responsible ways to approach illegal content.

These requirements give platforms the impossible task of identifying illegal content in realtime, at speeds no human moderator could manage  - with stiff penalties for guessing wrong. Inevitably, this means more automated filtering - something the platforms often boast about in public, even as their top engineers are privately sending memos to their bosses saying that these systems don’t work at all.

Large platforms will overblock, removing content according to the fast-paced, blunt determinations of an algorithm, while appeals for the wrongfully silenced will go through a review process that, like the algorithm, will be opaque and arbitrary. That review will also be slow: speech will be removed in an instant, but only reinstated after days, or weeks, or even 2.5 years.

But at least the largest platforms would be able to comply with the DSA. It’s far worse for small services, run by startups, co-operatives, nonprofits and other organizations that want to support, not exploit, their users. These businesses (“micro-enterprises” in EU jargon) will not be able to operate in Europe at all if they can’t raise the cash to pay for legal representatives and filtering tools.

Thus, the DSA sets up rules that allow a few American tech giants to control huge swaths of Europeans’ online speech, because they are the only ones with the means to do so. Within these American-run walled gardens, algorithms will monitor speech and delete it without warning, and without regard to whether the speakers are bullies engaged in harassment - or survivors of bullying describing how they were harassed.

It Didn’t Have to be This Way

EU institutions have a long and admirable history of attention to human rights principles. Regrettably, the EU legislators who’ve revised the DSA since its introduction have sidelined the human rights concerns raised by EU experts and embodied in EU law.

For example, the E-Commerce Directive, Europe’s foundational technology regulation, balances the need to remove unlawful content with the need to assess content to evaluate whether removal is warranted. Rather than establishing a short and unreasonable deadline for removal, the E-Commerce Directive requires web hosts to remove content “expeditiously” after they have determined that it is actually illegal (this is called the “actual knowledge” standard) and “in observance of the principle of freedom of expression.” 

That means that if you run a service and learn about an illegal activity because a user notifies you about it, you must take it down within a reasonable timeframe. This isn’t great - as we’ve written, it should be up to courts, not disgruntled users or platform operators, to decide what is and isn’t illegal. But as imperfect as it is, it’s far better than the proposals underway for the DSA.

Those proposals would magnify the defects within the E-Commerce Directive, following the catastrophic examples set by Germany’s NetzDG and France’s Online Hate Speech Bill (a law so badly constructed that it was swiftly invalidated by France’s Constitutional Council), and set deadlines for removal that preclude any meaningful scrutiny. One proposal requires action within 72 hours, and another would have platforms remove content within 24 hours, or even within 30 minutes for live-streamed content.

The E-Commerce Directive also sets out a prohibition on “general monitoring obligations” - that is, it prohibits Europe’s governments from ordering online services to spy on their users all the time. Short deadlines for content removals run afoul of this prohibition and cannot help but violate freedom of expression rights. 

This ban on spying is complemented by the EU’s landmark General Data Protection Regulation (GDPR) - a benchmark for global privacy regulations - which stringently regulates the circumstances under which a user can be subjected to “automated decision-making” - that is, it effectively bans putting a user’s participation in online life at the mercy of an algorithm.

Taken together, a ban on general monitoring and harmful and non-consensual automated decision-making is a way to safeguard European internet users’ human rights to live without constant surveillance and judgment.

Many proposals for DSA revisions shatter these two bedrock principles, calling for platforms to detect and restrict content that might be illegal or that has been previously identified as illegal, or that resembles known illegal content. This cannot be accomplished without subjecting everything that every user posts to scrutiny.

It Doesn’t Have to be This Way

The DSA can be salvaged. It can be made to respect human rights, and kept consistent with the E-Commerce Directive and the GDPR. Content removal regimes can be balanced with speech and privacy rights, with timeframes that permit careful assessment of the validity of takedown demands. The DSA can be balanced to emphasize the importance of appeals systems for content removal as co-equal with the process for removal itself, and platforms can be obliged to create and maintain robust and timely appeals systems.

The DSA can contain a prohibition on automated filtering obligations, respecting the GDPR and making a realistic assessment about the capabilities of “AI” systems based on independent experts, rather than the fanciful hype of companies promising algorithmic pie in the sky.

The DSA can recognize the importance of nurturing small platforms, not merely out of some fetish for “competition” as a cure-all for tech’s evils - but as a means by which users can exercise technological self-determination, banding together to operate or demand social online spaces that respect their norms, interests and dignity. This recognition would mean ensuring that any obligations the DSA imposes take account of the size and capabilities of each actor. This is in keeping with recommendations in the EU Commission’s DSA Impact Assessment - a recommendation that has been roundly ignored so far.

The EU and the Rest of the World

European regulation is often used as a benchmark for global rulemaking. The GDPR created momentum that culminated with privacy laws such as California’s CCPA, while NetzDG has inspired even worse regulation and proposals in Australia, the UK, and Canada.

The mistakes that EU lawmakers make in crafting the DSA will ripple out all over the world, affecting vulnerable populations who have not been given any consideration in drafting and revising the DSA (so far).

The problems presented by Big Tech are real, they’re urgent, and they’re global. The world can’t afford a calamitous EU technology regulation that sidelines human rights in a quest for easy answers and false quick fixes.

Cory Doctorow

A Universal Gigabit Future Depends on Open Access Fiber

2 months 3 weeks ago

The future is online. Actually, the present is online, and the future more so. The COVID-19 pandemic and the constant refrain about the “new normal” prove that not only is internet access vital to 21st century life; high-speed access is a necessity. It is no longer enough to just have internet access; one must have quality access. And that is going to depend on open access fiber.

Being a full participant in the world will eventually depend on access to gigabits of broadband capacity. That capacity will depend on fiber optics. Over the years, EFF has researched and advocated for policy changes at the local, state, and federal levels—all towards the goal of delivering universal fiber to everyone in the country. Part of that work has required us to look back at the mistakes made in the past, how they’ve led to the problems of today, and how to avoid making the same mistakes in the future. 

One of the biggest mistakes has been overly relying on large, publicly traded, for-profit companies to deliver universal access. For decades, policymakers have given billions in subsidies to the likes of AT&T, Comcast, and Verizon to build out their networks, with the goal that the existing companies serve everyone. These companies were gifted with countless regulatory favors designed for, and often by, the largest corporations. Their lobbyists were given front-row status in guiding policy decisions in Congress, state legislatures, and the Federal Communications Commission. In return for nearly two decades of favoritism, more than half of the country still lacks 21st century-ready broadband. Millions in the United States remain unserved.

A new study, funded by EFF, explains why that is and how we can reorient our public investments into broadband infrastructure able to connect all people to the gigabit future. Put simply, the biggest mistake in broadband policy has been in subsidizing broadband carriers, hoping they would build infrastructure, as opposed to focusing directly on future-proof infrastructure development. As a result, when we spend $45 billion—and counting—on supporting any service reaching a bare minimum metric of 25/3 Mbps (the federal definition of broadband), we fail to build long-term infrastructure, while squandering resources on dated copper, cable, and long-range wireless solutions. With another $45 billion potentially getting queued up by Congress, now is the time to rethink how we spend those new funds, and focus not on getting just any service to people, but on getting infrastructure to them that will sustain us for decades.

What is an open access network and why must it be fiber?

A true open access network (also known as a wholesale network) is an entity that does not sell broadband services, but rather offers the wires that enable anyone else to sell broadband services, along with other data applications. In other words, a truly open access network is one of infrastructure. And then, anyone else can build on that existing infrastructure to become a broadband service provider.

These types of entities have existed for years in the EU, thanks to a regulatory scheme that incentivizes them. They have been deploying fiber optic infrastructure across EU member states. Fiber is their choice because it is a future-proof transmission medium that will handle internet growth for decades without new investments. So, while fiber costs a lot at the outset, it will only become more valuable over time. As our study found, this results in what are known as patient capital investors—investors willing to wait the requisite number of years before an investment starts to pay back.
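A back-of-the-envelope payback calculation illustrates why fiber favors patient capital. All figures below are hypothetical placeholders chosen for illustration, not numbers from EFF's study:

```python
# Hypothetical figures for illustration only -- not from the EFF study.
def payback_years(upfront_cost: float, annual_revenue: float,
                  annual_opex: float) -> float:
    """Years until cumulative net revenue covers the up-front build cost."""
    net_per_year = annual_revenue - annual_opex
    if net_per_year <= 0:
        return float("inf")  # the build never pays back
    return upfront_cost / net_per_year

# Suppose it costs $2,000 to connect a home, which then yields $240/year
# in wholesale lease revenue against $40/year in upkeep:
years = payback_years(2000, 240, 40)  # -> 10.0 years
```

A ten-year horizon is unattractive to a shareholder-driven carrier chasing quarterly returns, but acceptable to a patient infrastructure investor whose asset keeps earning for decades after payback.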

No one doubts that everyone will need greater amounts of data capacity as the internet continues to evolve and grow. A pure infrastructure provider, like an open access network, will not be looking to reap its rewards from broadband customers, but from selling access to a multiplicity of providers, giving it an incentive to build the fastest, most future-proof infrastructure possible. Furthermore, it has an incentive to lease space on its network to many different broadband providers, in order to recoup its costs. That will create competition in an area where it is sorely lacking. Many ISPs will now have the ability to offer plans in new markets, and consumers will be able to pick the one that they like best.

The future will create more services that require high-speed broadband. To build a network that can provide for those needs and make money, an infrastructure provider will need to be able to offer more and more without having to constantly upgrade what it has built. Right now, only fiber optic wires can do that. Fiber cannot be replicated or replaced by other data transmission mediums, such as cable, wireless, and satellite. Fiber optic wires hold terabits of spectrum capacity, and we haven’t even invented the hardware that can make full use of that capacity yet.

In the long term, open access fiber networks are more efficient and able to reach more people than government subsidies.

The United States has ended up with slow, expensive, and non-universal internet access by relying on the wrong entities to build the country’s communications infrastructure. Large, private, vertically integrated (owning content production, telephone services, alarm systems, wireless services, and streaming) ISPs are burdened with attempting to achieve multiple goals in the chase for profits. Their mergers and acquisitions strategy has resulted in big telecoms accruing the most debt in the world, forcing them to seek ways to minimize investments. Every minute and dollar spent on the non-broadband services is a dollar and a minute not spent on upgrading and building infrastructure. In fact, it’s even worse: where a broadband provider is a monopoly, they have a guaranteed income to spend on the non-broadband services and no incentive to improve the broadband side. This multi-headed hydra approach negatively impacts their ability to focus on the core element of broadband access: rolling out next-generation fiber optics.

These giant legacy companies often tell policymakers and the media that financing fiber infrastructure is “too expensive” in order to avoid the actual truth, which is that it’s only too expensive for them. Once an entity’s only concern is to lay data transmission lines, the equation changes dramatically, according to our cost model analysis.

The truth is, with the appropriate amount of patience and a long-term investment strategy, universal fiber access is feasible. Provisioning fiber infrastructure on a wholesale model, which means selling capacity to all comers but not selling broadband service, makes it feasible to build out to large swaths of the United States where symmetrical gigabit and beyond demand exists or will exist in the future. When factoring in all the burdens (and risks) a vertically integrated ISP carries with it today compared to the simplified approach of an infrastructure-only deployment, our cost model shows that an infrastructure-only entity can deliver fiber optic wires to nearly 80% of the population for a profit while vertically integrated ISPs can reach at best only half.

In other words, if our broadband policy and subsidy dollars revolve around the premise that the AT&Ts and Comcasts of the world are the best or only solution to the problem of broadband, we’re doing it wrong. And it’s costing taxpayers a fortune. If our goal is to get fiber-optic connectivity to everyone, we need to change course and focus solely on entities delivering infrastructure. This will require a change in regulatory and public investment goals.

The Federal Communications Commission has to establish infrastructure policy, and states should prioritize building open access fiber networks.

Our model shows that an emphasis on open access infrastructure will yield tremendous savings to taxpayers by reducing subsidies, and will expand fiber access to tens of millions more Americans stuck in cable monopoly markets. But our study shows that this will not happen on its own.

The Federal Communications Commission (FCC) needs to proactively adopt competition regulations that reduce risk to infrastructure providers in order to take full advantage of their efficiencies. Some examples of what the FCC could do include: identifying where accessible fiber is present through broadband mapping, ensuring that open access providers are given the same rights as AT&T and Comcast, and adopting rules that prevent predatory pricing by cable companies, who will want to prevent fiber deployment to preserve their monopolies.

Much of the focus of broadband mapping has been on identifying speed metrics, but not on the long-term viability of existing infrastructure. As a result, a speed-capped satellite connection is treated the same as a fiber wire with multi-gigabit potential. The FCC, particularly if Congress invests billions in broadband access, needs to help identify where future-proof capacity is lacking in order to better inform would-be investors of fiber opportunities.

However, given that open access providers are not traditional telecoms, they need to be given the rights-of-way and pole-attachment rights provided to Title II common carriers. Otherwise, they will run into the same problems Google Fiber did when AT&T withheld access to its poles in Texas. Lastly, many of the attractive long-term investment markets are going to be cable monopoly markets. However, if cable companies are allowed to engage in predatory pricing of broadband access to head off future competition, it will effectively undermine long-term fiber investment models. In other words, the FCC must adopt rules that prohibit cable companies from cross-subsidizing their monopoly markets with future competitive markets and require equivalency in pricing. If they are offering one market lower-cost high-speed broadband, they must offer that to all markets across their territory. But all of these suggested policies are contingent on the FCC restoring its authority over broadband carriers and reversing the deregulation that occurred with the Restoring Internet Freedom Order.

Ernesto Falcon

Open Access Fiber Networks Will Bring Much-Needed High-Speed Internet Service and Competition to Communities More Efficiently and Economically: Report

2 months 3 weeks ago
Wholesale Networks Will Build Future-Proof Communications Networks

San Francisco—Public investments in open access fiber networks, instead of more subsidies for broadband carriers, will bring high-speed internet on a more cost-efficient basis to millions of Americans and create an infrastructure that can handle internet growth for decades, according to a new report.

Commissioned by EFF, “Wholesale Fiber is the Key to Broad US Fiber to the Premises (FTTP) Coverage” shows how wholesale, open access networks that lease capacity to service providers who market to consumers are the most cost-effective and efficient way to end the digital divide that has left millions of people, particularly those in rural and low-income areas, with inadequate or no internet service. These inequities were laid bare by the pandemic, when millions of workers and schoolchildren needed high-speed internet.

Billions of dollars funneled to AT&T, Comcast, and others to provide minimum speeds have left more than half of America without 21st century-ready broadband access to date. Investing in wholesale fiber networks will promote competition, lower prices for consumers stuck with cable monopolies, and efficiently replace legacy infrastructure.

A wholesale network model could cover close to 80 percent of the U.S. with fiber to the premises before government subsidies would even be necessary, whereas the existing broadband carrier model is expected to profitably cover only 50 percent, according to the report by Diffraction Analysis, an independent, global broadband consulting and research firm.

“We can’t afford to repeat the mistakes of the past,” said EFF Senior Legislative Counsel Ernesto Falcon. “The federal government and states like California are gearing up to potentially invest billions of dollars on broadband. This report includes economic models showing that funding wholesale broadband operations is a better long-term investment strategy. It will provide more coverage than throwing money at large publicly traded for-profit companies that are making a killing on the current model and have no incentive to change and deploy fiber.”

For the report:

For more on community broadband:

Contact: Ernesto Falcon, Senior Legislative Counsel
Karen Gullo

Resisting the Menace of Face Recognition

2 months 3 weeks ago

Face recognition technology is a special menace to privacy, racial justice, free expression, and information security. Our faces are unique identifiers, and most of us expose them everywhere we go. And unlike our passwords and identification numbers, we can’t get a new face. So, governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations.

Fortunately, people around the world are fighting back. A growing number of communities have banned government use of face recognition. As to business use, many communities are looking to a watershed Illinois statute, which requires businesses to get opt-in consent before extracting a person’s faceprint. EFF is proud to support laws like these.

Face Recognition Harms

Let’s begin with the ways that face recognition harms us. Then we’ll turn to solutions.


Face recognition violates our human right to privacy. Surveillance camera networks have flooded our public spaces. Face recognition technologies are more powerful by the day. Taken together, these systems can quickly, cheaply, and easily ascertain where we’ve been, who we’ve been with, and what we’ve been doing. All based on a unique marker that we cannot change or hide: our own faces.

In the words of a federal appeals court ruling in 2019, in a case brought against Facebook for taking faceprints from its users without their consent:

Once a face template of an individual is created, Facebook can use it to identify that individual in any of the other hundreds of millions of photos uploaded to Facebook each day, as well as determine when the individual was present at a specific location. Facebook can also identify the individual’s Facebook friends or acquaintances who are present in the photo. … [I]t seems likely that a face-mapped individual could be identified from a surveillance photo taken on the streets or in an office building.

Government use of face recognition also raises Fourth Amendment concerns. In recent years, the U.S. Supreme Court has repeatedly placed limits on invasive government uses of cutting-edge surveillance technologies. This includes police use of GPS devices and cell site location information to track our movements. Face surveillance can likewise track our movements.

Racial Justice

Face recognition also has an unfair disparate impact against people of color.

Its use has led to the wrongful arrests of at least three Black men. Their names are Michael Oliver, Nijeer Parks, and Robert Williams. Every arrest of a Black person carries the risk of excessive or even deadly police force. So, face recognition is a threat to Black lives. This technology also caused a public skating rink to erroneously expel a Black patron. Her name is Lamya Robinson. So, face recognition is also a threat to equal opportunity in places of public accommodation.

These cases of “mistaken identity” are not anomalies. Many studies have shown that face recognition technology is more likely to misidentify people of color than white people. A leader in this research is Joy Buolamwini.

Even if face recognition technology were always accurate, or at least equally inaccurate across racial groups, it would still have an unfair racially disparate impact. Surveillance cameras are over-deployed in minority neighborhoods, so people of color will be more likely than others to be subjected to faceprinting. Also, history shows that police often aim surveillance technologies at racial justice advocates.

Face recognition is just the latest chapter of what Alvaro Bedoya calls “the color of surveillance.” This technology harkens back to “lantern laws,” which required people of color to carry candle lanterns while walking the streets after dark, so police could better see their faces and monitor their movements.

Free Expression

In addition, face recognition chills and deters our freedom of expression.

The First Amendment protects the right to confidentiality when we engage in many kinds of expressive activity. These include anonymous speech, private conversations, confidential receipt of unpopular ideas, gathering news from undisclosed sources, and confidential membership in expressive associations. All of these expressive activities depend on freedom from surveillance because many participants fear retaliation from police, employers, and neighbors. Research confirms that surveillance deters speech.

Yet, in the past two years, law enforcement agencies across the country have used face recognition to identify protesters for Black lives. These include the U.S. Park Police, the U.S. Postal Inspection Service, and local police in Boca Raton, Broward County, Fort Lauderdale, Miami, New York City, and Pittsburgh. This shows, again, the color of surveillance.

Police might also use face recognition to identify the whistleblower who walked into a newspaper office, or the reader who walked into a dissident bookstore, or the employee who walked into a union headquarters, or the distributor of an anonymous leaflet. The proliferation of face surveillance can deter all of these First Amendment-protected activities.

Information Security

Finally, face recognition threatens our information security.

Data thieves regularly steal vast troves of personal data. These include faceprints. For example, the faceprints of 184,000 travelers were stolen from a vendor of U.S. Customs and Border Protection.

Criminals and foreign governments can use stolen faceprints to break into secured accounts that the owner’s face can unlock. Indeed, a team of security researchers did this with 3D models based on Facebook photos.

Face Recognition Types

To sum up: face recognition is a threat to privacy, racial justice, free expression, and information security. However, before moving on to solutions, let’s pause to describe the various types of face recognition.

Two are most familiar. “Face identification” compares the faceprint of an unknown person to a set of faceprints of known people. For example, police may attempt to identify an unknown suspect by comparing their faceprint to those in a mugshot database.

“Face verification” compares the faceprint of a person seeking access to the faceprints of people authorized for such access. This can be a minimally concerning use of the technology. For example, many people use face verification to unlock their phones.

There’s much more to face recognition. For example, face clustering, tracking, and analysis do not necessarily involve face identification or verification.

“Face clustering” compares all faceprints in a collection of images to one another, to group the images containing a particular person. For example, police might create a multi-photo array of an unidentified protester, then manually identify them with a mugshot book.

“Face tracking” follows the movements of a particular person through a physical space covered by surveillance cameras. For example, police might follow an unidentified protester from a rally to their home or car, then identify them with an address or license plate database.

“Face analysis” purports to learn something about a person, like their race or emotional state, by scrutinizing their face. Such analysis will often be wrong, as the meaning of a facial characteristic is often a social construct. For example, it will misgender people who are transgender or nonbinary. And when it “works,” it may be used for racial profiling: a Chinese company claims its technology works as a “Uighur alarm.” Finally, automated screening to determine whether a person is supposedly angry or deceptive can cause police to escalate their use of force, or expand the duration and scope of a detention.
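To make the differences among these modes concrete, here is a minimal, hypothetical sketch in Python. The faceprints, the distance metric, and the threshold are all illustrative assumptions: real systems use high-dimensional learned embeddings and tuned thresholds, not two-number vectors.

```python
import math

# A "faceprint" here is just a short list of numbers standing in for the
# high-dimensional embedding a real face recognition system would produce.
def distance(a, b):
    # Euclidean distance between two faceprints.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 0.5  # illustrative cutoff for "probably the same person"

def identify(unknown, gallery):
    """Face identification: compare one unknown print to many known ones."""
    name, best = min(gallery.items(), key=lambda kv: distance(unknown, kv[1]))
    return name if distance(unknown, best) < THRESHOLD else None

def verify(candidate, enrolled):
    """Face verification: a one-to-one check against an enrolled print."""
    return distance(candidate, enrolled) < THRESHOLD

def cluster(prints):
    """Face clustering: group similar prints together without ever
    learning whose face any of them is."""
    groups = []
    for p in prints:
        for g in groups:
            if distance(p, g[0]) < THRESHOLD:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

gallery = {"alice": [0.1, 0.2], "bob": [0.9, 0.8]}
print(identify([0.12, 0.18], gallery))                      # matches alice's print
print(verify([0.88, 0.79], gallery["bob"]))                 # one-to-one check passes
print(len(cluster([[0.1, 0.2], [0.11, 0.21], [0.9, 0.8]]))) # two distinct groups
```

Note that clustering needs no gallery of known people at all, which is why it can precede, and enable, later identification.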

Legislators must address all forms of face recognition: not just identification and verification, but also clustering, tracking, and analysis.

Government Use of Face Recognition

EFF supports a ban on government use of face recognition. The technology is so destructive that government must not use it at all.

EFF has supported successful advocacy campaigns across the country. Many local communities have banned government use of face recognition, from Boston to San Francisco. The State of California placed a three-year moratorium on police use of face recognition with body cameras. Some businesses have stopped selling face recognition to police.

We also support a bill to end federal use of face recognition. If you want to help stop government use of face recognition in your community, check out EFF’s “About Face” toolkit.

Corporate Use of Face Recognition

The Problem

Corporate use of face recognition also harms privacy, racial justice, free expression, and information security.

Part of the problem is at brick-and-mortar stores. Some use face identification to detect potential shoplifters. This often relies on error-prone, racially biased criminal justice data. Other stores use it to identify banned patrons. But this can misidentify innocent patrons, especially if they are people of color, as happened to Lamya Robinson at a roller rink. Still other stores use face identification, tracking, and analysis to serve customers targeted ads or track their behavior over time. This is part of the larger problem of surveillance-based advertising, which harms all of our privacy.

There are many other kinds of threatening corporate uses of face recognition. For example, some companies use it to scrutinize their employees. This is just one of many high-tech ways that bosses spy on workers. Other companies, like Clearview AI, use face recognition to help police identify people of interest, including BLM protesters. Such corporate-government surveillance partnerships are a growing threat.

The Solution

Of all the laws now on the books, one has done the most to protect us from corporate use of face recognition: the Illinois Biometric Information Privacy Act, or BIPA.

At its core, BIPA does three things:

  1. It bans businesses from collecting or disclosing a person’s faceprint without their opt-in consent.
  2. It requires businesses to delete the faceprints after a fixed time.
  3. If a business violates a person’s BIPA rights by unlawfully collecting, disclosing, or retaining their faceprint, that person has a “private right of action” to sue that business.

EFF has long worked to enact more BIPA-type laws, including in Congress and the states. We regularly advocate in Illinois to protect BIPA from legislative backsliding. We have also filed amicus briefs in a federal appellate court and the Illinois Supreme Court to ensure that everyone who has suffered a violation of their BIPA rights can have their day in court.

BIPA prevents one of the worst corporate uses of face recognition: dragnet faceprinting of the public at large. Some companies do this to all people entering a store, or all people appearing in photos on social media. This practice violates BIPA because some of these people have not previously consented to faceprinting.

People have filed many BIPA lawsuits against companies that took their faceprints without their consent. Facebook settled one case, arising from their “tag suggestions” feature, for $650 million.

First Amendment Challenges

Other BIPA lawsuits have been filed against Clearview AI. This is the company that extracted faceprints from ten billion photographs, and uses these faceprints to help police identify suspects. The company does not seek consent for its faceprinting. So Clearview now faces a BIPA lawsuit in Illinois state court, brought by the ACLU, and several similar suits in federal court.

In both venues, Clearview asserts a First Amendment defense. EFF disagrees and filed amicus briefs saying so. Our reasoning proceeds in three steps.

First, Clearview’s faceprinting enjoys at least some First Amendment protection. It collects information about a face’s measurements, and creates information in the form of a unique mathematical representation. The First Amendment protects the collection and creation of information because these often are necessary predicates to free expression. For example, the U.S. Supreme Court has ruled that the First Amendment protects reading books, gathering news, creating video games, and even purchasing ink by the barrel. Likewise, appellate courts protect the right to record on-duty police.

First Amendment protection of faceprinting is not diminished by its use of computer code, because code is speech. To paraphrase one court: just as musicians can communicate among themselves with a musical score, computer programmers can communicate among themselves with computer code.

Second, Clearview’s faceprinting does not enjoy the strongest forms of First Amendment protection, such as “strict scrutiny.” Rather, it enjoys just “intermediate scrutiny.” This is because it does not address a matter of public concern. The Supreme Court has emphasized this factor in many contexts, including wiretapping, defamation, and emotional distress. Likewise, lower courts have held that common law claims of information privacy—namely, intrusion on seclusion and publication of private facts—do not violate the First Amendment if the information at issue was not a matter of public concern.

Intermediate review also applies to Clearview’s faceprinting because its interests are solely economic. The Supreme Court has long held that “commercial speech,” meaning “expression related solely to the economic interests of the speaker and its audience,” receives “lesser protection.” Thus, when laws that protect consumer data privacy face First Amendment challenge, lower courts apply intermediate judicial review under the commercial speech doctrine.

To pass this test, a law must advance a “substantial interest,” and there must be a “close fit” between this interest and what the law requires.

Third, the application of BIPA to Clearview’s faceprinting passes this intermediate test. As discussed earlier, the State of Illinois has strong interests in preventing the harms caused by faceprinting to privacy, racial justice, free expression, and information security. Also, there is a close fit from these interests to the safeguard that Illinois requires: opt-in consent to collect a faceprint. In the words of the Supreme Court, data privacy requires “the individual’s control of information concerning [their] person.”

Some business groups have contested the close fit between BIPA’s means and ends by suggesting Illinois could achieve its goals, with less burden on business, by requiring just an opportunity for people to opt out. But defaults matter. Opt-out is not an adequate substitute for opt-in. Many people won’t know a business collected their faceprint, let alone know how to opt out. Other people will be deterred by the confusing and time-consuming opt-out process. This problem is worse than it needs to be because many companies deploy “dark patterns,” meaning user experience designs that manipulate users into giving their so-called “agreement” to data processing.

Thus, numerous federal appellate and trial courts have upheld consumer data privacy laws that are similar to BIPA against First Amendment challenge. Just this past August, an Illinois judge rejected Clearview’s First Amendment defense.

Next Steps

In the hands of government and business alike, face recognition technology is a growing menace to our digital rights. But the future is unwritten. EFF is proud of its contributions to the movement to resist abuse of these technologies. Please join us in demanding a ban on government use of face recognition, and laws like Illinois’ BIPA to limit private use. Together, we can end this threat.

Adam Schwartz

Honoring Elliot Harmon—EFF Activism Director, Poet, Friend—1981-2021

2 months 3 weeks ago

It is with heavy hearts that we mourn and celebrate our friend and colleague Elliot Harmon, who passed away peacefully on Saturday morning following a lengthy battle with melanoma. We will deeply miss Elliot’s clever mind, powerful pen, generous heart, and expansive kindness. We will carry his memory with us in our work. 

Elliot understood how intellectual property could be misused to shut down curiosity, silence artists, and inhibit research—and how open access policies, open licensing, and a more nuanced and balanced interpretation of copyright could reverse those trends. A committed copyleft activist, he led campaigns against patent trolls and fought for open access to research. He campaigned globally for freedom of expression and access to knowledge, and his powerful articles helped define many of these issues for a global community of digital rights activists.

This photo was taken shortly before Elliot went to speak on top of a truck at a Stop SESTA/FOSTA rally in Oakland.

Elliot’s formidable activism touched upon every aspect of EFF’s work. In his early days with us, he continued the work that he began at Creative Commons campaigning for the late Palestinian-Syrian activist, technologist, and internet volunteer Bassel Khartabil. He also ran a successful campaign for Colombian student Diego Gomez, fighting against that country’s steep copyright infringement laws and advocating for open access and academic freedom. Following the same values, Elliot spearheaded EFF’s Reclaim Invention campaign urging universities to protect their inventions from patent trolls. He went on to help steer our campaign to get the FCC to restore net neutrality rules, framing the issue as a matter of free speech and calling on “Team Internet” to join him in the fight. In all of these efforts and more, Elliot brought a natural sense of how to build and nurture community around a shared cause. 

Elliot was also a leading advocate for free expression online, and helped educate the public on how laws policing online speech or ratcheting up the liability of online platforms could have serious consequences for marginalized communities. In 2018, when SESTA-FOSTA came to the legislative table and it looked as though many organizations feared standing up for sex workers, Elliot made sure we weren’t one of them, and directed his and EFF’s energy to fiercely advocating for their rights online. Elliot’s op-ed in the New York Times still stands as a crucial and powerful explanation of how Section 230 enables millions of the voiceless to have a voice. As he wrote: “History shows that when platforms clamp down on their users’ speech, the people most excluded are the ones most excluded from other aspects of public life, too.”

Elliot spoke to the press frequently about EFF's issues and campaigns. In this early 2020 photo, he was preparing to speak about protecting the .ORG domain.

More recently, Elliot coordinated a global effort to prevent a private equity firm from purchasing the .ORG domain, rallying the troops for what was undoubtedly one of the most dramatic shows of non-profit sector solidarity of all time, to use his own words. His sense of humor and humility are on full display in this Deeplinks post about the campaign.

But Elliot’s deepest digital rights commitment may have been his belief in open access to knowledge and culture—and he knew how to write about that belief as an invitation, not a command. To give just one of many examples, this post helped draw attention to the removal of a tool used by journalists and activists to save eyewitness videos. 

In an organization filled with tireless advocates, Elliot’s thoughtfulness, quick wit, and wide-ranging interests—along with his loud and buoyant laugh, sparked easily by the team members he worked alongside for three years then led for nearly three more—set him apart. We knew a meeting or planning session was going well when we could hear Elliot’s laughter from across the office. And an edit by Elliot on a blog post or call to action was sure to make it smarter, sharper, and more persuasive.  

Elliot with members of the activism team in 2018. His colleague Katharine shared: 'I do not remember what Elliot said to provoke this reaction, but it’s how I will remember him.'

Elliot joined EFF in 2015 from Creative Commons, where he had served in the role of Director of Communications—a role in which many EFF staffers first encountered him. It was not, however, his first introduction to EFF; he planted an easter egg in his cover letter applying for a role on the activism team encouraging us to search for the old Geocities website he launched as a teenager, where one would come across EFF’s Blue Ribbon Campaign sticker. He was a lifelong supporter and a true believer in equal digital rights for all, and he always took great care to look for the underdog.

In 2018, Elliot took over the role of Activism Director from Rainey Reitman and built up a powerful team with the delicate strength required of one stepping into the shoes of a long-serving team leader. He excelled in the position, bringing joy, structure, and quirky “funtivities” to his team during a particularly difficult time for the world and the internet. His careful leadership style, constant awareness of his team members’ needs, and conviction that our work can change the world will continue to serve as an inspiration.

Elliot also served as a member of EFF’s senior leadership team. He was a powerful and thoughtful voice in helping us figure out how to remain scrappy and smart even as we put into place management and other structures appropriate for our now-larger organization.   

Of course, Elliot was not just a digital rights activist; he was a husband, a friend, a pro wrestling fan, an accomplished poet and performer, a Master of Fine Arts in Writing, a mentor to many, and a skilled and caring manager. 

We will miss Elliot’s incredible talent and leadership, but more than that, we will miss his sincerity, his hearty laugh, and his extraordinary sense of fairness and kindness. And we will continue the fight and honor his dreams of a free and open internet.

Elliot in front of the Electronic Frontier Foundation offices, wearing an EFF 25th anniversary member shirt. This photo was taken during his first week working at EFF.

Jillian C. York

What About International Digital Competition?

2 months 3 weeks ago

EFF Legislative Intern Suzi Ragheb wrote this blog post

Antitrust has not had its moment since the 1911 breakup of Standard Oil. But this past year, policymakers and government leaders around the globe have been taking a hard look at the technology markets. ‘Break up Big Tech’ is the newest antitrust catchphrase. On both sides of the Atlantic, policies have been introduced to foster digital competition.

Congress has introduced several competition and antitrust bills, including a bipartisan package that passed out of committee. The Biden administration has nominated antitrust advocates to key positions: Lina Khan as chair of the Federal Trade Commission, Jonathan Kanter as the Assistant Attorney General for Antitrust at the Department of Justice, and Tim Wu at the National Economic Council. And across the Atlantic, the European Commission is marking up two key pieces of legislation, the Digital Markets Act and the Digital Services Act, that would create new rules for digital services and enhanced competition in the technology sector.

Early this summer, on his first international trip as president, Biden headed to Brussels to talk about creating a new U.S.-EU Trade and Technology Council (TTC) and a Joint Technology Competition Policy Dialogue (JTCPD). There have been few details aside from the initial press releases on what policy approaches would be considered. However, it is a clear sign that there is a transatlantic appetite for tackling competition in the technology space. But what would an international competition policy look like?

International Interoperability and Data Portability Standards

At EFF, we have long advocated for interoperability and data portability as the answers to outsized market power. We believe that creating open standards and allowing users to move their data around to different platforms shifts the market power away from companies and into the hands of consumers. Pursuing this at an international level would be a seismic power shift and would boost innovation and competition.

Having open, interoperable standards between international platforms would allow users to easily transfer their information to the platform that best suits their needs. It would mean that platforms compete not on the size of their networks but on the quality of their services. When platforms take advantage of network effects, it’s not a competition to offer the best features; it’s a competition over who can collect the most personal data. The JTCPD would be remiss if it did not address platform and service interoperability, not just ancillary services, as a key part of digital competition.

In an interoperable data world, if you don’t like Facebook’s functions, you would be able to take your data to another platform, one with better services, and you would be able to connect with individuals across platforms.

Given the global nature of the internet, creating international standards would be less burdensome for tech companies, as they wouldn’t have to navigate a patchwork of differing standards. And despite pushback from the platforms, this is not an impossible feat. In fact, interoperability is a cornerstone of the internet. Consider that after Facebook purchased Instagram, the company added chat interoperability between the two platforms, and it plans to make WhatsApp interoperable with both platforms. If we had interoperability standards before the companies merged, the market would have looked and acted differently.
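As a toy illustration of what data portability could look like in practice, consider two hypothetical platforms that agree on a shared export format. The schema identifier and field names below are invented for this example; any real portability standard, and any actual platform’s API, would look different.

```python
import json

def export_profile(user):
    """Platform A exports a user's data in a shared, documented format."""
    return json.dumps({
        "schema": "example.org/profile/v1",  # made-up schema identifier
        "name": user["display_name"],
        "contacts": sorted(user["friends"]),
        "posts": [{"text": p} for p in user["posts"]],
    })

def import_profile(blob):
    """Platform B reads the same format and rebuilds its own records,
    without any private arrangement with Platform A."""
    data = json.loads(blob)
    if data["schema"] != "example.org/profile/v1":
        raise ValueError("unsupported export format")
    return {
        "display_name": data["name"],
        "friends": set(data["contacts"]),
        "posts": [p["text"] for p in data["posts"]],
    }

alice = {"display_name": "Alice", "friends": {"bob", "carol"}, "posts": ["hi"]}
moved = import_profile(export_profile(alice))
print(moved["display_name"])  # the data survives the move intact
```

The point of the sketch is that once the format is public, switching costs collapse: the user’s social graph and content move with them, so platforms must compete on service quality rather than lock-in.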

International Antitrust Is Incomplete Without Privacy

Privacy is a fundamental human right recognized by the UN and it must be a part of any international agreement on digital competition. Users today feel hopeless when it comes to their right to online privacy. While interoperability could address privacy concerns by allowing users to self-determine their platform of choice as well as give privacy-conscious platforms the ability to compete on a level playing field with big platforms, there is still a need to establish international privacy standards. Setting a minimum privacy standard pushes companies away from the personal-data-for-profit model on which today’s tech monopolies have been built.

In the EU, the GDPR, adopted in 2016, established data privacy as a fundamental right with high data protection standards across the bloc. The U.S. significantly lags in developing federal privacy standards, despite bipartisan support. Privacy is also a national security concern: weak privacy protections endanger the welfare of a nation’s citizens. A recent report from the Cyberspace Solarium Commission calls on Congress to create national privacy standards as baseline protection against cyberattacks. Setting international privacy standards would also greatly benefit tech companies. It reduces compliance costs and confusion. And it gives a fair competitive chance to all tech companies, regardless of size.

The Promise of a Truly Competitive Digital Economy Lies in an International Agreement

Without such an agreement, we create a fractured world for a global internet, rampant with confusion and unequal protection under the law. Under an international agreement, interoperable and portable data standards would be adopted by the industry, leveling the field for both old and new firms. Interoperability would expand opportunities for start-ups to build new tech that works with existing dominant systems. International privacy standards and data minimization would enshrine privacy as a human right and push the digital market away from the model that relies on personal data exploitation. An international agreement would set up consumers for broader data protections and companies for expanded market access. And a U.S.-EU agreement on tech competition would set the tone for the rest of the globe.

Ernesto Falcon

John Gilmore Leaves the EFF Board, Becomes Board Member Emeritus

2 months 4 weeks ago

Since he helped found EFF 31 years ago, John Gilmore has provided leadership and guidance on many of the most important digital rights issues we advocate for today. But in recent years, we have not seen eye-to-eye on how to best communicate and work together, and we have been unable to agree on a way forward with Gilmore in a governance role. That is why the EFF Board of Directors has recently made the difficult decision to vote to remove Gilmore from the Board.

We are deeply grateful for the many years Gilmore gave to EFF as a leader and advocate, and the Board has elected him to the role of Board Member Emeritus moving forward. "I am so proud of the impact that EFF has had in retaining and expanding individual rights and freedoms as the world has adapted to major technological changes,” Gilmore said. “My departure will leave a strong board and an even stronger staff who care deeply about these issues."

John Gilmore co-founded EFF in 1990 alongside John Perry Barlow, Steve Wozniak and Mitch Kapor, and provided significant financial support critical to the organization's survival and growth over many years. Since then, Gilmore has worked closely with EFF’s staff, board, and lawyers on privacy, free speech, security, encryption, and more.

In the 1990s, Gilmore found the government documents that confirmed the First Amendment problem with the government’s export controls over encryption, and helped initiate the filing of Bernstein v. DOJ, which resulted in a court ruling that software source code is speech protected by the First Amendment and that the government’s regulations preventing its publication were unconstitutional. The decision made it legal in 1999 for web browsers, websites, and software like PGP and Signal to use the encryption of their choice.

Gilmore also led EFF’s effort to design and build the DES Cracker, which was regarded as a fundamental breakthrough in how we evaluate computer security and the public policies that control its use. At the time, the 1970s-era Data Encryption Standard (DES) was embedded in ATMs and banking networks, as well as in popular software around the world. U.S. government officials proclaimed that DES was secure, while secretly retaining the ability to break it themselves. The EFF DES Cracker publicly showed that DES was in fact so weak that it could be broken in one week with an investment of less than $350,000. This catalyzed the international creation and adoption of the much stronger Advanced Encryption Standard (AES), now widely used to secure information worldwide.

Among Gilmore’s most important contributions to EFF and to the movement for digital rights has been recruiting key people to the organization, such as former Executive Director Shari Steele, current Executive Director Cindy Cohn, and Senior Staff Attorney and Adams Chair for Internet Rights Lee Tien.

EFF has always valued and appreciated Gilmore’s opinions, even when we disagree. It is no overstatement to say that EFF would not exist without him. We look forward to continuing to benefit from his institutional knowledge and guidance in his new role of Board Member Emeritus.

Cindy Cohn

Police Can’t Demand You Reveal Your Phone Passcode and Then Tell a Jury You Refused

3 months ago

The Utah Supreme Court is the latest stop in EFF’s roving campaign to establish your Fifth Amendment right to refuse to provide your password to law enforcement. Yesterday, along with the ACLU, we filed an amicus brief in State v. Valdez, arguing that the constitutional privilege against self-incrimination prevents the police from forcing suspects to reveal the contents of their minds. That includes revealing a memorized passcode or directly entering the passcode to unlock a device.

In Valdez, the defendant was charged with kidnapping his ex-girlfriend after arranging a meeting under false pretenses. During his arrest, police found a cell phone in Valdez’s pocket that they wanted to search for evidence that he set up the meeting, but Valdez refused to tell them the passcode. Unlike many other cases raising these issues, however, the police didn’t bother seeking a court order to compel Valdez to reveal his passcode. Instead, during trial, the prosecution offered testimony and argument about his refusal. The defense argued that this violated the defendant’s Fifth Amendment right to remain silent, which also prevents the state from commenting on his silence. The court of appeals agreed, and now the state has appealed to the Utah Supreme Court.

As we write in the brief: 

The State cannot compel a suspect to recall and share information that exists only in his mind. The realities of the digital age only magnify the concerns that animate the Fifth Amendment’s protections. In accordance with these principles, the Court of Appeals held that communicating a memorized passcode is testimonial, and thus the State’s use at trial of Mr. Valdez’s refusal to do so violated his privilege against self-incrimination. Despite the modern technological context, this case turns on one of the most fundamental protections in our constitutional system: an accused person’s ability to exercise his Fifth Amendment rights without having his silence used against him. The Court of Appeals’ decision below rightly rejected the State’s circumvention of this protection. This Court should uphold that decision and extend that protection to all Utahns.

Protecting these fundamental rights is only more important as we also fight to keep automated surveillance that would compromise our security and privacy off our devices. We’ll await a decision on this important issue from the Utah Supreme Court.

Related Cases: Andrews v. New Jersey
Andrew Crocker

Victory! Oakland’s City Council Unanimously Approves Communications Choice Ordinance

3 months ago

Oakland residents shared the stories of their personal experience; a broad coalition of advocates, civil society organizations, and local internet service providers (ISPs) lifted their voices; and now the Oakland City Council has unanimously passed Oakland’s Communications Service Provider Choice Ordinance. The newly minted law frees Oakland renters from being constrained to their landlord's preferred ISP by prohibiting owners of multiple occupancy buildings from interfering with an occupant's ability to receive service from the communications provider of their choice.

Across the country—through elaborate kickback schemes—large, corporate ISPs looking to lock out competition have manipulated landlords into denying their tenants the right to choose the internet provider that best meets their family’s needs and values. In August of 2018, an Oakland-based EFF supporter emailed us asking what would need to be done to empower residents with the choice they were being denied. Finally, after three years of community engagement and coalition building, that question has been answered.  

Modeled on a San Francisco law adopted in 2016, Oakland’s new Communications Choice ordinance requires property owners of multiple occupancy buildings to provide reasonable access to any qualified communication provider that has received a service request from a building occupant. San Francisco’s law has already proven effective. There, one competitive local ISP, which had previously been locked out of properties of forty or more units with active revenue sharing agreements, gained access to more than 1,800 new units by 2020. Even for those who choose to stay with their existing provider, a competitive communications market benefits all residents by incentivizing providers to offer the best services at the lowest prices. As Tracy Rosenberg, the Executive Director of coalition member Media Alliance—and a leader in the advocacy effort—notes, “residents can use the most affordable and reliable services available, alternative ISP's can get footholds in new areas and maximize competitive benefits, and consumers can vote with their pockets for platform neutrality, privacy protections, and political contributions that align with their values.”

Unfortunately, not every city is as prepared to take advantage of such measures as San Francisco and Oakland. The Bay Area has one of the most competitive ISP markets in the United States, including smaller ISPs committed to defending net neutrality and their users’ privacy. In many U.S. cities, that’s not the case.

We hope to see cities and towns across the country step up to protect competition and foster new competitive options by investing in citywide fiber-optic networks and opening that infrastructure to private ISPs.

Nathan Sheard

Why Is It So Hard to Figure Out What to Do When You Lose Your Account?

3 months ago

We get a lot of requests for help here at EFF, with our tireless intake coordinator being the first point of contact for many. All too often, however, the help needed isn’t legal or technical. Instead, users just need an answer to a simple question: what does this company want me to do to get my account back?

People lose a lot when they lose their account. For example, being kicked off Amazon could mean losing access to your books, music, pictures, or anything else you have only licensed, not bought, from that company. But the loss can have serious financial consequences for people who rely on the major social media platforms for their livelihoods, the way video makers rely on YouTube or many artists rely on Facebook or Twitter for promotion.

And it’s even worse when you can’t figure out why your account was closed, much less how to get it restored. The deep flaws in the DMCA takedown process are well-documented, but at least the rules of a DMCA takedown are established and laid out in the law. Takedowns based on ill-defined company policies, not so much.

Over the summer, writer and meme king Chuck Tingle found his Twitter account suspended due to running afoul of Twitter’s ill-defined repeat infringer policy. That Twitter has such a policy is not a problem in and of itself: to take advantage of the DMCA safe harbor, Twitter is required to have one. It’s not even a problem that the law doesn’t specify what the policy needs to look like—flexibility is vital for different services to do what makes the most sense for them. However, a company has to make a policy with an actual, tangible set of rules if it expects people to be able to follow it.

This is what Twitter says:

What happens if my account receives multiple copyright complaints?

If multiple copyright complaints are received Twitter may lock accounts or take other actions to warn repeat violators. These warnings may vary across Twitter’s services.  Under appropriate circumstances we may suspend user accounts under our repeat infringer policy. However, we may take retractions and counter-notices into account when applying our repeat infringer policy. 

That is frustratingly vague. “Under appropriate circumstances” doesn’t tell users what to avoid or what to do if they run afoul of the policy. Furthermore, if an account is suspended, this does not tell users what to do to get it back. We’ve confirmed that “We may take retractions and counter-notices into account when applying our repeat infringer policy” means that Twitter may restore the account after a suspension or ban, in response to counter-notices and retractions of copyright claims. But an equally reasonable reading of it is that they will take those things into account only before suspending or banning a user, so counter-noticing won’t help you get your account back if you lost it after a sudden surge in takedowns.

And that assumes you can even send a counter-notice. When Tingle lost his account under its repeat infringer policy, he found that because his account was suspended, he couldn’t use Twitter’s forms to contest the takedowns. That sounds like a minor thing, but it makes it very difficult for users to take the steps needed to get their accounts back.

Often, being famous or getting press attention to your plight is the way to fast-track getting restored. When Facebook flagged a video of a musician playing a public domain Bach piece, and Sony refused to release the claim, the musician got it resolved by making noise on Twitter and emailing the heads of various Sony departments. Most of us don’t have that kind of reach.

Even when there are clear policies, those rules mean nothing if the companies don’t hold up their end of the bargain. YouTube’s Content ID rules claim a video will be restored if, after an appeal, a month goes by with no word from the complaining party. But there are numerous stories from creators in which a month passes, nothing happens, and nothing is communicated to them by YouTube. While YouTube’s rules need fixing in many ways, many people would be grateful if YouTube would just follow those rules.

These are not new concerns. Clear policies, notice to users, and a mechanism for appeal are at the core of the Santa Clara Principles for content moderation. They are basic best practices for services that allow users to post content, and companies that have been hosting content for more than a decade have no excuse not to follow them.

EFF is not a substitute for a company helpline. Press attention is not a substitute for an appeals process. And having policies isn’t a substitute for actually following them.

Katharine Trendacosta

Crowd-Sourced Suspicion Apps Are Out of Control

3 months ago

Technology rarely invents new societal problems. Instead, it digitizes them, supersizes them, and allows them to balloon and duplicate at the speed of light. That’s exactly the problem we’ve seen with location-based, crowd-sourced “public safety” apps like Citizen.

These apps come in a wide spectrum—some let users connect with those around them by posting pictures, items for sale, or local tips. Others, however, focus exclusively on things and people that users see as “suspicious” or potentially hazardous. These alerts run the gamut from active crimes, or the aftermath of crimes, to generally anything a person interprets as helping to keep their community safe and informed about the dangers around them.

These apps are often designed with a goal of crowd-sourced surveillance, like a digital neighborhood watch. A way of turning the aggregate eyes (and phones) of the neighborhood into an early warning system. But instead, they often exacerbate the same dangers, biases, and problems that exist within policing. After all, the likely outcome to posting a suspicious sight to the app isn’t just to warn your neighbors—it’s to summon authorities to address the issue.

And even worse than incentivizing people to share their most paranoid thoughts and racial biases on a popular platform are the experimental new features constantly being rolled out by apps like Citizen. First, it was a private security force, available to be summoned at the touch of a button. Then, it was a service to help make it (theoretically) even easier to summon the police by giving users access to a 24/7 concierge service that will call the police for you. There are scenarios in which a tool like this might be useful—but to charge people for it, and more importantly, to make people think they will eventually need a service like this, adds to the idea that companies benefit from your fear.

These apps might seem like a helpful way to inform your neighbors if the mountain lion roaming your city was spotted in your neighborhood. But in practice they have been a cesspool of racial profiling, cop-calling, gatekeeping, and fear-spreading. Apps where a so-called “suspicious” person’s picture can be blasted out to a paranoid community, because someone with a smartphone thinks they don’t belong, are not helping people to “Connect and stay safe.” Instead, they promote public safety for some, at the expense of surveillance and harassment for others.

Digitizing an Age Old Problem

Paranoia about crime and racial gatekeeping in certain neighborhoods is not a new problem. Citizen takes that old problem and digitizes it, making those knee-jerk sightings of so-called suspicious behavior capable of being broadcast to hundreds, if not thousands of people in the area.

But focusing those forums on crime, suspicion, danger, and bad-faith accusations can create havoc. No one is planning their block party on Citizen the way they might on other apps; Citizen is filled with notifications like “unconfirmed report of a man armed with pipe” and “unknown police activity.” Neighbors aren’t likely to coordinate trick-or-treating on a forum they use exclusively to see if any cars in their neighborhood were broken into. And when you download an app that makes you feel like a neighborhood you were formerly comfortable in is now under siege, you’re going to use it not just to doom-scroll your way through strange sightings, but also to report your own suspicions.

There is a massive difference between listening to police scanners, a medium that reflects the ever-changing and updating nature of fluid situations on the street, and taking one second of that live broadcast and turning it into a fixed, unverified, news report. Police scanners can be useful by many people for many reasons and ought to stay accessible, but listening to a livestream presents an entirely different context than seeing a fixed geo-tagged alert on a map. 

As the New York Times writes, Citizen is “converting raw scanner traffic—which is by nature unvetted and mostly operational—into filtered, curated digital content, legible to regular people, rendered on a map in a far more digestible form.” In other words, they’re turning static into content with the same formula the long-running show Cops used to normalize both paranoia and police violence.

Police scanners reflect the raw data of dispatch calls and police response to them, not a confirmation of crime and wrongdoing. This is not to say that the scanner traffic isn’t valuable or important—the public often uses it to learn what police are doing in their neighborhood. And last year, protesters relied on scanner traffic to protect themselves as they exercised their First Amendment rights.

But publication of raw data is likely to give the impression that a neighborhood has far more crime than it does. As any journalist will tell you, scanner traffic should be viewed like a tip and be the starting point of a potential story, rather than being republished without any verification or context. Worse, once Citizen receives a report, many stay up for days, giving the overall impression to a user that a neighborhood is currently besieged by incidents—when many are unconfirmed, and some happened four or five days ago.

From Neighborhood Forum to Vigilante-Enabler

It’s well known that Citizen began its life as “Vigilante,” and much of its DNA and operating procedure continue to match its former moniker. Citizen, more so than any other app, is unsure if it wants to be a community forum or a Star Wars cantina where bounty hunters and vigilantes wait for the app to post a reward for information leading to a person’s arrest.

When a brush fire broke out in Los Angeles in May 2021, almost a million people saw a notification pushed by Citizen offering a $30,000 reward for information leading to the arrest of a man they thought was responsible. It is the definition of dangerous that the app offered money to thousands of users, inviting them to turn over information on an unhoused man who was totally innocent.

Make no mistake, this kind of crass stunt can get people hurt. It demonstrates a very narrow view of who the “public” is and what “safety” entails.

Ending Suspicion as a Service

Users of apps like Citizen, Nextdoor, and Neighbors should be vigilant about unverified claims that could get people hurt, and be careful not to feed the fertile ground for destructive hoaxes.

These apps are part of the larger landscape that law professor Elizabeth Joh calls “networked surveillance ecosystems.” The lawlessness that governs private surveillance networks like Amazon Ring and other home surveillance systems—in conjunction with social networking and vigilante apps—is only exacerbating age-old problems. This is one ecosystem that should be much better contained.

Matthew Guariglia

On Global Encryption Day, Let's Stand Up for Privacy and Security

3 months ago

At EFF, we talk a lot about strong encryption. It’s critical for our privacy and security online. That’s why we litigate in courts to protect the right to encrypt, build technologies to encrypt the web, and lead the fight against anti-encryption legislation like last year’s EARN IT Act.

We’ve seen big victories in our fight to defend encryption. But we haven’t done it alone. That’s why we’re proud this year to join dozens of other organizations in the Global Encryption Coalition as we celebrate the first Global Encryption Day, which is today, October 21, 2021.

For this inaugural year, we’re joining our partner organizations to ask people, companies, governments, and NGOs to “Make the Switch” to strong encryption. We’re hoping this day can encourage people to make the switch to end-to-end encrypted platforms, creating a more secure and private online world. It’s a great time to turn on encryption on all the devices or services you use, or switch to an end-to-end encrypted app for messaging—and talk to others about why you made that choice. Strong passwords and two-factor authentication are also security measures that can help keep you safe.

If you already have a handle on encryption and its benefits, today would be a great day to talk to a friend about it. On social media, we’re using the hashtag #MakeTheSwitch.

The Global Encryption Day website has some ideas about what you could do to make your online life more private and secure. Another great resource is EFF’s Surveillance Self-Defense guide, where you can get tips on everything from private web browsing, to using encrypted apps, to keeping your privacy in particular security scenarios—like attending a protest, or crossing the U.S. border.

We need to keep talking about the importance of encryption, partly because it’s under threat. In the U.S. and around the world, law enforcement agencies have been seeking an encryption “backdoor” to access peoples’ messages. At EFF, we’ve resisted these efforts for decades. We’ve also pushed back against efforts like client-side scanning, which would break the promises of user privacy and security while technically maintaining encryption.

The Global Encryption Coalition is listing events around the world today. EFF Senior Staff Technologist Erica Portnoy will be participating in an “Ask Me Anything” about encryption on Reddit, at 17:00 UTC, which is 10:00 A.M. Pacific Time. Jon Callas, EFF Director of Technology Projects, will join an online panel about how to improve user agency in end-to-end encrypted services, on Oct. 28.

Joe Mullin

New Global Alliance Calls on European Parliament to Make the Digital Services Act a Model Set of Internet Regulations Protecting Human Rights and Freedom of Expression

3 months ago

The European Parliament’s regulations and policy-making decisions on technology and the internet have unique influence across the globe. With great influence comes great responsibility. We believe the European Parliament (EP) has a duty to set an example with the Digital Services Act (DSA), the first major overhaul of European internet regulations in 20 years. The EP should show that the DSA can address tough challenges—hate speech, misinformation, and users’ lack of control on big platforms—without compromising human rights protections, free speech and expression rights, and users’ privacy and security.

Balancing these principles is complex, but imperative. A step in the wrong direction could reverberate around the world, affecting fundamental rights beyond European Union borders. To this end, 12 civil society organizations from around the globe, standing for transparency, accountability, and human rights-centered lawmaking, have formed the Digital Services Act Human Rights Alliance to establish and promote a world standard for internet platform governance. The Alliance comprises digital and human rights advocacy organizations representing diverse communities across the globe, including in the Arab world, Europe, United Nations member states, Mexico, Syria, and the U.S.

In its first action towards this goal, the Alliance today is calling on the EP to embrace a human rights framework for the DSA and take steps to ensure that it protects access to information for everyone, especially marginalized communities; rejects inflexible and unrealistic takedown mandates that lead to over-removals and impinge on free expression; and strengthens mandatory human rights impact assessments so that issues like faulty algorithmic decision-making are identified before people get hurt.

This call to action follows a troubling round of amendments approved by an influential EP committee that crossed red lines protecting fundamental rights and freedom of expression. EFF and other civil society organizations told the EP prior to the amendments that the DSA offers an unparalleled opportunity to address some of the internet ecosystem’s most pressing challenges and help better protect fundamental rights online—if done right.

So, it was disappointing to see the EP committee take a wrong turn, voting in September to limit liability exemptions for internet companies that perform basic functions of content moderation and content curation, force companies to analyze and indiscriminately monitor users’ communication or use upload filters, and bestow special advantages, not available to ordinary users, on politicians and popular public figures treated as trusted flaggers.

In a joint letter, the Alliance today called on the EU lawmakers to take steps to put the DSA back on track:

  • Avoid disproportionate demands on smaller providers that would put users’ access to information in serious jeopardy.
  • Reject legally mandated strict and short time frames for content removals that will lead to removals of legitimate speech and opinion, impinging rights to freedom of expression.
  • Reject mandatory reporting obligations to Law Enforcement Agencies (LEAs), especially without appropriate safeguards and transparency requirements.
  • Prevent public authorities, including LEAs, from becoming trusted flaggers, and subject the conditions for becoming a trusted flagger to regular review and proper public oversight.
  • Consider mandatory human rights impact assessments as the primary mechanism for examining and mitigating systemic risks stemming from platforms' operations.

For the DSA Human Rights Alliance Joint Statement:

For more on the DSA:

Karen Gullo