Surveillance Self-Defense and Security Education: Year in Review 2020


As the world rapidly changed in 2020, new threats arose to our digital security. The shift to online education and the wave of police brutality protests brought new avenues for surveillance, so EFF created new resources to help people protect themselves.

EFF maintains a repository of self-help resources for fighting back against surveillance across a variety of different platforms, devices, and threat models. We call it Surveillance Self-Defense, or SSD for short. 

SSD covers myriad topics, and is broken up into four main sections:

  • Basics: Overviews on what digital surveillance is and how you can fight it.
  • Tool Guides: Step-by-step tutorials on installing and using privacy and security tools.
  • Further Learning: Deep-dives on the theory behind protecting your digital privacy.
  • Security Scenarios: Playlists of our resources for specific use cases.

In 2017, we also launched the Security Education Companion, also known as SEC, as a sister site to SSD. It’s geared toward people who would like to help their communities learn about digital security, but are new to the art of security training.

SEC also features four main areas of educational resources:

  • Security Education 101: Articles on foundational teaching concepts, and the logistics and considerations of planning a workshop or training.
  • Lesson Modules: Guides for training learners to use password managers, lock down their social media, and more.
  • Teaching Materials: Handouts and gifs to help illustrate new concepts in approachable ways for any skill level.
  • Security News: An archive of curated EFF Deeplinks posts for trainers, technologists, and educators who teach digital security.
Major Updates to SSD and SEC in 2020

Privacy for Students

Our student privacy guide is a wide-ranging breakdown of the ways schools spy on students, both in and out of the classroom. It goes over the types of technologies schools can use and the data that they can gather, with strategies that students can use to protect themselves and their peers from invasive school surveillance.

Understanding and Circumventing Network Censorship

We revamped and renamed our previous censorship circumvention guide to break down more fully the ways in which censorship and surveillance go hand in hand, and the harms they bring to users. We also go over how network censorship happens: where the blocking occurs, and by what mechanisms. Finally, the guide gives users options for circumvention techniques, along with the risks and benefits of each.
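As a rough illustration of one mechanism the guide discusses, DNS-based blocking, here is a small sketch of our own (not from the guide; it assumes Python 3.9+ on a system with the `dig` utility installed). Comparing the answers two resolvers give for the same name can hint at DNS tampering, though CDNs and geo-DNS cause plenty of benign mismatches too:

    import subprocess

    def resolve(domain: str, resolver: str) -> list[str]:
        # Ask one specific resolver for A records using the `dig` utility.
        out = subprocess.run(["dig", "+short", domain, f"@{resolver}"],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

    domain = "example.com"  # hypothetical test domain
    print("Cloudflare:", resolve(domain, "1.1.1.1"))
    print("Google:    ", resolve(domain, "8.8.8.8"))

A mismatch here is only a hint, never proof; the guide covers the fuller picture of where and how blocking happens.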

Attending a Protest

Our guide on attending protests went through a major update in response to the protests against police brutality this past summer. We added new sections on dressing for anonymity and safety to counter the surveillance techniques used at protests, and on considerations for transit, location, and social media tracking. We also provided additional resources to help address self-censorship in the face of surveillance risk, and guidance on posting images mindfully to minimize exposing other protesters to potential harm.

Lesson Module: Phishing and Malware 

We added to SEC’s training repertoire this year by creating a lesson module on teaching others about phishing and malware. This resource is grounded firmly in learner empowerment, not fear. These topics can be daunting for people just learning how to protect themselves online, so we frame this training around building up learners’ awareness and understanding rather than recommending specific tools. An additional resource released in tandem with this lesson is our malware handout, a double-sided informative resource on the common types of malware and how to protect devices against them.

Check out the rest of Surveillance Self-Defense to learn more about protecting yourself online, and Security Education Companion for more of our digital security training resources, at the beginner and intermediate levels of learning.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Lindsay Oliver

The U.S. Internet Is Being Starved of Its Potential: 2020 in Review


Over a year ago, EFF raised the alarm about the desperate need for the United States to have a universal fiber infrastructure plan in order to ensure that all Americans can obtain access to 21st century communications technology. Since then, we’ve produced technical research showing why fiber is vastly superior to all of the alternative last-mile broadband options in terms of future potential, published legal research on how the U.S. regulatory system started getting it wrong (as far back as 2005), and suggested a path forward at the federal and state level (including legislation) for transitioning U.S. communications infrastructure toward a fiber-for-all future.

Since then, the pandemic changed our world, as remote work and education became a necessity for most people. At the very start of the stay-at-home orders, EFF expressed our concern that our failure to deliver ubiquitous, affordable, future-proofed infrastructure would hurt the most vulnerable. People who lack fiber infrastructure are stuck with second-class Internet access of limited potential as prices continue to rise, slow speeds become obsolete, and the need for better access grows. Most notably, in response to these problems, the House of Representatives passed a universal fiber plan as part of the COVID-19 recovery effort, and we continue to make the case to the U.S. Senate, which has passed no universal 21st-century broadband plan, as to why Majority Whip Clyburn’s Accessible, Affordable Internet for All Act is the federal answer.

But so long as our local, state, and federal governments do not prioritize delivering future-proofed infrastructure to all people, our ability to make full use of the 21st century Internet will be limited. New services and applications will be tested and created in Asia, not here, and the next Silicon Valley, premised on high-upload, low-latency applications and services, will not be in California.

America Is Behind Because of Choices Made by a Handful of Political and Regulatory Leaders

A billion fiber optic connections to the Internet are coming online in just a few years. A large majority of them will be in Asia, primarily led by China. These connections have already proven to be future-proof, capable of reaching not just gigabit speeds, but multi-gigabit speeds. Fiber is not only faster; it’s also cheaper long-term.


No other connection type even comes close. The future of the Internet is going to be fiber. Just not in the United States. Yet. We could still change this.

But for now, the United States remains woefully behind dozens of advanced economies, with an overwhelming share of its infrastructure dependent on slow legacy networks built primarily in the late 20th century. Those legacy copper and coaxial cable connections have failed to deliver connectivity robust enough to handle the immediate remote work and remote education needs of the COVID-19 pandemic. They will not handle the future.

Moreover, their costs are increasing due to obsolescence, and they will be useless for future applications and services dependent on high-speed, low-latency access. This lack of ubiquitous fiber is one of the reasons the United States lags so far behind in available 5G speeds, even on downloads.

On average, the United States has the slowest, most expensive Internet access market among advanced economies, which is choking off the Internet’s ability to be a force for improving American lives while the world marches forward. What the Internet becomes in the mid-to-late 21st century will not be an American story, unless we aggressively course-correct our infrastructure policies soon.

America Doesn’t Need a “Broadband Plan,” It Needs a Fiber Infrastructure Plan

A decade ago, the FCC issued a congressionally mandated “National Broadband Plan” establishing a goal of connecting 100 million U.S. homes to 100 Mbps download and 50 Mbps upload speeds by 2020. While national download speeds have advanced due to some cable industry changes, hybrid fiber/coaxial cable systems are still failing to deliver robust upload speeds. In fact, during the pandemic, when demand for broadband access was extremely high, cable systems failed to deliver.

Essentially, the COVID-19 crisis increased our Internet usage by a year’s worth of growth in a few weeks.

Fiber was able to handle it; cable was not (and 5G barely exists). Our technical analysis of broadband access options found overwhelmingly conclusive evidence that the inherent capacity of a fiber wire is orders of magnitude greater than that of all the alternative wired and wireless options. And most recently, we are seeing the wireless industry acknowledge the importance of widespread fiber to 5G’s future (while offering no solution other than “give us more money”).

While many in government talk about how we need to get “broadband” to everyone, what they should really be talking about is how we get 21st-century-ready fiber infrastructure to everyone. This distinction is important because we have already spent billions upon billions of dollars building “broadband” with virtually nothing to show for it. That happened because we subsidized slow speeds on any old network, with little expectation of future increases in capacity. For example, Frontier Communications received a large amount of federal subsidy but was never required to begin long-term upgrades to cost-efficient fiber, and the carrier ended in bankruptcy. It took all those federal dollars straight to the grave, because all that was required of it was to deliver 10 Mbps download / 1 Mbps upload Internet to as many people as possible. Those federal dollars were squandered on propping up obsolete copper networks in rural markets instead of long-term fiber, forcing us to spend the money again now on fiber.

This is why slow networks actually cost more than fiber: the number of years the investment remains useful drives your total costs (see the sketch below). The only state in the U.S. that appears to have escaped this fate is North Dakota, where nearly 67% of residents have gigabit fiber (the U.S. average sits around 30% of households). Broadband looks so different there because local private and local public providers spent those dollars on fiber (and, notably, no national carriers sell broadband in North Dakota). Big legacy industry would love for the government to keep spending large amounts of money on slow-speed perpetual subsidies (which is still happening today at the FCC and in states like California) because it solves nothing and maintains their slow-Internet monopoly.
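A simple amortization comparison shows the logic. The numbers below are hypothetical, chosen only to illustrate the shape of the argument, not actual deployment costs:

    # Hypothetical, illustrative numbers only: fiber costs more upfront but
    # stays useful for decades, so its cost per year of service is lower.
    fiber_cost, fiber_life = 1_500, 30    # per-home build cost ($), useful life (years)
    copper_cost, copper_life = 600, 7     # cheaper to patch up, but obsolete sooner

    print(f"Fiber:  ${fiber_cost / fiber_life:.0f} per home per year of useful life")
    print(f"Copper: ${copper_cost / copper_life:.0f} per home per year of useful life")

On assumptions like these, the “cheap” network is the expensive one, and every subsidy cycle spent propping it up has to be spent again.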

Continued government spending on this approach, though, is akin to giving the Joker a pile of cash and watching him set it on fire.

The Absence of Regulation Is Part of the Problem 

What holds back the large national broadband providers is their resistance to making long-term investments in infrastructure, as opposed to chasing short-term profits. As noted earlier, large publicly traded ISPs are ill-equipped to address the national need for fiber because of its high upfront costs and their standard three- to five-year return-on-investment formulas for determining where to build. This is why even a city as densely populated as New York had to spend six years suing Verizon to expand fiber, despite the fact that serving all of New York City is completely profitable in the aggregate.

There are very few legitimate reasons why densely populated cities like Los Angeles and Oakland don’t have near-universal fiber at this point. Knowing this, EFF has called on the California Public Utilities Commission (CPUC) to simply require every broadband provider serving a major city with a population density in excess of 1,000 people per square mile to give everyone fiber as a condition of doing business in the state. It is already against state law to discriminate based on socioeconomic status, and the evidence is coming in that fiber is going to high-income neighborhoods and skipping low-income ones. In fact, given that income can serve as a proxy for race, recent studies are showing that Black neighborhoods are being skipped by fiber in Los Angeles County, and that high-speed access in Oakland is being deployed in a discriminatory fashion that matches the redlining that once occurred with housing.

California state law is already clear that you aren’t allowed to profit from unreasonable discrimination, but the regulator has to enforce that law for it to matter. The FCC can also address this problem, but only after it reverses the federal deregulation of 2017, when it repealed net neutrality as part of the Restoring Internet Freedom Order. Requiring broadband carriers to operate in a non-discriminatory manner (as they must if we treat them as common carriers) is about much more than net neutrality; it is also about how they deliver access infrastructure to the public. Until then, it will be on states and local governments to address this problem.

Localism in Broadband and Investments in Fiber Will Be How We Get 21st Century Access to All People 

If the large national carriers are ill-equipped to take on the societal challenge of connecting everyone to robust, 21st-century-ready access to the Internet, then we need to explore our alternatives and rethink the government’s approach. The most promise appears to come from smaller, locally held private and public entities that can make long-term, patient investments without being subject to Wall Street’s fast-profit expectations. Such entities are deploying fiber in places the national carriers have long ignored, building the 21st century in areas previously left behind: the Missouri cooperative United Fiber delivers fiber to the home at a density of only 2.5 people per square mile, and a joint venture between Alabama Power (the state’s electric utility) and Mississippi’s C-Spire is delivering fiber to the home throughout the state of Alabama.

New models of delivering access are proving successful, such as Utah’s multi-city open-access fiber network, which has lowered the barrier to entry so much that more than a dozen small businesses sell broadband services over the public network. When the pandemic hit, the network continued to expand within the state, with new cities announced on a regular basis as the need for high-speed access exploded. And in places where fiber is already built, extraordinary opportunities are available to help low-income families, such as Chattanooga’s free ten-year 100/100 Mbps Internet offering, funded with only $2.50 per student per month in charitable giving.

If your community builds a highly efficient, future-proof network, things like free Internet access checked out at your local library become feasible with a little bit of government support. It is incumbent on every community to start figuring out how to get such a network for itself, because by now it should be clear that the large national ISPs are not coming. To this end, EFF will be supporting California legislation, SB 4, that would enable local communities to invest more than $1 billion in public networks through bonds. And we will continue to support a national solution as proposed by Majority Whip James Clyburn’s Accessible, Affordable Internet for All Act, which establishes a universal fiber program that would eliminate the digital divide for this generation and the next. The federal legislation already passed the House of Representatives, but was not considered by the sitting United States Senate majority. Given that broadband is as important as water and electricity today, we hope the Senate will move forward on a national broadband infrastructure package in 2021. The only reason the digital divide remains in 2020 is that too many in government willfully allowed it to continue.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Ernesto Falcon

EU and the Digital Services Act: 2020 Year in Review


While 2019 saw the EU ram through a disastrous Internet copyright rule that continues to reverberate through legal and policy circles, 2020 was a very different story, as the EU introduced the Digital Services Act (DSA), the most significant reform of European platform regulation in twenty years. It is an unparalleled opportunity to formulate a bold, evidence-based vision to address today’s most pressing challenges.

One area we’re especially excited by is the EU’s cautious enthusiasm for interoperability, an anti-monopoly remedy that is highly specific to tech and deeply embedded in the history of technology. Early rumblings about future enforcement hint at interop's centrality to the EU's coming strategy, with some specific interoperability mandates under discussion.

In our policy advocacy surrounding the DSA, we will focus on four key areas: platform liability, interoperability mandates, procedural justice, and user control. As we introduce the principles that will guide our policy work, our message to the EU has been clear: Preserve what works. Fix what is broken. And put users back in control.

Limited Liability and No Monitoring: Preserve What Works

The DSA is an important chance to update the legal responsibilities of platforms and enshrine users’ rights vis-à-vis the powerful gatekeeper platforms that control much of our online environment. But there is also a risk that the Digital Services Act will follow in the footsteps of recent regulatory developments in Germany, France, and Austria. The German NetzDG, the French Avia bill (which we helped bring down in court), and the Austrian law against hate speech (which we advised the Commission to push back on) show a worrying trend in the EU toward forcing platforms to police users’ content without considering what matters most: giving a voice to the users affected by content takedowns.

In our detailed feedback to the EU on this question, we stress fair and just notice-and-action procedures: strong safeguards to protect users’ rights when their content is taken down or made inaccessible.

  1. Reporting Mechanisms: Intermediaries should not be held liable for choosing not to remove content simply because they received a private notification from a user. With narrow exceptions, the EU should adopt the principle that intermediaries obtain actual knowledge of illegality only when presented with a court order.
  2. A Standard for Transparency and Justice in Notice and Action: Platforms should provide a user-friendly, visible, and swift appeals process to allow for the meaningful resolution of content moderation disputes. Appeals mechanisms must also be accessible, easy to use, follow a clearly communicated timeline, and must include human review.
  3. Open the Black Box that is Automated Decision Making: In the light of automated content moderation’s fundamental flaws, platforms should provide as much transparency as possible about how they use algorithmic tools.
  4. Reinstatement of Wrongfully Removed Content: Because erroneous content moderation decisions are so common and have such negative effects, it is crucial that platforms reinstate users’ content when the removal decision cannot be justified by a sensible interpretation of the platforms’ rules, or when the removal was simply in error.
  5. Coordinated and Effective Regulatory Oversight: Coordination between independent national authorities should be strengthened to enable EU-wide enforcement, and platforms should be incentivized to follow their due diligence duties through, for example, meaningful sanctions harmonized across the European Union.

Facing the most significant reform project of Internet law undertaken in two decades, the EU should choose to protect the Internet rather than coerce online platforms to police their users. EFF intends to fight for users’ rights, transparency, anonymity, and limited liability for online platforms every step of the way.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Christoph Schmon

Snowden: "We Can Fix a Broken System"


Below is a message from whistleblower Edward Snowden. His revelations about secret surveillance programs opened the world’s eyes to a new level of government misconduct, and reinvigorated EFF’s continuing work in the courts and with lawmakers to end unlawful mass spying.

EFF is grateful to Ed for his support in our court cases, and to people like you for sustaining EFF during our Year-End Challenge membership drive. Your help is essential to pushing back the tide of unchecked surveillance.

___________________________

Seven years ago I did something that would change my life and alter the world’s relationship to surveillance forever.

When journalists revealed the truth about state deception and illegal conduct against citizens, it was human rights and civil liberties groups like EFF—backed by people around the world just like you—that seized the opportunity to hold authority to account.

Surveillance quiets resistance and takes away our choices. It robs us of private space, eroding our dignity and the things that make us human.

When you’re secure from the spectre of judgement, you have room to think, to feel, and to make mistakes as your authentic self. That’s where you test your notions of what’s right. That’s when you question the things that are wrong.

By sounding the alarm and shining a light on mass surveillance, we force governments around the world to confront their wrongdoing.

Slowly, but surely, grassroots work is changing the future. Laws like the USA Freedom Act have just begun to rein in excesses of government surveillance. Network operators and engineers are triumphantly “encrypting all the things” to harden the Internet against spying. Policymakers began holding digital privacy up to the light of human rights law. And we’re all beginning to understand the power of our voices online.

This is how we can fix a broken system. But it only works with your help.

For 30 years, EFF members have joined forces to ensure that technology supports freedom, justice, and innovation for all people. It takes unique expertise in the courts, with policymakers, and on technology to fight digital authoritarianism, and thankfully EFF brings all of those skills to the fight. EFF relies on participation from you to keep pushing the digital rights movement forward.

Each of us plays a crucial role in advancing democracy for ourselves, our neighbors, and our children. I hope you’ll answer the call by joining EFF to build a better digital future together.

Sincerely,

Edward Snowden

Join EFF

For the future of privacy and free speech

Root User

High Tech Police Surveillance of Protests and Activism: Year in Review 2020


This summer’s Black-led protest movement against police violence was one of the largest political movements in the history of the United States, and with it came a massive proliferation of government surveillance technology aimed at activists and demonstrators.

EFF has been standing up for the right to protest without being surveilled for 30 years now, and this year was no different. As EFF Executive Director Cindy Cohn wrote earlier this year, “EFF stands with the communities mourning the victims of police homicide. We stand with the protesters who are plowed down by patrol cars. We stand with the journalists placed in handcuffs or fired upon while reporting these atrocities. And we stand with all those using their cameras, phones and digital tools to make sure we cannot turn away from the truth.” 

And we stood with protesters by providing support with lawsuits, offering legal support to activists, delivering surveillance self-defense education for protesters, reaffirming the right to film the police, analyzing the latest surveillance tech, and providing tools for people on the ground to know and understand what equipment their local police departments are using to spy on activists.

We even uncovered police surveillance of protests in our own backyard. Using public records requests to see the correspondence between the San Francisco Police Department and the Union Square Business Improvement District, which operates several hundred surveillance cameras in the area, we exposed that the SFPD gained live access to over 400 cameras to spy on protesters. Because San Francisco has an ordinance prohibiting the police from gaining access to any new surveillance equipment without approval from the Board of Supervisors, EFF is now representing activists in a lawsuit against the city for violating this law. This may be the nation’s first test case to enforce a municipal CCOPS (community control of police surveillance) ordinance.

When it comes to spying on protests, police have a lot of surveillance tools at their disposal to track, identify, and monitor demonstrators and people exercising their constitutional rights to assemble and protest. In 2020, reporters, civil rights advocates, and people on the ground documented government use of aerial surveillance, social media monitoring, and much more. As with the government surveillance of Occupy Wall Street or the 2015 demonstrations led by the Movement for Black Lives, it could take years for us to learn all of the myriad ways local, state, and federal law enforcement surveilled organizers.

The right to protest, assemble, and associate is a cornerstone of democracy. That right is undermined when people legitimately fear retribution from authorities for their participation in the political system and public sphere. Digital surveillance at protests, and the ongoing surveillance of people’s online activities, is a profound threat to those rights. EFF has been standing up for protesters in the streets and online for decades, and 2020 was no exception. As people continue to take to the streets in 2021, we’ll continue to have their back.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Matthew Guariglia

How COVID Changed Content Moderation: Year in Review 2020


In a year that saw every facet of online life reshaped by the coronavirus pandemic, online content moderation and platform censorship were no exception.

After a successful Who Has Your Back? campaign in 2019 to encourage large platforms to adopt best practices and endorse the Santa Clara Principles, 2020 was poised to be a year of more progress toward transparency and accountability in content moderation across the board. The pandemic changed that, however, as companies relied even more on automated tools in response to disrupted content moderator workforces and new types and volumes of misinformation.

At a moment when online platforms became newly vital to people’s work, education, and lives, this uptick in automation threatens freedom of expression online. That makes the Santa Clara Principles on Transparency and Accountability in Content Moderation more important than ever—and, like clockwork, transparency reporting later in the year demonstrated the pitfalls and costs of automated content moderation.

As the pandemic wore on, new laws regulating fake news online led to censorship and prosecutions across the world, including notable cases in Cambodia, India, and Turkey that targeted and harmed journalists and activists.

In May, Facebook announced its long-awaited Oversight Board. We had been skeptical from day one, and we were disappointed to see the Board launch without adequate representation from the Middle East, North Africa, or Southeast Asia, and without advocates for the LGBTQ or disability communities. Although the Board was designed to identify and decide Facebook’s most globally significant content disputes, the Board’s composition was, and is, directed more at parochial U.S. concerns.

In June, Facebook disabled the accounts of more than 60 Tunisian users with no notice or transparency. We reminded companies how vital their platforms are to speech: while the current PR storm swirls around whether or not they fact-check President Trump, we cannot forget that those most impacted by corporate speech controls are not politicians, celebrities, or right-wing provocateurs, but some of the world’s most vulnerable people, who lack the access to corporate policymakers to which states and Hollywood have become accustomed.

As the EU’s regulation to curb “terrorist” or violent extremist content online moved forward in the latter half of the year, the Global Internet Forum to Counter Terrorism (GIFCT) took center stage as a driving force behind the bulk of allegedly terrorism-related takedowns and censorship online. And in September, we saw Zoom, Facebook, and YouTube cite U.S. terrorism designations when they refused to host Palestinian activist Leila Khaled.

At the same time, EFF has put forward EU policy principles throughout 2020 that would give users, not content cartels like GIFCT, more control over and visibility into content decisions.

The United States’ presidential election in November drove home the same problems we saw with Facebook’s Oversight Board in May and the string of disappeared accounts in June: tech companies and online platforms have focused on American concerns and politics to the detriment of addressing problems in, and learning from, the rest of the world. While EFF made clear what we were looking for and demanding from companies as they tailored content moderation policies to the U.S. election, we also reminded companies to first and foremost listen to their global user base.

Looking ahead to 2021, we will continue our efforts to hold platforms and their policies accountable to their users. In particular, we’ll be watching developments in AI and automation for content moderation, how platforms handle COVID vaccine misinformation, and how they apply election-related policies to significant elections coming up around the world, including in Uganda, Peru, Kyrgyzstan, and Iran.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.



Gennie Gebhart

How We Saved .ORG: 2020 in Review


If you come at the nonprofit sector, you’d best not miss.

Nonprofits and NGOs around the world were stunned last November when the Internet Society (ISOC) announced that it had agreed to sell the Public Interest Registry (the organization that manages the .ORG top-level domain, or TLD) to private equity firm Ethos Capital. EFF and other leaders in the NGO community sprang into action, writing a letter to ISOC urging it to stop the sale. What followed was possibly the most dramatic show of solidarity from the nonprofit sector of all time. And we won.


Prior to the announcement, EFF had spent six months voicing our concerns to the Internet Corporation for Assigned Names and Numbers (ICANN) about the 2019 .ORG Registry Agreement, which gave the owner of .ORG new powers to censor nonprofits’ websites (the agreement also lifted a longstanding price cap on .ORG registrations and renewals).

.ORG demonstration at ICANN

The Registry Agreement gave the owner of .ORG the power to implement processes to suspend domain names based on accusations of “activity contrary to applicable law.” It effectively created a new pressure point that repressive governments, corporations, and other bad actors could use to silence their critics without going through a court. That should alarm any nonprofit or NGO, especially those that work under repressive regimes or frequently criticize powerful corporations.

Throughout that six-month process of navigating ICANN’s labyrinthine decision-making structure, none of us knew that ISOC would soon be selling PIR. With .ORG in the hands of a private equity firm, those fears of censorship and price gouging became a lot more tangible for nonprofits and NGOs. The power to take advantage of .ORG users was being handed to a for-profit company whose primary obligation was to make money for its investors.

Oversight by a nonprofit was always part of the plan for .ORG. When ISOC competed in 2002 for the contract to manage the TLD, it used its nonprofit status as a major selling point. As ISOC’s then president Lynn St. Amour put it, PIR would “draw upon the resources of ISOC’s extended global network to drive policy and management.”

More NGOs began to take notice of the .ORG sale and the danger it posed to nonprofits’ freedom of expression online. Over 500 organizations and 18,000 individuals had signed our letter by the end of 2019, including big-name organizations like Greenpeace, Consumer Reports, Oxfam, and the YMCA of the USA. At the same time, questions began to emerge (PDF) about whether Ethos Capital could possibly make a profit without some drastic changes in policy for .ORG.

By the beginning of 2020, the financial picture had become a lot clearer: Ethos Capital was paying $1.135 billion for .ORG, nearly a third of which was financed by a loan. No matter how well-meaning Ethos was, the pressure to sell “censorship as a service” would align with Ethos’ obligation to produce returns for its investors. The sector’s concerns were well-founded: the registry Donuts entered into a private deal with the Motion Picture Association in 2016 to fast-track suspensions of domains that the MPA claims infringe its members’ copyrights. It’s fair to ask whether PIR would engage in similar practices under the leadership of Donuts co-founder Jonathon Nevett. Six members of Congress wrote a letter to ICANN in January urging it to scrutinize the sale more carefully.

.ORG demonstration at ICANN

A few days later, EFF, nonprofit advocacy group NTEN, and digital rights groups Fight for the Future and Demand Progress participated in a rally outside of the ICANN headquarters in Los Angeles. Our message was simple: stop the sale and create protections for nonprofits. Before the protest, ICANN staff reached out to the organizers offering to meet with us in person, but on the day of the protest, ICANN canceled on us. That same week, Amnesty International, Access Now, the Sierra Club, and other global NGOs held a press conference at the World Economic Forum to tell world leaders that selling .ORG threatens civil society. All of the noise caught the attention of California Attorney General Xavier Becerra, who wrote to ICANN (PDF) asking it for key information about its review of the sale.


Recognizing that the heat was on, Ethos Capital and PIR hastily tried to build bridges with the nonprofit sector. Ethos attempted to convene a secret meeting with NGO sector leaders in February, and then abruptly canceled it. Ethos then announced that it would voluntarily limit price increases on .ORG registrations and renewals and establish a “stewardship council.” Like many details of the .ORG sale, what level of influence the stewardship council would have over PIR’s decisions was unclear. EFF executive director Cindy Cohn and NTEN CEO Amy Sample Ward responded in the Nonprofit Times:

The proposed “Stewardship Council” would fail to protect the interests of the NGO community. First, the council is not independent. The Public Interest Registry (PIR) board’s ability to veto nominated members would ensure that the council will not include members willing to challenge Ethos’ decisions. PIR’s handpicked members are likely to retain their seats indefinitely. The NGO community must have a real say in the direction of the .ORG registry, not a nominal rubber stamp exercised by people who owe their position to PIR.

Even Ethos’ promise to limit fee increases was rather hollow: if Ethos raised fees as much as the proposed rules allowed, the price of .ORG registrations would more than double over eight years. After those eight years, there would be no limits on fee increases whatsoever.
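The arithmetic is simple compounding. Here is a quick sketch, assuming the 10%-per-year average cap that was reported at the time and a hypothetical starting price:

    # Compounding a 10%-per-year increase over the eight-year cap period.
    # The 10% figure is the reported proposed cap; the starting price is hypothetical.
    price = 10.00
    for year in range(8):
        price *= 1.10
    print(f"After 8 years: ${price:.2f}")  # $21.44: more than double the original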

All the while, Ethos and PIR kept touting that with the new ownership would come new “products and services” for .ORG users, but they failed to give any information about what those offerings might entail. Cohn and Ward responded:

The product NGOs need from our registry operator is domain registration at a fair price that doesn’t increase arbitrarily. The service that operator must provide is to stand up to governments and other powerful actors when they demand that it silence us. It is more clear than ever that you cannot offer us either.

It’s almost poetic that the debate over .ORG reached a climax just as COVID-19 was becoming a worldwide crisis. Emergencies like this one are when the world most relies on nonprofits and NGOs; therefore, they’re also pressure tests for the sector. The crisis demonstrated that the NGO community doesn’t need fancy “products and services” from a domain registry: it needs simple, reliable, boring service. Those same members of Congress who’d scrutinized the .ORG sale wrote a more pointed letter to ICANN in March (PDF), plainly noting that there was no way that Ethos Capital could make a profit on its investment without making major changes at the expense of .ORG users.

.ORG demonstration at ICANN

Finally, in April, the ICANN board rejected the transfer of ownership of .ORG. “ICANN entrusted to PIR the responsibility to serve the public interest in its operation of the .ORG registry,” they wrote, “and now ICANN is being asked to transfer that trust to a new entity without a public interest mandate.”

While .ORG is safe for now, the bigger trend of registries becoming chokepoints for free speech online is as big a problem as ever. That’s why EFF is urging ICANN to reconsider its policies regarding public interest commitments—or as the Internet governance community has recently started calling them, registry voluntary commitments. Those are the additional rules that ICANN allows registries to set for specific top-level domains, like the new provisions in the .ORG Registry Agreement that allow the owner of .ORG to set policies to fast-track censoring speech online.

The story of the attempted .ORG sale is really the story of the power and resilience of the nonprofit sector. Every time Ethos and PIR tried to quell the backlash with empty promises, the sector responded even more loudly, gaining the voices of government officials, members of Congress, two UN Special Rapporteurs, and U.S. state charities regulators. As I said to that crowd of activists in front of ICANN’s offices, I’ve worked in the nonprofit sector for most of my adult life, and I’ve never seen the sector respond this unanimously to anything.

Thank you to everyone who stood up for .ORG, especially NTEN for its partnership on this campaign as a trusted leader in the nonprofit sector. If you were one of the 27,183 people who signed our open letter, or if you work for or support one of the 871 organizations that participated, then you were a part of this victory.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Elliot Harmon

2020 in Review


“Now more than ever,” “in these uncertain times,” “unprecedented”—we’re sure you have seen these words repeated over and over in the last twelve months, including from us here at EFF. They are clichés because they are true. 2020 has been the year that lasted a whole decade.

It almost seems appropriate that EFF turned 30 this year, a year of extreme highs and lows. A year where being able to get and stay online became vital to everyday life. A year where people took to the streets in protest and in celebration, and where mass surveillance often tracked them at both.

We are grateful to our more than 37,000 members at last count, who allowed us to fight for digital rights this year as they have for the last 29. We always feel the importance of our work, but this year we took on new challenges under very difficult circumstances. And it was you, our supporters, who helped us rise to meet them. Our goal this birthday year is to welcome 30,000 new or renewing members before July, and you have already helped us get more than halfway there.

We began the year with a fight that ended in a major victory: saving .ORG from falling into the hands of private equity. At the end of 2019, the Internet Society (ISOC) announced that it intended to sell the Public Interest Registry (PIR, the organization that oversees the .ORG domain name registry) to a private equity firm. Eventually, petitions to reject the sale received over 64,000 signatures, and nearly 900 organizations signed on. Joining them in their concerns were Members of Congress, UN Special Rapporteurs, and state charity regulators. This culminated in an April decision by ICANN to disapprove the sale of .ORG, a win for the public interest Internet.

Early in 2020, we also worked tirelessly to protect free speech, security, and privacy online. First, we renewed our effort to overturn the Allow States and Victims to Fight Online Sex Trafficking Act, known as FOSTA. While FOSTA was intended to fight the genuine problem of sex trafficking, the way the law is written achieves the opposite effect: it makes it harder for law enforcement to actually locate victims, and it punishes organizations and individuals doing important work. In the process, it does irreparable harm to the freedom of speech guaranteed by the First Amendment. Second, we led the charge against EARN IT, a bill that, like FOSTA before it, was aimed at a real problem, this time online child exploitation. And like FOSTA, the bill as written would seriously undermine speech, security, and innovation online. In 2020, EARN IT became the latest front in the ongoing war against encryption that certain parts of the U.S. government have been waging for decades. We will stand against it now, and continue to stand up for your security, just as we have in the past.

Of course, the COVID-19 pandemic soon took over our lives, same as everyone else’s. Like many, we navigated working, learning, and everything-elsing from home. We rapidly figured out that the COVID-19 crisis would bring many hasty, ill-thought-out proposals regarding technology, and would be used to “COVID-wash” others, so we brought to bear our 30 years of experience and knowledge. We opposed technological surveillance of the populace purportedly in service of “fighting” the pandemic. We opposed plans that claimed some form of app could work in place of testing and interview-based contact tracing. We opposed sharing anyone’s health data with police departments. And we opposed “bossware” that snoops on every mouse click in our home offices, and proctoring apps that improperly surveil students who must take exams at home.

On the other side, we supported plans that would get faster, cheaper, more reliable Internet access to as many Americans as possible. Internet usage like we’re seeing this year is only going to become more common, not less, so everyone (rural, urban, low-income, BIPOC) needs access to high-quality Internet. We supported open access to scientific information and research into the virus. We supported tinkerers having the right to fix broken medical devices, and easy online access to repair manuals for such devices.

2020 also made clear that longstanding EFF concerns, such as law enforcement surveillance and the ability to livestream encounters with the police, are part and parcel of a long-needed reckoning with racial injustice.  As we said this summer: Black lives matter on the streets. Black lives matter on the Internet. EFF stands with the communities mourning the victims of police homicide. We stand with the protesters who are plowed down by patrol cars. We stand with the journalists placed in handcuffs or fired upon while reporting these atrocities. And we stand with all those using their cameras, phones, and digital tools to make sure we cannot turn away from the truth.

As protesters took to the streets, EFF opened up our legal referral program to people facing legal troubles as a result of their participation in the Black-led demonstrations, especially where those troubles involved surveillance or device searches. We also discovered surveillance of protesters in our own backyard. According to records we obtained, the San Francisco Police Department (SFPD) conducted mass surveillance of protesters at the end of May and in early June using a downtown business district's camera network. We, with the ACLU, are representing plaintiffs in a suit against the city and county over SFPD’s unlawful use of these cameras.

It has been a long year. We also launched a podcast, introduced a new way for users to find and block trackers online, shined a light on Amazon Ring’s relationships with local law enforcement, encouraged European lawmakers to remember that the Internet is more than just the tech giants, and found ourselves defending fair use in the omegaverse. 2020 gave us new fights to fight, in addition to throwing into stark relief why fighting our old ones is so important. We cannot do this without our supporters, so please consider joining EFF.

EFF has an annual tradition of writing several blog posts on what we’ve accomplished this year, what we’ve learned, and where we have more to do. This year’s blog posts seem even wider-ranging than usual, since 2020 resisted all attempts at being about only one, five, or even a hundred things. We will update this page with new stories about digital rights in 2020 every day between now and New Year’s Day.

Donate to EFF

Support Digital Freedom

Cindy Cohn

ExamSoft Flags One-Third of California Bar Exam Test Takers for Cheating


One of EFF’s chief concerns about exam proctoring software, in addition to the fact that it subjects students to excessive surveillance, is the risk that it will incorrectly flag students for cheating (“false positives”). False positives can result either from the software’s technical failures or from its requirement that students have relatively new computers and access to near-broadband speeds. Last week, the California Bar released data confirming our fear: during its use of ExamSoft for the October Bar exam, over one-third of the nearly nine thousand online examinees were flagged by the software (13:00 into the video of the California Committee of Bar Examiners meeting).
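To see why a one-third flag rate is so damning, consider a back-of-the-envelope base-rate calculation. In the sketch below, only the examinee and flag counts come from the Bar's data; the cheating and detection rates are our hypothetical assumptions:

    # Illustrative base-rate arithmetic; the cheating and detection rates are
    # assumptions, only the examinee and flag counts come from the Bar's data.
    examinees = 9_000            # approximate number of online examinees
    flagged = 3_190              # flags reported by the California Bar
    cheat_rate = 0.01            # assume 1% of examinees actually cheated (generous)
    detection_rate = 0.95        # assume the software catches 95% of real cheaters

    true_positives = examinees * cheat_rate * detection_rate   # at most ~86 flags
    false_positives = flagged - true_positives                 # everything else

    print(f"Justified flags: at most ~{true_positives:.0f}")
    print(f"False positives: ~{false_positives:.0f} "
          f"({false_positives / flagged:.0%} of all flags)")

Even on these generous assumptions, roughly 97% of the flags would be false positives.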


This is outrageous. It goes without saying that, of the 3,190 applicants flagged by the software, the vast majority were not cheating. Far more likely is that, as EFF and others have said before, remote proctoring software is surveillance snake oil: you simply can’t replicate a classroom environment online, and attempting to do so via algorithms and video monitoring only causes harm. In this case, the harm is not only to the students, who are rightfully upset about the implications and the lack of proper channels for redress, but to the institution of the Bar itself. While examinees have been seeking help from other examinees and hiring legal counsel to defend themselves from potentially baseless claims of cheating, the California Committee of Bar Examiners has said “everything is going well” and called these results “a good thing to see” (13:30 into the video of the Committee meeting).

That is not how we see it. These flags have triggered concern for hundreds, if not thousands, of test takers, most of whom had no idea that they were flagged until recently. Many only learned about the flag after receiving an official “Chapter 6 Notice” from the Bar, which is sent when an applicant is observed (supposedly) violating exam conduct rules or seen or heard with prohibited items, like a cell phone, during the exam. In a depressingly ironic introduction to the legal system, the Bar has requested that students respond to the notices within 10 days, but it would appear that none of them have been given enough information to do so, as Chapter 6 Notices contain only a short summary of the violation. These summaries are decidedly vague: “Facial view of your eyes was not within view of the camera for a prolonged period of time”; “No audible sound was detected”; “Leaving the view of the webcam outside of scheduled breaks during a remote-proctored exam.” Examinees do not currently have access to the flagged videos themselves, and are not expected to receive access to them, or any other evidence against them, before they are required to submit a response.

It is clear that at least some of these flags stem from technical issues with ExamSoft itself. Despite the company's apparent knowledge of the problem months ago, many examinees using Lenovo laptops seem to have been flagged en masse because the software could not access their laptops' internal microphones, even though these examinees had no issues with the practice exam. Lenovo laptops are commonly purchased because they are affordable, and Lenovo is one of the more popular PC brands overall.

Still other flags are likely due to the inability of proctoring software to correctly recognize the enormous variability of examinees’ demeanors and expressions, particularly those of examinees who exhibit behaviors such as stimming. An inability to detect eyes during an exam could simply mean a test-taker was closing her eyes to think, or to rest after several hours of focused staring. Being flagged for leaving the view of the webcam could very well be a failure of the software to recognize a test-taker’s face, which is more likely to happen to Black and Brown examinees.


Apparently, some applicants who received notices had even taken the test at major law firms, where IT departments worked to ensure they would be in compliance with the exam’s rules and procedures. If, despite that, and despite mock exams, and despite hundreds of tech support calls (which reportedly caused some students to be flagged for using cell phones during the exam), one-third of examinees are still flagged, what exactly is the benefit of proctoring software?

According to the Guidelines Governing the Interpretation and Application of Chapter 6 of the Admissions Rules, “the State Bar has the burden of establishing by clear and convincing evidence that a Chapter 6 violation occurred and that the intended sanction is warranted.” So far, this has not been done, despite the Chapter 6 Notices being sent. It is unfair for students to be forced to respond to potential sanctions before they’re given adequate information about the behavior for which they may be sanctioned. Deans of several law schools have requested [pdf] that flagged examinees be allowed to view video of alleged violations before responding to notices.  

The Deans also note that appealing a flag currently means an examinee’s score will be withheld pending resolution, during which the examinee cannot reapply for a future exam. This leaves those who received a Notice at a loss for next steps, and in the dark about whether they should reapply or not. That the State Bar has created such an impossible process for responding to these algorithmic flags flies in the face of the California Supreme Court’s claim that proctoring software will not be the deciding factor in whether or not an examinee passes.

The California Bar’s mission includes supporting greater access to, and inclusion in, the legal system. Forcing thousands of examinees to defend themselves against an algorithm that claims they’ve cheated, without seeing the evidence against them, quite clearly goes against that mission. Other states have canceled the use of proctoring software for their bar exams due to the inability to ensure a “secure and reliable” experience. We implore the California Bar to rethink its plans for remotely proctored future exams, and to work carefully to offer clearer paths for examinees who have been flagged by these inadequate surveillance tools. Until then, the Bar must provide flagged examinees with a fair appeals process, including sharing the videos and any other information necessary for them to defend themselves, before requiring a written response.

Jason Kelley

The CASE Act Is Just the Beginning of the Next Copyright Battle


As we feared, the “Copyright Alternative in Small-Claims Enforcement Act” (the CASE Act), which we’ve been fighting in various forms for two years, has been included in a “must-pass” spending bill. This new legislation means Internet users could face up to $30,000 in penalties for sharing a meme or making a video, with liability determined not by neutral judges but by biased bureaucrats.

The CASE Act is supposed to be a solution to the complicated problem of online copyright infringement. In reality, it creates a system that will harm everyday users who, unlike the big players, won’t have the time and capacity to navigate this new bureaucracy. In essence, it creates a new “Copyright Claims Board” in the Copyright Office that will be empowered to adjudicate copyright infringement claims unless the accused receives a notice, recognizes what it means, and opts out, in a very specific manner and within a limited time period. The Board will be staffed by “claims officers,” not judges or juries. You can appeal their rulings, but only on a limited basis, so you may be stuck with whatever amount the claims board decides you owe. Large, well-resourced players will not be affected, as they will have the resources to track notices and simply refuse to participate. The rest of us? We’ll be on the hook.

The relief bill also included an altered version of a felony streaming bill that is, thankfully, not as destructive as it could have been. While the legislation as written is troubling, an earlier version would have been even more dangerous, targeting not only large-scale, for-profit streaming services, but everyday users as well. 

We’re continuing the fight against the CASE Act, but today brings even bigger problems. Senator Thom Tillis, who authored the felony streaming legislation, launched a “discussion draft” of the so-called Digital Copyright Act. Put simply, it is a hot mess of a bill that would rewrite decades of copyright law, give the Copyright Office (hardly a neutral player) the keys to the Internet, and drastically undermine speech and innovation in the name of policing copyright infringement. Read more of our analysis of this catastrophic bill here.

Internet users and innovators, as well as the basic legal norms that have supported online expression for decades, are under attack. With your help, we will be continuing to fight back, as we have for thirty years, into 2021 and beyond. Fair use has a posse, and we hope you’ll join it.

Jason Kelley

Year-End Challenge for Online Rights


You weathered a year that pressed the limits of endurance. But thankfully, the more we leaned on technology to stay connected, the harder EFF members fought to protect privacy, security, and free expression. This collective mission is more meaningful than ever, and you can keep us going strong.

If you donate to EFF before the end of 2020, you’ll help defend digital freedom for all—and you’ll also help EFF unlock up to $75,000 in challenge grants.

Donate to EFF

Every person makes a difference in the online rights movement, and every person counts in our Year-End Challenge. As the number of supporters grows, EFF unlocks a series of challenge grants that grow larger after each milestone. Thank you to EFF's Board of Directors for making these potential grants possible! Pitch in to reach the final goal by December 31st.

The past ten months confirmed just how urgently we need secure, inclusive access to digital tools, and EFF has been on your side every step of the way.

This year alone: EFF led the charge against the EARN IT Act in the U.S. and similar efforts in the European Union that would break encryption; EFF launched the Atlas of Surveillance, the largest searchable database of U.S. law enforcement’s spying technologies; EFF is fending off persistent attempts to dismantle Section 230 and gut free speech online; EFF is fighting for an open Internet and real broadband for all; we’re analyzing the privacy protections of tools made to track COVID-19; and we’re battling both alarming copyright legislation slipped into the must-pass omnibus spending package and a DMCA “reform” proposal that would give the Copyright Office new powers to effectively regulate much of the Internet.

After 30 years of standing with tech users against the dark impulses of governments and corporations, EFF knows the unique impact of our mission: we must protect our right to explore ideas, express ourselves, and connect to each other online for the future of civil liberties and human rights.

This has been an exceptionally hard year in many respects, but with your help we have opportunities to set things right. It's time to turn the page. Donate to EFF during our Year-End Challenge, and help us unlock additional grants when it matters most!


Join us and be the extinguisher to the dumpster fire.

EFF is a U.S. 501(c)(3) nonprofit and donations are tax-deductible as allowed by law. Questions about donating? Contact the team at membership@eff.org or call +1 415-436-9333 x212 and we’ll help you out. Thanks for your support!

Aaron Jue

This Disastrous Copyright Proposal Goes Straight to Our Naughty List

2 months 2 weeks ago

Just yesterday we saw two wretched copyright bills, the CASE Act and a felony streaming bill, slipped into law via a must-pass spending bill. But it seems some people in Congress were just getting started. Today, Senator Thom Tillis launched a "discussion draft" of the so-called Digital Copyright Act. But there's nothing to discuss: the bill, if passed, would absolutely devastate the Internet.

We’ll have a more in-depth analysis of this draft bill later, but we want to be clear about our opposition from the start: this bill is bad for Internet users, creators, and start-ups. The ones with the most to gain? Big Tech and Big Content.

This draft bill contains so many hoops and new regulations that the only Internet companies that will be able to keep up and keep on the “right” side of the law will be the Big Tech companies, who already have the resources and, frankly, the money to do so. It also creates a pile of new ways to punish users and creators in the service of Hollywood and the big record labels. Unless we stop this proposal, DMCA reform will crush huge swaths of online expression and innovation, not to mention the competition we need to develop alternatives to the largest platforms.

Some especially important things to note:

Filters, Filters Everywhere, Nor Any Drop to Drink

In several places in this bill—the requirements for “notice-and-staydown,” a duty for providers to monitor uploads, and development of “standard technical measures”—there are hidden filter requirements. The words “filter” or “copyright bots” may not appear in the text, but make no mistake: these new requirements will essentially mandate filters.

Filters not only fail to work; they actively cause harm to legal expression. They operate on a black-and-white system of whether part of one thing matches part of another, without taking context into account. So criticism, commentary, education—all of it goes out the window when a filter is in place. The only route left is not fair use but, as our whitepaper demonstrated, editing around the filter’s requirements (or refraining from speaking altogether).
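To see why that context-blindness is baked in, here is a toy sketch (ours, not drawn from any real filtering product; every name and value is invented) of the fingerprint-matching approach these systems rely on:

```python
import hashlib

# Toy illustration of content fingerprint matching: the filter compares
# fingerprints of uploaded segments against a blocklist. Nothing in this
# pipeline can ask "is this criticism, commentary, or education?"

def fingerprint(segment: bytes) -> str:
    return hashlib.sha256(segment).hexdigest()

BLOCKLIST = {fingerprint(b"30 seconds of a copyrighted song")}

def filter_decision(upload_segments) -> str:
    # Byte-level match only; context never enters the decision.
    if any(fingerprint(s) in BLOCKLIST for s in upload_segments):
        return "block"
    return "allow"

# A music review quoting those same 30 seconds is blocked
# exactly like a pirated copy would be.
print(filter_decision([b"30 seconds of a copyrighted song"]))  # block
```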

Once again, major studios, labels, and media companies will be entrenched as gatekeepers for art and expression. This is not the Internet we want or need. It’s a return to the days of Big Content domination, at the expense of small, independent creators.

“Repeat” Infringer Policies That Cut Off Internet Access

Under the Digital Millennium Copyright Act (DMCA), service providers get immunity from copyright liability for infringement committed by their users if the providers meet certain requirements. One of those is having a “repeat infringer policy,” under which the account of someone who has committed multiple acts of copyright infringement can be terminated. The details are left to the provider.

This draft changes that. It gives power to the Copyright Office, in consultation with the National Telecommunications and Information Administration, to develop a model repeat infringer policy to act as the minimum requirement for the policies of service providers.

That’s incredibly concerning. Earlier this year, the Copyright Office gave us a preview of what it thinks a reasonable repeat infringer policy looks like in its report on section 512 of the DMCA. The Copyright Office concluded that not enough people are being punished by these policies and that a single, unsubstantiated claim of infringement should be enough not only to terminate a YouTube account but to cut off Internet access. Basically, according to the Copyright Office, your ISP—likely the only one in your area, if you’re among the majority of Americans who have only one choice for high-speed Internet—should terminate an entire household’s Internet access based on claims of copyright infringement.

Internet access is vital for participating in today’s world. An office that thinks it makes sense to make cutting off that access easier should not be in charge of determining a model repeat infringer policy.

And Speaking of the Copyright Office…

This draft gives the Copyright Office a whole suite of new regulatory powers over the Internet, basically making it the Internet Cops. Given the power and influence of U.S.-based platforms, this means that the governing law of the Internet would be based not on human rights norms but on copyright restrictions.

An office that sees its constituency as copyright holders and not Internet users or the public interest should not be in charge of the Internet.

Whatever good things are in this draft—and there are a few modest improvements proposed—are vastly outweighed by how catastrophically bad the rest of it is. Do not worry too much, though; as ever, EFF will be fighting for the Internet every step of the way, just as we did during the SOPA/PIPA fight with the help of countless Internet users and a broad coalition committed to the free and open Internet. This proposal is far worse than SOPA/PIPA, so our coalition will have to be stronger and more united than ever before. But we can meet that challenge. We—the Internet—must stop this terrible legislation in its tracks.

Katharine Trendacosta

EFF to Ninth Circuit: Don’t Grant Immunity to Notorious Spyware Company

2 months 2 weeks ago

EFF filed a brief in the U.S. Court of Appeals for the Ninth Circuit in support of WhatsApp’s lawsuit against notorious Israeli spyware company NSO Group. WhatsApp discovered last year that NSO Group had breached its systems and enabled NSO Group’s government clients to hack into the mobile phones of approximately 1,400 users in April and May 2019. A federal judge allowed the case to move forward earlier this year and NSO Group appealed.

NSO Group sells its “Pegasus” spyware, which enables surreptitious digital surveillance of mobile devices, exclusively to government agencies across the globe. The company is arguing that it should therefore be granted “foreign sovereign immunity,” a longstanding legal doctrine that says that foreign governments should generally have immunity from suit in U.S. courts for reasons that focus on preserving stability in international relations, also called “comity.”

EFF supports WhatsApp’s arguments that foreign sovereign immunity is inappropriate for private corporations, and submitted our brief to further argue that no corporation, whether foreign or American, should be granted immunity for contracting with foreign governments, especially when those governments use a company’s powerful surveillance technology to violate human rights.

We explained that surreptitious surveillance tools not only invade privacy and chill freedom of speech and association, they can also facilitate physical harm—from unlawful arrest to summary execution.

EFF’s brief provided examples of how NSO Group, via the WhatsApp hack, helped its client governments target members of civil society, including Rwandan political dissidents and a journalist critical of Saudi Arabia. We also highlighted other examples of NSO Group’s complicity in human rights abuses, including many perpetrated by the Mexican government against journalists and the wife of a murdered journalist.

Corporate complicity in human rights abuses is a widespread and ongoing problem, and the Ninth Circuit should not expand the ability of technology companies like NSO Group to avoid accountability for facilitating human rights abuses by foreign governments.

Sophia Cope

The U.S. Government Is Targeting Cryptocurrency to Expand the Reach of Its Financial Surveillance 

2 months 2 weeks ago

One of the most important aspects of cryptocurrencies from a civil liberties perspective is that they can provide privacy protections for their users. But EFF is concerned that the U.S. government has been increasingly taking steps to undermine the anonymity of cryptocurrency transactions and importing the widespread financial surveillance of the traditional banking system to cryptocurrencies.  

On Friday, the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) announced a proposed regulation that would require money service businesses (which include, for example, cryptocurrency exchanges) to collect identity data about people who transact with their customers using self-hosted cryptocurrency wallets or foreign exchanges. The proposed regulation would require them to keep that data and turn it over to the government in some circumstances (such as when the dollar amount of transactions in a day exceeds a certain threshold). 
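To make those mechanics concrete, here is a hypothetical sketch of the kind of daily-aggregation check such a rule implies. The threshold figure, names, and wallet string below are invented for illustration; the actual requirements would be whatever the final rule specifies.

```python
from collections import defaultdict
from datetime import date

# Hypothetical sketch of the daily-aggregation logic implied by the proposal.
# The real thresholds, data fields, and reporting duties are set by FinCEN,
# not by this illustration.
REPORT_THRESHOLD_USD = 10_000  # assumed figure for illustration only

daily_totals = defaultdict(float)  # (customer, counterparty_wallet, day) -> USD

def record_transaction(customer, counterparty_wallet, usd_amount, day=None):
    """Track a customer's daily total with a self-hosted wallet and flag it
    once the running total crosses the reporting threshold."""
    day = day or date.today()
    key = (customer, counterparty_wallet, day)
    daily_totals[key] += usd_amount
    if daily_totals[key] > REPORT_THRESHOLD_USD:
        return "threshold crossed: collect and retain counterparty identity data"
    return "below threshold"

print(record_transaction("alice", "bc1q-example-wallet", 9_000.0))
print(record_transaction("alice", "bc1q-example-wallet", 2_000.0))
```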

The proposal appears designed to be a midnight regulation pushed through before the end of the current presidential administration, as its 15-day comment period is unusually short and coincides with the winter holiday. The regulation’s authors write that this abbreviated comment period is required to deal with the “threats to United States national interests” posed by these technologies, but they provide no factual basis for this claim. 

Although EFF is still reviewing the proposal, we have several initial concerns. First, the regulation would mean that people who store cryptocurrency in their own wallets (rather than using a professional service) would effectively be unable to transact anonymously with people who store their cryptocurrency with a money service business. The regulation will likely chill the ability to use self-hosted wallets to transact with the privacy of cash.

Second, for some cryptocurrencies like Bitcoin, transaction data—including users’ Bitcoin addresses—is permanently recorded on a public blockchain. That means that if you know the name of the user associated with a particular Bitcoin address, you can glean information about all of their Bitcoin transactions that use that address. Thus, the proposed regulation’s requirement that money service businesses collect identifying information associated with wallet addresses means that the government may have access to a massive amount of data beyond just what the regulation purports to cover.
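Just how little effort that linkage takes is worth spelling out. The sketch below assumes the URL layout of Blockstream's public Esplora block explorer API; any public explorer exposes equivalent data.

```python
import json
import urllib.request

# A minimal sketch of how public the Bitcoin ledger is: anyone who links a
# name to an address can pull that address's transaction history from a
# public block explorer. Endpoint layout assumes Blockstream's Esplora API.
def transactions_for(address: str) -> list:
    url = f"https://blockstream.info/api/address/{address}/txs"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Each returned transaction exposes counterparty addresses, amounts, and
# timing, so a single name-to-address link can unravel a financial history.
```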

Third, the regulation could hamper broader adoption of self-hosted wallets and technologies that rely on them, or at least make it difficult to integrate these technologies with intermediaries like exchanges. The regulations make it significantly more difficult for self-hosted wallet users to seamlessly interact with other users who have wallets provided by a service subject to the regulations. Under the proposed rules, these hosted wallet services would have to collect certain information about self-hosted wallet users who transact with their customers in some circumstances. That may complicate certain automated transactions, such as smart contracts, or be difficult to implement in scenarios involving decentralized exchanges. Despite the name, “wallets” are not just personal stores of currency: they are a way for individuals and computing systems to hold and dispense money without relying on institutions. Adding friction to these types of transactions undermines the technology’s importance in giving individuals control over their finances. It could also chill the ability of innovators to create decentralized financial platforms with a wide range of lawful uses.

Fourth, although the proposed rules purport to simply apply pre-existing regulations involving cash transactions to cryptocurrencies, they ignore that these digital financial tools exist in part to afford financial privacy and anonymity equal to and perhaps beyond that of traditional cash. In this respect, the proposed regulations are part of a larger troubling trend of the U.S. government extending the financial surveillance of the traditional banking system to cryptocurrencies. This proposal comes just two months after the Department of Justice published its Cryptocurrency Enforcement Framework, which made it abundantly clear that the DOJ wants to undermine the ability of cryptocurrency users to transact anonymously. 

The Framework says, and this regulation repeats, that merely using privacy coins like Zcash and Monero is “indicative of possible criminal conduct.” The Framework also says that people operating mixers and tumblers, which make cryptocurrency transactions harder to trace, can be criminally liable for money laundering. Financial regulators, much like the NSA, apparently suspect that anyone attempting to protect their financial privacy is doing something illegal.

That Framework also targeted decentralized exchanges. Decentralized exchanges are typically open-source software allowing people to exchange cryptocurrency directly with each other, with no other party involved. The DOJ said that those projects have to register with FinCEN and have to “collect and maintain customer and transactional data” or else be subject to civil and criminal penalties.  

Other concerning developments this year include the Fifth Circuit’s decision that law enforcement does not need to get a warrant in order to obtain financial transaction data from cryptocurrency exchanges, and FinCEN’s proposal to lower the threshold at which institutions must collect and store transaction data from $3,000 to $250 (in cryptocurrency or fiat currency) to satisfy “Travel Rule” obligations.

These developments are an assault on the ability to transact privately online and an attempt to extend the widespread financial surveillance of the traditional banking system to cryptocurrency. Financial records contain a trove of sensitive information about people’s personal lives, beliefs, and affiliations. Nonetheless, courts and lawmakers have allowed widespread warrantless financial surveillance in the traditional banking system. The Bank Secrecy Act requires banks to maintain financial records because of their usefulness in investigations, and in 1976, the Supreme Court (in U.S. v. Miller) allowed the government to obtain bank customers’ data without a warrant. EFF is concerned about the U.S. government’s attempts to expand this surveillance to encompass cryptocurrency transactions. 

Cryptocurrency is important for civil liberties because—like cash—it allows for anonymous transactions. Photos from the Hong Kong protests showed long lines at subway stations as protestors waited to purchase tickets with cash so that their electronic purchases would not place them at the scene of the protest. These photos underscore that a cashless society is a surveillance society—and the importance of importing the anonymity of cash to the digital world.

Cryptocurrency is also important because it is censorship resistant. Many traditional financial intermediaries have engaged in arbitrary financial censorship, cutting off access to financial institutions for adult social networks, adult booksellers, and controversial websites, even when these services have not violated the law.

U.S. regulators’ recent actions, including this new proposed rulemaking, threaten to undermine the privacy and civil liberties protections afforded by peer-to-peer technologies. The rulemaking requests comments from the public by January 4, 2021. EFF hopes that the civil liberties community and individuals who want to protect their financial privacy will submit comments opposing this proposed rule, despite—indeed, partly because of—its abrupt deadline.

Marta Belcher

The Slow-Motion Tragedy of Ola Bini's Trial

2 months 2 weeks ago

EFF has been tracking the arrest, detention, and subsequent investigation of Ola Bini since its beginnings over 18 months ago. Bini, a Swedish-born open-source developer, was arrested in Ecuador's Quito Airport in a flurry of media attention in April 2019. He was held without trial for ten weeks while prosecutors seized and pored over his technology, his business, and his private communications, looking for evidence connecting him to an alleged conspiracy to destabilize the Ecuadorean government.

Now, after months of delay, an Ecuadorean pre-trial judge has declined to dismiss the case, despite Bini's defense documenting over a hundred procedural and civil liberties violations made in the course of the investigation. EFF was one of the many human rights organizations, including Amnesty International, that were refused permission by the judge to act as observers at Wednesday's hearing.

Bini was seized by police at Quito Airport shortly after Ecuador's Interior Minister, Maria Paula Romo, held a press conference warning the country of an imminent cyber-attack. Romo spoke hours after the government had ejected Julian Assange from Ecuador's London Embassy, and claimed that a group of Russians and Wikileaks-connected hackers were in the country, planning an attack in retaliation for the eviction. No further details of this sabotage plot were ever revealed, nor has it been explained how the Minister knew of the group's plans in advance. Instead, only Bini was detained, imprisoned, and held for 71 days without charge until a provincial court, facing a habeas corpus order, declared his imprisonment unlawful and released him to his friends and family. (Romo was dismissed as minister last month for ordering the use of tear gas against anti-government protestors.)

EFF visited Ecuador to investigate complaints of injustice in the case in August 2019. We concluded that the Bini affair bore the sadly familiar hallmarks of a politicized "hacker panic," in which media depictions of hacking super-criminals and overbroad cyber-crime laws combine to encourage unjust prosecutions when the political and social atmosphere demands it. (EFF's founding in 1990 was in part due to a notorious, and similar, case pursued in the United States by the Secret Service, documented in Bruce Sterling's Hacker Crackdown.)

While the Ecuadorian government continues to portray him to journalists as a Wikileaks-employed malicious cybercriminal, his reputation outside the prosecution is very different. An advocate for a secure and open Internet and an expert in computer languages, Bini is primarily known for his non-profit work on the secure communications protocol OTR and his contributions to the Java implementation of the Ruby programming language. He has also contributed to EFF's Certbot project, which provides easy-to-use security for millions of websites. He moved to Ecuador during his employment at the global consultancy ThoughtWorks, which has an office in the country's capital.

After several months of poring over his devices, prosecutors have been able to provide only one piece of supposedly incriminating data: a copy of a screenshot, taken by Bini himself and sent to a colleague, that shows the telnet login screen of a router. From the context, it's clear that Bini was expressing surprise that the telco router was not firewalled, and was seeking to draw attention to this potential security issue. Bini did not go further than the login prompt in his investigation of the open machine.

Defense and prosecution will now make arguments on the admissibility of this and other non-technical evidence, and the judge will determine if and when Bini's case will progress to a full trial in the New Year.

We, once again, urge Ecuador's judiciary to impartially consider the shaky grounds for this case, and divorce their deliberations from the politicized framing that has surrounded this prosecution from the start.

Danny O'Brien

Facebook’s Laughable Campaign Against Apple Is Really Against Users and Small Businesses

2 months 2 weeks ago

Facebook has recently launched a campaign touting itself as the protector of small businesses. This is a laughable attempt from Facebook to distract you from its poor track record of anticompetitive behavior and privacy issues as it tries to derail pro-privacy changes from Apple that are bad for Facebook’s business.

Facebook’s campaign is targeting a new AppTrackingTransparency feature on iPhones that will require apps to request permission from users before tracking them across other apps and websites or sharing their information with third parties. Requiring trackers to request your consent before stalking you across the Internet should be an obvious baseline, and we applaud Apple for this change. But Facebook, having built a massive empire around tracking everything you do and letting applications sell and share your data across a shady set of third-party companies, would like users and policymakers to believe otherwise.

Make no mistake: this latest campaign from Facebook is one more direct attack against our privacy and, despite its slick packaging, it’s also an attack against other businesses, both large and small.

Apple’s Change

Apple has deployed AppTrackingTransparency for iOS 14, iPadOS 14, and tvOS 14. This kind of consent interface is not new, and it’s similar to the prompts for other permissions in iOS: for example, when an app requests access to your microphone, camera, or location. It’s normal for apps to be required to request the user’s permission to access specific device functions or data, and third-party tracking should be no different. (Note one important limitation of AppTrackingTransparency, however: this change does not affect first-party tracking and data collection by the app itself.)

Allowing users to choose what third-party tracking they will or will not tolerate, and forcing apps to request those permissions, gives users more knowledge of what apps are doing, helps protect users from abuse, and allows them to make the best decisions for themselves. You can set your AppTrackingTransparency preferences app by app, or globally for all apps.

This new feature from Apple is one more step in the right direction, reducing developer abuse by giving users knowledge and control over their own personal data.

Small Business and the Ad Industry

So why the outcry from Facebook? Facebook claims that this change from Apple will hurt small businesses who benefit from access to targeted advertising services, but Facebook is not telling you the whole story. This is really about who benefits from the normalization of surveillance-powered advertising (hint: it’s not users or small businesses), and what Facebook stands to lose if its users learn more about exactly what it and other data brokers are up to behind the scenes.

For many years now, the behavioral advertising industry has promoted the notion that behavioral, targeted ads are better. These are the ads that track you everywhere you go online, with sometimes eerily accurate results. This is in contrast to “contextual” or non-targeted ads, which are based not on your personal information but simply on the content of the webpage you are visiting at the time. Many app developers appear to believe the targeted advertising hype. But are targeted ads better? And for whom are they actually better?

In reality, a number of studies have shown that most of the money made from targeted advertising does not reach the creators of the content—the app developers and the publishers who host it. Instead, the majority of any extra money earned by targeted ads ends up in the pockets of data brokers. Some names are very well-known, like Facebook and Google, but many more are shady companies that most users have never even heard of.

Bottom line: the Association of National Advertisers estimates that, when the “ad tech tax” is taken into account, publishers take home only between 30 and 40 cents of every dollar spent on ads. The rest goes to third-party data brokers who keep the lights on by exploiting your information, not to small businesses trying to work within a broken system to reach their customers.

The reality is that only a handful of companies control the online advertising market, and everyone else is at their mercy. Small businesses cannot compete with large ad distribution networks on their own. And because the ad industry has promoted the fantasy that targeted advertising is superior to other methods of reaching customers, anything else inherently commands less value on ad markets. That not only means ads are worth less when they aren’t targeting users; it also drives the flow of money away from innovation that could otherwise bring us advertising methods that don’t involve invasive profiling and targeting.

Facebook touts itself in this case as protecting small businesses, but that couldn’t be further from the truth. Facebook has locked them into a situation in which they are forced to be sneaky and adversarial toward their own customers. The answer cannot be to defend that broken system at the cost of their own users’ privacy and control. 

To begin with, we shouldn’t allow companies to violate our fundamental human rights, even if it’s better for their bottom line. Stripped of its shiny PR language, that is what Facebook is complaining about. If businesses want our attention and money, they need to do so by respecting our rights, including our right to privacy and control over our data. 

Second, we recognize that businesses are in a bind because of Facebook’s dominance and the overpromises of the ad industry. So if we want small businesses to be able to compete, we need to make it a level playing field. If one app needs to ask for permission, all of them should, including Facebook itself. This points the way, again, to the need for a baseline privacy law that protects and empowers users. We hope app developers will join us in pushing for a privacy law so that they can all compete on the same grounds, instead of the worst privacy violators having (or being perceived as having) a leg up.  

Overall, AppTrackingTransparency is a great step forward for Apple. When a company does the right thing for its users, EFF will stand with it, just as we will come down hard on companies that do the wrong thing. Here, Apple is right and Facebook is wrong. Next step: Android should follow with the same protections. Your move, Google.

Andrés Arrieta

Victory! Federal Appeals Court Confirms FOIA Requests Requiring a Database Query Are Allowed Under the Law

2 months 2 weeks ago

At a time when the federal government is collecting and creating massive amounts of digital data that can implicate people’s privacy and free speech rights, it is crucial that the public know what the government is doing with that information. A ruling from a federal appellate court earlier this month ensures that the Freedom of Information Act, one of the most important legal tools citizens and reporters have for furthering government transparency, allows the public to understand the government’s use of digital data.

The ruling by the U.S. Court of Appeals for the Ninth Circuit came in a case brought by the Center for Investigative Reporting against the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), seeking aggregate data about the number of weapons used in crimes that could be traced back to an original purchase by law enforcement. The district court below ruled that ATF did not have to produce the data because the search query of the agency’s data would have amounted to creating a new record, something that FOIA prohibits. 

The lower court’s ruling was dangerous because it had the potential to broadly restrict FOIA requesters from obtaining digital data based largely on a misunderstanding of how digital data is stored and what occurs when people query databases. EFF filed a friend-of-the-court brief to point out how the initial decision could “frustrate access to vast amounts of government digital data in which the public has a legitimate interest.” EFF’s brief argued further that the ruling was out of touch with the reality that the government is “collecting and centralizing extensive swaths of personally identifying data on members of the public, including extremely sensitive information like biometrics and expressive activity on social media.”

Another friend-of-the-court brief written by Harvard Law School’s Cyberlaw Clinic on behalf of data journalists and media organizations pointedly described how database queries that produce aggregate records in no way resemble creating a new physical or digital record. Instead, because a query is simply an instruction to the database “to select a specific subset of information from a database and return it in a particular arrangement,” the result is not a new record but rather just a representation of responsive data in the underlying database.
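A small worked example makes the distinction plain. In the Python sketch below (with an invented schema standing in for the real ATF database), an aggregate query returns a count derived from existing rows and leaves the underlying database untouched:

```python
import sqlite3

# Toy illustration of the point in the briefs: an aggregate query instructs
# the database to return a particular arrangement of existing data; it writes
# nothing and creates no new record. (Schema invented for the example.)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE traces (weapon_id INTEGER, purchaser_type TEXT)")
db.executemany("INSERT INTO traces VALUES (?, ?)",
               [(1, "law_enforcement"), (2, "retail"), (3, "law_enforcement")])

# The query selects and counts existing rows; the table is unchanged after
# it runs.
count = db.execute(
    "SELECT COUNT(*) FROM traces WHERE purchaser_type = 'law_enforcement'"
).fetchone()[0]
print(count)  # 2
```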

The Ninth Circuit recognized the broader concerns at issue in this case, writing, “as CIR and amici recognize, whether a search query of an existing database entails the creation of a ‘new record’ is a question of great importance in the digital age.” 

The Ninth Circuit’s opinion quotes EFF’s and the Cyberlaw Clinic’s briefs throughout. For example, the court quotes EFF’s argument that FOIA requests seeking access to aggregate data are essential to balance the public’s interest in understanding how the government uses biometric and other personal data it collects without disclosing the underlying data, which is often private or otherwise intrusive. The court wrote:

“Moreover, as in this case, ‘[r]eleasing statistical aggregate data from government Databases’ may sometimes prove the ‘only[] way to comply with FOIA’s mandate while properly balancing the public’s and the government’s interests in safeguarding sensitive information.’”

In rejecting the district court’s interpretation of FOIA, the Ninth Circuit concluded that “if running a search across these databases necessarily amounts to the creation of a new record, much government information will become forever inaccessible under FOIA, a result plainly contrary to Congress’s purpose in enacting FOIA.”

We are grateful that the Ninth Circuit understood both the underlying technology at issue and the stakes of this case. By recognizing that FOIA allows requesters to seek aggregate data from federal agencies, the court ensured that the transparency law remains an important tool for people to learn about government use and abuse of the data it collects.

Aaron Mackey

A Decade After the Arab Spring, Platforms Have Turned Their Backs on Critical Voices in the Middle East and North Africa

2 months 3 weeks ago

Many in the U.S. have spent 2020 debating the problems of content moderation on social media platforms, misinformation and disinformation, and the perceived censorship of political views. But globally, this issue has been in the spotlight for a decade. 

This year is the tenth anniversary of what became known as the "Arab Spring", in which activists and citizens across the Middle East and North Africa (MENA) used social media to document the conditions in which they lived, to push for political change and social justice, and to draw the world's attention to their movement. For many, it was the first time they had seen how the Internet could play a role in pushing for human rights across the world. Emerging social media platforms like Facebook, Twitter, and YouTube all basked in the reflected glory of press coverage that centered their part in the protests, often to the exclusion of those who were actually on the streets.

The years since the uprisings have failed to live up to the optimism of the time. Offline, the authoritarian backlash against the democratic protests has meant that many of those who fought for justice a decade ago are still fighting now. And rather than help that fight, the platform policies and content moderation procedures of the tech giants now too often lead to the silencing and erasure of critical voices from across the region. Arbitrary and non-transparent account suspension and removal of political and dissenting speech have become so frequent and systematic in the region that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making.

Along with dozens of other organizations, today EFF has signed an open letter to Facebook, Twitter, and YouTube demanding the companies stop silencing critical voices in the MENA region. The letter asks for several concrete measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination.
  • Invest in the regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks.
  • Pay special attention to cases arising from war and conflict zones.
  • Preserve restricted content related to cases arising from war and conflict zones.
  • Go beyond public apologies for technical failures, and provide greater transparency, notice, and offer meaningful and timely appeals for users by implementing the Santa Clara Principles on Transparency and Accountability in Content Moderation.

Content moderation policies are not only critical to ensuring robust political debate. They are key to expanding and protecting human rights.  Ten years out from those powerful protests, it's clear that authoritarian and repressive regimes will do everything in their power to stop free and open expression. Platforms have an obligation to note and act on the effects content moderation has on oppressed communities, in MENA and elsewhere.

In 2012, Mark Zuckerberg, CEO and Founder of Facebook, wrote:

By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.

Instead, governments around the world have chosen authoritarianism, and platforms have contributed to the repression. It's time for that to end.

Read the full letter demanding that Facebook, Twitter, and YouTube stop silencing critical voices from the Middle East and North Africa, reproduced below.

17 December 2020

Open Letter to Facebook, Twitter, and YouTube: Stop silencing critical voices from the Middle East and North Africa

Ten years ago today, 26-year-old Tunisian street vendor Mohamed Bouazizi set himself on fire in protest over injustice and state marginalization, igniting mass uprisings in Tunisia, Egypt, and other countries across the Middle East and North Africa. 

As we mark the 10th anniversary of the Arab Spring, we, the undersigned activists, journalists, and human rights organizations, have come together to voice our frustration and dismay at how platform policies and content moderation procedures all too often lead to the silencing and erasure of critical voices from marginalized and oppressed communities across the Middle East and North Africa.

The Arab Spring is historic for many reasons, and one of its outstanding legacies is how activists and citizens have used social media to push for political change and social justice, cementing the internet as an essential enabler of human rights in the digital age.   

Social media companies boast of the role they play in connecting people. As Mark Zuckerberg famously wrote in his 2012 Founder’s Letter:

“By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.”

Zuckerberg’s prediction was wrong. Instead, more governments around the world have chosen authoritarianism, and platforms have contributed to their repression by making deals with oppressive heads of state; opening doors to dictators; and censoring key activists, journalists, and other changemakers throughout the Middle East and North Africa, sometimes at the behest of other governments:

  • Tunisia: In June 2020, Facebook permanently disabled more than 60 accounts of Tunisian activists, journalists, and musicians on scant evidence. While many were reinstated, thanks to the quick reaction from civil society groups, accounts of Tunisian artists and musicians still have not been restored. We sent a coalition letter to Facebook on the matter but we didn’t receive a public response.
  • Syria: In early 2020, Syrian activists launched a campaign to denounce Facebook’s decision to take down/disable thousands of anti-Assad accounts and pages that documented war crimes since 2011, under the pretext of removing terrorist content. Despite the appeal, a number of those accounts remain suspended. Similarly, Syrians have documented how YouTube is literally erasing their history.
  • Palestine: Palestinian activists and social media users have been campaigning since 2016 to raise awareness around social media companies’ censorial practices. In May 2020, at least 52 Facebook accounts of Palestinian activists and journalists were suspended, and more have since been restricted. Twitter suspended the account of a verified media agency, Quds News Network, reportedly on suspicion that the agency was linked to terrorist groups. Requests to Twitter to look into the matter have gone unanswered. Palestinian social media users have also expressed concern numerous times about discriminatory platform policies.
  • Egypt: In early October 2019, Twitter suspended en masse the accounts of Egyptian dissidents living in Egypt and across the diaspora, directly following the eruption of anti-Sisi protests in Egypt. Twitter suspended the account of one activist with over 350,000 followers in December 2017, and the account has yet to be restored. The same activist’s Facebook account was also suspended in November 2017 and restored only after international intervention. YouTube had removed his account earlier, in 2007.

Examples such as these are far too numerous, and they contribute to the widely shared perception among activists and users in MENA and the Global South that these platforms do not care about them, and often fail to protect human rights defenders when concerns are raised.  

Arbitrary and non-transparent account suspension and removal of political and dissenting speech have become so frequent and systematic that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making. 

While Facebook and Twitter can be swift in responding to public outcry from activists or private advocacy by human rights organizations (particularly in the United States and Europe), in most cases responses to advocates in the MENA region leave much to be desired. End-users are frequently not informed of which rule they violated, and are not provided a means to appeal to a human moderator. 

Remedy and redress should not be a privilege reserved for those who have access to power or can make their voices heard. The status quo cannot continue. 

The MENA region has one of the world’s worst records on freedom of expression, and social media remains critical for helping people connect, organize, and document human rights violations and abuses. 

We urge you to not be complicit in censorship and erasure of oppressed communities’ narratives and histories, and we ask you to implement the following measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination. Actively engage with local users, activists, human rights experts, academics, and civil society from the MENA region to review grievances. Regional political, social, cultural context(s) and nuances must be factored in when implementing, developing, and revising policies, products and services. 
  • Invest in the necessary local and regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks in the MENA region.  A bare minimum would be to hire content moderators who understand the various and diverse dialects and spoken Arabic in the twenty-two Arab states. Those moderators should be provided with the support they need to do their job safely, healthily, and in consultation with their peers, including senior management.
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities. For example, documentation of human rights abuses and violations is a legitimate activity distinct from disseminating or glorifying terrorist or extremist content. As noted in a recent letter to the Global Internet Forum to Counter Terrorism, more transparency is needed regarding definitions and moderation of terrorist and violent extremist content (TVEC).
  • Preserve restricted content related to cases arising from war and conflict zones that Facebook makes unavailable, as it could serve as evidence for victims and organizations seeking to hold perpetrators accountable. Ensure that such content is made available to international and national judicial authorities without undue delay.
  • Public apologies for technical errors are not sufficient when erroneous content moderation decisions are not changed. Companies must provide greater transparency, notice, and offer meaningful and timely appeals for users. The Santa Clara Principles on Transparency and Accountability in Content Moderation, which Facebook, Twitter, and YouTube endorsed in 2019, offer a baseline set of guidelines that must be immediately implemented. 

Signed,

Access Now
Arabic Network for Human Rights Information — ANHRI
Article 19
Association for Progressive Communications — APC
Association Tunisienne de Prévention Positive
Avaaz
Cairo Institute for Human Rights Studies (CIHRS)
The Computational Propaganda Project
Daaarb — News — website
Egyptian Initiative for Personal Rights
Electronic Frontier Foundation
Euro-Mediterranean Human Rights Monitor
Global Voices
Gulf Centre for Human Rights (GCHR)
Hossam el-Hamalawy, journalist and member of the Egyptian Revolutionary Socialists Organization
Humena for Human Rights and Civic Engagement
IFEX
Ilam- Media Center For Arab Palestinians In Israel
ImpACT International for Human Rights Policies
Initiative Mawjoudin pour l’égalité
Iraqi Network for Social Media - INSMnetwork
I WATCH Organisation (Transparency International — Tunisia)
Khaled Elbalshy - Daaarb website - Editor in Chief
Mahmoud Ghazayel, Independent
Marlena Wisniak, European Center for Not-for-Profit Law
Masaar — Technology and Law Community
Michael Karanicolas, Wikimedia/Yale Law School Initiative on Intermediaries and Information
Mohamed Suliman, Internet activist
My.Kali magazine — Middle East and North Africa
Palestine Digital Rights Coalition (PDRC)
The Palestine Institute for Public Diplomacy
Pen Iraq
Quds News Network
Ranking Digital Rights
Rima Sghaier, Independent
Sada Social Center
Skyline International for Human Rights
SMEX
Syrian Center for Media and Freedom of Expression (SCM)
The Tahrir Institute for Middle East Policy (TIMEP)
Taraaz
Temi Lasade-Anderson, Digital Action
WITNESS
Vigilance Association for Democracy and the Civic State — Tunisia
7amleh – The Arab Center for the Advancement of Social Media

Jason Kelley

Doxxing: Tips To Protect Yourself Online & How to Minimize Harm

2 months 3 weeks ago

“Doxxing” is an eerie, cyber-sounding term that gets thrown around more and more these days, but what exactly does it mean? Simply put, it’s when a person or other entity exposes information about you, publicly available or secret, for the purpose of causing harm. It might be information you intended to keep secret, like your personal address or legal name. Often it is publicly available data that can be readily found online with just a bit of digging, like your phone number or workplace address.

By itself, being doxxed can be dangerous, as it may reveal information about you that could harm you if it were publicly known. More often it is used to escalate to greater harm such as mass online harassment, in-person violence, or targeting other members of your community. Your political beliefs or status as a member of a marginalized community can amplify these threats.

Although you aren’t always faced with the option, taking control of your data and considering precautionary steps to advance your personal security are best done before you’re threatened with a potential doxxing. Privacy does not work retroactively. A great place to start is to develop your personal threat model. After you’ve done that, you can take specific measures to advance your data hygiene.

First Steps To Protect Yourself

First: Take a look at the information that is already publicly available about you online. This is as simple as opening up a search engine, entering your name/nickname/handle/avatar, and seeing what comes up. It’s common to be overwhelmed by what you find: there can be much more data about you readily available online than you expected, accessible to anyone who cares to do a little digging. Remind yourself that this is normal, and that you are on your way to reducing that information and taking the necessary steps to protect yourself. Take note of any pieces that strike you as high priority to deal with. Keep track both of what the information is and where you found it.

Second: Identify who you can trust with your secrets. Friends, family, chosen family? If you are fearful of being doxxed, you’ll want to speak with these people directly. Not only because they can be implicated in a doxxing incident, but also because there is strength in your community. These trusted folks can help you plan how to prevent an incident from happening, and also what to do in the event of one (more on that below). Keep in mind that this list will change over time: it’s natural for relationships to ebb and flow, and so will the amount of trust you place in them. Set a reminder for yourself to check in on this list once a year or so.

Set some data-sharing ground rules for your community, such as asking for permission before taking or posting photos, refraining from geotagging those photos, or using code words whose real meaning only trusted people know. These are all examples of steps you can take to strengthen your social community’s security posture.

Third: Read up on the policies your online accounts have. Most major social media platforms and other popular web apps have policies and procedures in place that protect users against doxxing and allow them to report any violations. Review that information and note how to get in contact with their support teams.

With these non-technical steps out of the way, you can begin to think about the more technical steps you can take: both as precautionary steps ahead of time, and if you have to respond to a doxxing incident.

Minimizing Your Publicly Available Data

The most obvious protective measure you can take to prevent being doxxed is to reduce the amount of material there is about you online.

Data brokers are companies that subsist entirely off collecting this data, repackaging it, and selling it to the highest bidders. The information they gather often comes from public records and online trackers, augmented by commercial transactional data. It is a parasitic, rotten industry that survives by invading the privacy of everyday people online. Due to public pressure, many of these companies offer ways for users to opt out of their data being shared. We recommend starting with White Pages, Instant Check Mate, Acxiom, Intelius, and Spokeo. Also take a look at these other helpful guides on how to remove yourself from people-finding services and data brokerage companies.

For a more thorough—though more costly—approach, several professional services like DeleteMe or Privacy Duck claim to help minimize the data available about you online from these data brokers and similar sources. Beware that data brokers work by continually scraping public records and repopulating their data, and so services like these can require ongoing subscriptions to be most effective. They also cannot (and do not) promise comprehensive data minimization across all possible sources. Users should conduct their own research and consider whether these kinds of services can successfully target the data sources you are most concerned about.

Safe Browsing

Sometimes software behaving as expected can lead to our secrets ending up in places they shouldn’t. For example, suggested friends lists can sometimes “out” you to people despite your having multiple accounts for the very purpose of keeping parts of your life separate.

Most other common examples are the fault of user tracking, which you have the power to minimize. If that’s a concern you want to address, here are some steps you can take:

Check how “fingerprintable” your browser is with our tool Cover Your Tracks. This will give you an idea of how capable those very trackers are of uniquely identifying you and your actions online. We also recommend adding our install-and-forget tracker blocking tool, Privacy Badger, which is designed to silently halt those trackers and let you browse in peace.
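To give a feel for what “fingerprintable” means, here is a toy illustration of the general concept (not how Cover Your Tracks works internally; all attribute values are invented examples):

```python
import hashlib
import json

# Toy illustration of browser fingerprinting: many individually mundane
# attributes, combined, can uniquely identify a browser without any cookies.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080x24",
    "timezone": "America/Los_Angeles",
    "fonts": ["DejaVu Sans", "Liberation Mono"],
    "language": "en-US",
}

# A stable hash of the combined attributes acts as an identifier that
# follows this particular browser configuration from site to site.
fingerprint = hashlib.sha256(
    json.dumps(attributes, sort_keys=True).encode()
).hexdigest()
print(fingerprint[:16])
```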

From there, you can begin to assess the rest of your personal data hygiene online. To protect your account security, are you using strong unique passwords and multi-factor authentication on each of your accounts? Both of these steps will do wonders in preventing your account from being maliciously hijacked.

As you consider each of the accounts you use online, we highly recommend taking a moment with each to look at what information you share. Do you share the bare minimum needed to keep using their software, or are you giving more than what’s necessary? Instead of listing your mother’s maiden name, prom date, or pet’s name in response to security questions, consider inputting a random passphrase instead and keeping it in your password manager. And instead of handing over your phone number—a common bit of information behind account compromise and unwanted identification—consider what the phone number will be used for, whether Facebook or Twitter or whomever actually needs it, and whether you can replace your mobile number with something less individually identifying, like a Google Voice number. Remaining mindful of what information you’re sharing, as well as when and where you’re sharing it, will do wonders for your data hygiene.
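As a minimal sketch of that security-question suggestion, here is one way to generate a random answer in Python. The tiny wordlist is a stand-in; in practice, use a large list such as EFF's dice-generated wordlists.

```python
import secrets

# Generate a random passphrase to use as a security-question answer, then
# store it in your password manager instead of using a real fact.
WORDS = ["correct", "horse", "battery", "staple",
         "orchid", "granite", "lantern", "mosaic"]  # stand-in wordlist

def random_answer(num_words: int = 5) -> str:
    # secrets.choice draws from the OS's cryptographically secure RNG.
    return " ".join(secrets.choice(WORDS) for _ in range(num_words))

print(random_answer())  # e.g. "granite orchid staple lantern mosaic"
```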

Incident Response Plan

Being doxxed is a stressful, scary thing to endure. In the event of it happening, the last thing you’ll want to be doing is scrambling at the last minute to figure out how to respond. Having a ready-made plan in place will do wonders for you. Here are some suggestions on where to start:

Decide which accounts to lock or temporarily deactivate if you’re being doxxed. Make a list. It will help to walk through the process of deactivating/locking the account for each so that you can take note of any special steps that they may require.

Have a spreadsheet template handy to record incidents as they happen. You’ll want to have fields ready to mark when something took place, who it appears to be from, where it’s happening, and any details about what happened. Making this log will be incredibly useful: it can help you identify where the weakness is in your personal data security, as well as provide a detailed log of events that you could pass along to others.
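As a starting point, here is one way to set up such a template; the field names are suggestions to adapt to your situation, and Python's standard csv module writes a spreadsheet-compatible file:

```python
import csv

# A starter template for the incident log described above; fill it out as
# events happen so details are preserved for later.
FIELDS = ["timestamp", "apparent_source", "platform_or_location",
          "what_happened", "evidence_link_or_screenshot"]

with open("doxxing_incident_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({
        "timestamp": "2020-12-20T14:32",
        "apparent_source": "unknown account @example",
        "platform_or_location": "Twitter",
        "what_happened": "posted my home address",
        "evidence_link_or_screenshot": "screenshots/2020-12-20.png",
    })
```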

Care for Yourself, Care for Others

Finally, you’ll want to include people from your trusted networks to help you in this process. Knowing you’ve got friends to support you if you’re being doxxed will not only ease the burden of stress and labor, but can also alert them to how they might be implicated. We recommend going over this whole process with a trusted friend. Knowing they’re available to take over during a crisis will ease your mind. Reciprocating that help for them builds community trust.

Data hygiene is a form of community self-care. Establishing data hygiene standards with your close network can be a way of caring for yourself, and them. After all, an incident on one node of a network could compromise other nodes on the same network. Caring for your own data hygiene will in part strengthen your community’s, and vice versa.

Daly Barnett

Vaccine Passports: A Stamp of Inequity

2 months 3 weeks ago

A COVID vaccine has been approved and vaccinations have begun. With them have come proposals for ways to prove you have been vaccinated, based on the presumption that vaccination renders a person immune and unable to spread the virus. The latter is unclear. These proposals also raise digital rights concerns, particularly if you look at the history of healthcare access and consider how it maps onto current plans to digitize and streamline “vaccination passports” for travel.

We must make sure that, in our scramble to reopen the economy, we do not overlook inequity of access to the vaccine; how personal health data in newly minted digital systems operate as gatekeepers to workplaces, schools, and other spaces; and the potential that today’s vaccine passport will act as a catalyst toward tomorrow’s system of national digital identification that can be used to systematically collect and store our personal information.

We have already witnessed problems with COVID-19 testing and its intersection with digital rights. Some individuals weren’t able to access testing simply because they did not have access to a vehicle. The digital divide emerged in places like San Francisco’s Tenderloin district, one of the city’s poorest neighborhoods, where many weren't able to access testing because they did not have a smartphone. The danger of further social inequity is just one reason why we opposed a since-vetoed bill in California that proposed to create a blockchain-based system of verifiable credentials for medical test results, including COVID-19 antibody tests. We must draw on the lessons from the recent past and earlier vaccination efforts as we go forward.

Current Proposals

EFF is focused on proposals to distribute these vaccination credentials digitally. While paper-based credentials are possible too, most proposed plans involve digital implementations. In fact, some companies already have digital passport systems. CLEAR, which provides pre-flight screening in major airports around the country, is rolling out a HealthPass that logs testing or vaccination status. Ticketmaster has considered partnering with CLEAR for another “Health Pass.” Such partnerships could lead to another intertwined network of unprecedented sharing of personal information, similar to the problems we currently see with data brokers and advertising data.

Some have suggested using the W3C’s (World Wide Web Consortium) Verifiable Credentials and Decentralized Identifier specifications as a potential way to standardize vaccination passports. However, these standards do not solve the equity issues of unequal access to vaccination and digital technologies. Nor are they immune to attacks that could leak data.

Advocates of digital systems have suggested they would address the fraud and forgery concerns raised by paper-based credentials. Proposals like CommonPass—which notifies users of local travel rules and attempts to verify that airline passengers are complying with those rules—are designed to face this issue head-on. Informing users of local requirements is a useful feature. However, these systems do little to address the more prevalent fraud targeting individuals during this pandemic. Until vaccinations become accessible to all, concerns over fabrication should not overshadow concerns about access to the vaccine in the first place. 

Blockchain Is Not a Silver Bullet

Many proposals for vaccination passports reference blockchain technology, a distributed public ledger, as a means to share vaccine credentials. But some qualities of blockchains are at odds with privacy. One is immutability: once personal health information is written to the ledger, it can’t be changed. Immutability may have anti-forgery benefits, but that does more for the credential verifier than for the credential holder. Permanence eliminates the ability to delete or correct sensitive personal information in the system.
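The mechanics behind that permanence are easy to demonstrate. Below is a toy hash chain (invented data, vastly simplified compared to any real blockchain) showing why a record, once committed, can't quietly be corrected:

```python
import hashlib
import json

# Toy hash chain: each block commits to the previous block's hash, so
# editing one record breaks the links every later block depends on.
def make_block(data: str, prev_hash: str) -> dict:
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

chain = [make_block("credential: alice, vaccinated", "0" * 64)]
chain.append(make_block("credential: bob, vaccinated", chain[-1]["hash"]))

# "Correcting" the first record invalidates the chain: its recomputed hash
# no longer matches what the second block committed to.
chain[0]["data"] = "credential: alice, record corrected"
recomputed = hashlib.sha256(json.dumps(
    {"data": chain[0]["data"], "prev": chain[0]["prev"]},
    sort_keys=True).encode()).hexdigest()
print(recomputed == chain[1]["prev"])  # False: the chain no longer verifies
```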

Also, many healthcare systems have centralized authorities. One of blockchain’s main selling points is peer-to-peer decentralization—an attribute that’s diametrically opposed to the implementation of a health mandate. Interoperability of data with the private sector does not equate to decentralization of data.

Privacy is much more than just preventing a data breach or forgery. Limiting a definition of “privacy” to just these measures would short-change our need to control our personal information. Framing our policy goals should not be left to private companies seeking to sell products they say will help mitigate a pandemic. And, as researcher Harry Halpin notes in a recent paper,

“temporary measures meant for a purpose as seemingly harmless as reviving tourism could become normalized as the blockchain-based identity databases are by design permanent and are difficult to disassemble once the crisis has passed.”

For these reasons, layering blockchain to improve security or privacy for health documentation doesn’t make sense in this context, and has the potential to do far more harm than good.

Lessons Learned Should Be Lessons Applied

The COVID-19 pandemic is unprecedented in our lifetimes, but there are lessons we can learn from the past. In 2009, the H1N1 (“swine flu”) vaccination rollout was plagued with inequitable access. With COVID-19 vaccine supply potentially limited for the next six months, more of the same could occur. A digitized system based on proof of immunization will amplify the lack of access. Resources, especially tax dollars, should be focused on giving people more information about and access to vaccinations, rather than creating a digital fence against those who haven’t been vaccinated yet—and subjecting people who have been vaccinated to new privacy risks.

Trust is critical to public health. Today, many people are already wary of the COVID vaccination. Sweeping in smartphone-based products and new privacy concerns would only harm public health efforts to ease the public’s mind. Immunizations and providing proof of immunizations are not new. However, there's a big difference between utilizing existing systems to adapt to a public health crisis and vendor-driven efforts to deploy new, potentially dangerous technology under the guise of helping us all move past this pandemic.

Alexis Hancock