On Global Encryption Day, Let's Stand Up for Privacy and Security

3 months ago

At EFF, we talk a lot about strong encryption. It’s critical for our privacy and security online. That’s why we litigate in courts to protect the right to encrypt, build technologies to encrypt the web, and lead the fight against anti-encryption legislation like last year’s EARN IT Act.

We’ve seen big victories in our fight to defend encryption. But we haven’t done it alone. That’s why we’re proud this year to join dozens of other organizations in the Global Encryption Coalition as we celebrate the first Global Encryption Day, which is today, October 21, 2021.

For this inaugural year, we’re joining our partner organizations to ask people, companies, governments, and NGOs to “Make the Switch” to strong encryption. We’re hoping this day can encourage people to make the switch to end-to-end encrypted platforms, creating a more secure and private online world. It’s a great time to turn on encryption on all the devices or services you use, or switch to an end-to-end encrypted app for messaging—and talk to others about why you made that choice. Using strong passwords and two-factor authentication can also help keep you safe.

If you already have a handle on encryption and its benefits, today would be a great day to talk to a friend about it. On social media, we’re using the hashtag #MakeTheSwitch.

The Global Encryption Day website has some ideas about what you could do to make your online life more private and secure. Another great resource is EFF’s Surveillance Self Defense Guide, where you can get tips on everything from private web browsing, to using encrypted apps, to keeping your privacy in particular security scenarios—like attending a protest, or crossing the U.S. border. 

We need to keep talking about the importance of encryption, partly because it’s under threat. In the U.S. and around the world, law enforcement agencies have been seeking an encryption “backdoor” to access people’s messages. At EFF, we’ve resisted these efforts for decades. We’ve also pushed back against efforts like client-side scanning, which would break the promises of user privacy and security while technically maintaining encryption.

The Global Encryption Coalition is listing events around the world today. EFF Senior Staff Technologist Erica Portnoy will be participating in an “Ask Me Anything” about encryption on Reddit, at 17:00 UTC, which is 10:00 A.M. Pacific Time. Jon Callas, EFF Director of Technology Projects, will join an online panel about how to improve user agency in end-to-end encrypted services, on Oct. 28.

Joe Mullin

New Global Alliance Calls on European Parliament to Make the Digital Services Act a Model Set of Internet Regulations Protecting Human Rights and Freedom of Expression

3 months ago

The European Parliament’s regulations and policy-making decisions on technology and the internet have unique influence across the globe. With great influence comes great responsibility. We believe the European Parliament (EP) has a duty to set an example with the Digital Services Act (DSA), the first major overhaul of European internet regulations in 20 years. The EP should show that the DSA can address tough challenges—hate speech, misinformation, and users’ lack of control on big platforms—without compromising human rights protections, free speech and expression rights, and users’ privacy and security.

Balancing these principles is complex, but imperative. A step in the wrong direction could reverberate around the world, affecting fundamental rights beyond European Union borders. To this end, 12 civil society organizations from around the globe, standing for transparency, accountability, and human rights-centered lawmaking, have formed the Digital Services Act Human Rights Alliance to establish and promote a world standard for internet platform governance. The Alliance comprises digital and human rights advocacy organizations representing diverse communities across the globe, including in the Arab world, Europe, United Nations member states, Mexico, Syria, and the U.S.

In its first action towards this goal, the Alliance today is calling on the EP to embrace a human rights framework for the DSA and take steps to ensure that it protects access to information for everyone, especially marginalized communities, rejects inflexible and unrealistic takedown mandates that lead to over-removals and impinge on free expression, and strengthens mandatory human rights impact assessments so that issues like faulty algorithmic decision-making are identified before people get hurt.

This call to action follows a troubling round of amendments approved by an influential EP committee that crossed red lines protecting fundamental rights and freedom of expression. EFF and other civil society organizations told the EP prior to the amendments that the DSA offers an unparalleled opportunity to address some of the internet ecosystem’s most pressing challenges and help better protect fundamental rights online—if done right.

So, it was disappointing to see the EP committee take a wrong turn, voting in September to limit liability exemptions for internet companies that perform basic functions of content moderation and content curation, force companies to analyze and indiscriminately monitor users’ communication or use upload filters, and bestow special advantages, not available to ordinary users, on politicians and popular public figures treated as trusted flaggers.

In a joint letter, the Alliance today called on EU lawmakers to take steps to put the DSA back on track:

  • Avoid disproportionate demands on smaller providers that would put users’ access to information in serious jeopardy.
  • Reject legally mandated strict and short time frames for content removals that will lead to removals of legitimate speech and opinion, impinging on the right to freedom of expression.
  • Reject mandatory reporting obligations to Law Enforcement Agencies (LEAs), especially without appropriate safeguards and transparency requirements.
  • Prevent public authorities, including LEAs, from becoming trusted flaggers, and subject the conditions for becoming a trusted flagger to regular review and proper public oversight.
  • Consider mandatory human rights impact assessments as the primary mechanism for examining and mitigating systemic risks stemming from platforms' operations.

For the DSA Human Rights Alliance Joint Statement:
https://www.eff.org/document/dsa-human-rights-alliance-joint-statement

For more on the DSA:
https://www.eff.org/issues/eu-policy-principles

Karen Gullo

EFF to Federal Court: Block Unconstitutional Texas Social Media Law

3 months ago

Users are understandably frustrated and perplexed by many big tech companies’ content moderation practices. Facebook, Twitter, and other social media platforms make many questionable, confounding, and often downright incorrect decisions affecting speakers of all political stripes. 

A new Texas law, which Texas Governor Greg Abbott said would stop social media companies that “silence conservative viewpoints and ideas,” restricts large platforms from removing or moderating content based on the viewpoint of the user. The measure, HB 20, is unconstitutional and should not be enforced, we told a federal court in Texas in an amicus brief filed Oct. 15. 

In NetChoice v. Paxton, two technology trade associations sued Texas to prevent the law from going into effect. Our brief, siding with the plaintiffs, explains that the law forces popular online platforms to publish speech they don’t agree with or don’t want to share with their users. Its broad restrictions would destroy many online communities that rely on moderation and curation. Platforms and users may not want to see certain kinds of content and speech that is legal but still offensive or irrelevant to them. They have the right under the First Amendment to curate, edit, and block everything from harassment to opposing political viewpoints.

Contrary to HB 20’s focus, questionable content moderation decisions are in no way limited to conservative American speakers. In 2017, for example, Twitter disabled the verified account of Egyptian human rights activist Wael Abbas. That same year, users discovered that Twitter had marked tweets containing the word “queer” as offensive. Recent reporting has highlighted how Facebook failed to enforce its policies against hate speech and promotion of violence, or even publish those policies, in places like Ethiopia.

However, EFF’s brief explains that users also rely on the First Amendment to create communities online, whether they are niche or completely unmoderated. Undermining speech protections would ultimately hurt users by limiting their options online. 

HB 20 also requires large online platforms to follow transparency and complaint procedures, such as publishing an acceptable use policy and biannual statistics on content moderation. While EFF urges social media companies to be transparent with users about their moderation practices, when governments mandate transparency, they must accommodate constitutional and practical concerns. Voluntary measures such as implementing the Santa Clara Principles, guidelines for a human rights framework for content moderation, best serve a dynamic internet ecosystem.

HB 20’s requirements, however, are broad and discriminatory. Moreover, HB 20 would likely further entrench the market dominance of the very social media companies the law targets because compliance will require a significant amount of time and money.

EFF has filed several amicus briefs opposing government control over content moderation, including in a recent successful challenge to a similar Florida law. We urge the federal court in Texas to rule that HB 20 restricts and burdens speech in violation of the Constitution.

Mukund Rathi

From Bangkok to Burlington — The Public Interest Social Internet

3 months ago

This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our earlier installments.

In the last installment, we discussed platforms that tie messaging apps together. These let users chat with more people more easily, no matter where they are or what app they’re using, making it possible for someone using the latest chat tool, like Slack, to talk to someone on a decades-old platform like IRC. But localized services matter to the public interest internet as well. While forums like Nextdoor have drawn attention (and users) for offering neighborhood communication regardless of your zip code, other services that predate those—and get around many of their controversies—do exist.

This post will be about two very different social networks:

The first is Front Porch Forum, a Vermont-local platform that is “a micro hyperlocal social network,” tied to local services and with huge uptake among local users. A caveat that many find more freeing than restricting: comments, replies, and posts don’t reach their neighbors until the following day, in a newsletter-style digest.

The other is Pantip, which is one of the top ten websites in Thailand. It’s a giant compared to Front Porch Forum, but its ability to persist—and stay independent—make it a worthwhile subject.

Growing Slowly 

Cofounders Michael and Valerie Wood-Lewis, of Burlington, Vermont, began Front Porch Forum in the early 2000s by passing out flyers in their neighborhood. The goal wasn’t to build a company, or create a startup—it was to meet their neighbors. Users of other neighborhood sites will be familiar with the benefits of a local online community—posts ranged early on from people looking to borrow tools to helping one another find lost pets.

As the site grew, others outside of Burlington asked to join. But Wood-Lewis turned them down, opting to focus the community on his area only. At first, he created a how-to guide for those who wanted to build their own local network, but eventually, the site allowed anyone in Vermont to join (it’s now expanded to some parts of New York and Massachusetts). 

But even as it's grown, the focus has been on public good—not profit. Instead of increasing the amount of posts users can make to drum up more content (and space for ads), the site has continued functioning effectively as an upgrade to its earlier listserv format. And rather than collect data on users beyond their location (which is necessary to sign up for the site and communicate with neighbors), or plastering it with advertising, Wood-Lewis uses Front Porch Forum’s hyperlocal geography to its advantage:

“We have been pretty much diametrically opposed to the surveillance business model from the beginning. So our basic business model is we sell ads, advertising space to local businesses and nonprofits. The ads are distributed by geography, and by date, and that's it. There's no, ‘Yeah, let's check people's browser history, or let's pry into people's lives.’ We do not do that.”

These simple ads make it easy for local businesses and others to offer services to their community without hiring a graphic designer or having to learn anything complicated about online advertising, like how to make contextual ads that rely on perceived user interests (and data). 

In contrast to the well-known issues of racism and gatekeeping on Nextdoor or Ring’s Neighbors app, Wood-Lewis attributes the general positivity of the site to a variety of factors that are all baked into the public interest mindset: slow-growth, a focus on community, and moderation. But not necessarily that kind of moderation—while posts are all reviewed by moderators, and there are some filtering tools, posts typically come out as a newsletter, once a day, by default. If you want to yell at your neighbor, you’ve got the option to mail them directly through the site, but you’re probably better off knocking on their door. Users say that while most of the internet “is like a fire hose of information and communication, Front Porch Forum is like slow drip irrigation.” 

While Front Porch Forum has grown, it’s done so through its own earnings and at its own pace. While many of the most popular social networks need to scale to perform for investors, which relies on moving fast and breaking things, Front Porch Forum could be described as a site for moving slowly and fixing things.


Staying Afloat Despite Free Speech Challenges

On the other side of the world, the forum Pantip, a sort of Thai Reddit, has grown to be one of the most popular sites in the country since its creation in 1997. Pantip's growth (and survival) is all the more significant because Thailand has some of the harshest legal frameworks in the world for online speech. Intermediaries are strictly liable for the speech of their users, which is particularly troubling, since the crime of criticizing members of the royal family (“lèse-majesté”) can lead to imprisonment for both poster and site administrator.

As a result, the site’s strict rules may seem overbearing to Western users—participating in the upvoting and points system requires validating your identity with a government ID, for example—yet the site remains popular after over twenty years of being run without outside investment. Pantip has navigated treacherous waters for a very long time, and has even had parts of the site shut down by the government, but it chugs along, offering a place for Thai users to chat online while many other sites have been scuppered. For example, many newspapers have shut down comment sections for fear of liability. Though this legal regime puts Pantip’s owner in danger, particularly during regime changes, he still won't sell out to bigger companies: “Maybe I’m too conservative. I don’t believe that internet [business] needs a lot of money to run. Because we can do internet business with a very small [investment].”

Models for the Future?

Neither Front Porch Forum nor Pantip get the headlines of a Facebook or a Twitter—but not because they're unsuccessful. Rather, their relatively specific rules and localized audiences make them poor models for scaling to world domination. To a certain extent, they benefit from not garnering huge amounts of publicity. In Front Porch Forum's case, mass appeal is irrelevant — the site’s membership spreads by word of mouth in a local context, and advertising revenues grow with it. For Pantip, it's better to keep a low profile, even if hundreds of thousands of users are in on the secret. And both sites are proof that social media doesn't have to be run by venture capital-funded, globally-scaled services, and that we don’t need a Facebook to give people in local areas or in developing countries forums to connect and organize.

But is part of their success due to Front Porch Forum and Pantip’s lack of interest in disrupting the tech giants of the world? If the Public Interest Internet is an alternative to Facebook and Google, is it really a viable future for just a few lucky groups? Is the best of the Internet doomed to exist in just some narrow strongholds? Or can we scale up the human-scale: make sure that everyone has their own Front Porch, or Pantip close to hand, instead of having to stake everything in the chilly, globalised space of the tech giants?

One area of the Net that has been with it since the beginning, and has managed to both be a place for friends to collaborate, and somewhere that has made a wider impact on the world, is the subject of the next post in this series. We’re going to discuss the world of fan content, including the Hugo Award-winning fanfiction archive at Archive of Our Own, and how the Public Interest Internet makes it possible for people to comment on and add to the stories they love, in places that better serve them than our current giants.

This is the sixth post in our blog series on the public interest internet. Read more in the series:

Jason Kelley

EFF Files New Lawsuit Against California Sheriff for Sharing ALPR Data with ICE and CBP

3 months ago

The Marin County Sheriff illegally shares the sensitive location information of millions of drivers with out-of-state and federal agencies, including Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP). The Sheriff uses automated license plate readers (ALPRs)—high-speed cameras mounted on street poles or squad cars—to scan license plates and record the date, time, and location of each scan. This data can paint a detailed picture of the private lives of Marin County residents, including where they live and work, visit friends or drop their children off at school, and when they attend religious services or protests.

Last week, EFF filed a new lawsuit on behalf of three immigrant rights activists against Sheriff Bob Doyle and Marin County for violating two California laws that protect immigrants and motorists’ privacy. Our co-counsel are the ACLU Foundations of Northern California, Southern California, and San Diego & Imperial Counties, and attorney Michael Risher. We seek a court order prohibiting the Sheriff from sharing ALPR data with out-of-state and federal agencies.

The Marin Sheriff’s ALPRs scan thousands of license plates each month. That sensitive data, including photos of the vehicle and sometimes its drivers and passengers, is stored in a database. The Sheriff permits over 400 out-of-state agencies and 18 federal agencies, including CBP and ICE, to run queries of full or partial license plates against information the Sheriff has collected.

This data sharing particularly impacts the safety and privacy of immigrants, communities of color, and religious minorities. Like many other surveillance technologies, ALPRs have a history of disproportionately impacting marginalized communities. ICE has used ALPR data to detain and deport immigrant community members. NYPD used ALPRs to scan license plates near mosques.

The Sheriff’s sharing of ALPR data with entities outside of California violates state law. S.B. 34, enacted in 2015, prohibits California law enforcement agencies from sharing ALPR data with entities outside of California. Moreover, the California Values Act (S.B. 54), enacted in 2018, limits the use of local resources to assist federal immigration enforcement, including the sharing of personal information.

To learn more, read the complaint, the press release, our case page, the ACLU of Northern California’s case page, and our clients’ statements.

Related Cases: Lagleva v. Marin County Sheriff
Saira Hussain

After Years of Delays and Alarmingly Flimsy Evidence, Security Expert Ola Bini’s Trial Set for This Week

3 months ago

Update, October 19: Ola Bini's trial has been suspended by request of the Corporación Nacional de Telecomunicaciones (CNT) whose representative was unable to attend the hearing. No new date has yet been set.

For over two years EFF has been following the case of Swedish computer security expert Ola Bini, who was arrested in Ecuador in April 2019, following Julian Assange's ejection from that country’s London Embassy. Bini’s pre-trial hearing, which was suspended and rescheduled at least five times during 2020, was concluded on June 29, 2021. Despite the cloud that has hung over the case—political ramifications have seemed to drive the allegations, and Bini has been subjected to numerous due process and human rights violations—we are hopeful that the security expert will be afforded a transparent and fair trial and that due process will prevail.

Ola Bini is known globally as a computer security expert; he is someone who builds secure tools and contributes to free software projects. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world, and in 2018, Ola co-founded a non-profit organization devoted to creating user-friendly security tools.

From the very outset of Bini’s arrest at the Quito airport there have been significant concerns about the legitimacy of the allegations against him. In our visit to Ecuador in July 2019, shortly after his arrest, it became clear that the political consequences of Bini’s arrest overshadowed the prosecution’s actual evidence. In brief, based on the interviews that we conducted, our conclusion was that Bini's prosecution is a political case, not a criminal one. His arrest occurred shortly after Maria Paula Romo, then Ecuador’s Interior Minister, held a press conference to claim (without evidence) that a group of Russians and Wikileaks-connected hackers were in the country, planning a cyber-attack in retaliation for the government's eviction of Assange; a recent investigation by La Posta revealed that the former Minister knew that Ola Bini was not the “Russian hacker” the government was looking for when Bini was detained in Quito's airport. (Romo was dismissed as minister in 2020 for ordering the use of tear gas against anti-government protestors.)

A so-called piece of evidence against Bini was leaked to the press and taken to court: a photo of a screenshot, supposedly taken by Bini himself and sent to a colleague, showing the telnet login screen of a router. The image is consistent with someone who connects to an open telnet service, receives a warning not to log on without authorization, and does not proceed—respecting the warning. As for the portion of a message exchange attributed to Bini and a colleague, leaked with the photo, it shows their concern with the router being insecurely open to telnet access on the wider Internet, with no firewall.
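
To see why that behavior is unremarkable, note that a telnet login banner is sent by the service itself, unprompted, to anyone who connects. As a rough illustration (this is our own sketch, not the evidence in the case, and the host and function names are hypothetical), a few lines of Python can connect to a telnet port, capture whatever banner the service volunteers, and disconnect without ever sending credentials or commands:

```python
import socket

def read_telnet_banner(host, port=23, timeout=3):
    """Connect to a telnet service, read the banner it sends
    unprompted, and disconnect without sending any credentials."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        chunks = []
        try:
            while True:
                data = sock.recv(1024)
                if not data:  # server closed the connection
                    break
                chunks.append(data)
        except socket.timeout:
            pass  # banner finished; we never log in or run commands
    # Real telnet servers may interleave IAC negotiation bytes,
    # so decode defensively rather than assuming clean text.
    return b"".join(chunks).decode("ascii", errors="replace")
```

Nothing in an interaction like this requires—or even attempts—authorization; the warning screen appears before any credential check takes place, which is exactly what the leaked screenshot shows.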

Bini’s arrest and detention were fraught with due process violations. Bini faced 70 days of imprisonment until a Habeas Corpus decision considered his detention illegal (a decision that confirmed the weakness of the initial detention). He was released from jail, but the investigation continued, seeking evidence to back the alleged accusations against him. After his release the problems continued, and as the delays dragged on, the Office of the Inter-American Commission on Human Rights (IACHR) Special Rapporteur for Freedom of Expression included its concern with the delay in Bini’s trial in its 2020 annual report. At the time of our visit, Bini's lawyers told us that they counted 65 violations of due process, and journalists told us that no one was able to provide them with concrete descriptions of what he had done.

In April 2021, Ola Bini’s Habeas Data recourse, filed in October 2020 against the National Police, the Ministry of Government, and the Strategic Intelligence Center (CIES), was partially granted by the judge. According to Bini's defense, he had been facing continuous monitoring by members of the National Police and unidentified persons. The decision ordered CIES to disclose whether the agency has conducted surveillance activities against the security expert. The ruling concluded that CIES unduly denied such information to Ola Bini, failing to offer a timely response to his previous information request.

Though the judge decided in June’s pre-trial hearing to proceed with the criminal prosecution against Bini, observers noted the lack of solid reasoning in the judge's decision. The judge was later "separated" from the case in a ruling that acknowledged the wrongdoing of the successive pretrial suspensions and the violation of due process.

It is alarming, but perhaps not surprising, that the case will proceed after all these well-documented irregularities. While Ola Bini’s behavior and contacts in the security world may look strange to authorities, his computer security expertise is not a crime. Since EFF's founding in 1990, we have become all-too familiar with overly politicized "hacker panic" cases, which encourage unjust prosecutions when the political and social atmosphere demands it. EFF was founded in part due to a notorious, and similar, case pursued in the United States by the Secret Service. Our Coder’s Rights Project has worked for decades to protect the security and encryption researchers who help build a safer future for all of us using digital technologies, and who far too often face serious legal challenges that prevent or inhibit their work. This case is, unfortunately, part of a longstanding pattern of unfair criminal persecution of security experts, who are too often subjected to the same types of harassment as the human rights defenders and activists they work to protect.

In June of this year, EFF called upon Ecuador’s Human Rights Secretariat to give special attention to Ola Bini’s upcoming hearing and prosecution. As we stressed in our letter:

Mr. Bini's case has profound implications for, and sits at the center of, the application of human rights and due process, a landmark case in the context of arbitrarily applying overbroad criminal laws to security experts. Mr. Bini's case represents a unique opportunity for the Human Rights Secretariat Cabinet to consider and guard the rights of security experts in the digital age.  Security experts protect the computers upon which we all depend and protect the people who have integrated electronic devices into their daily lives, such as human rights defenders, journalists, activists, dissidents, among many others. To conduct security research, we need to protect the security experts, and ensure they have the tools to do their work.

The circumstances around Ola Bini's detention have sparked international attention and indicate the growing seriousness of security experts' harassment in Latin America. The flimsy allegations against Ola Bini, the series of irregularities and human rights violations in his case, as well as its international resonance, situate it squarely among other cases we have seen of politicized and misguided allegations against technologists and security researchers. 

We hope that justice will prevail during Ola Bini’s trial this week, and that he will finally be given the fair treatment and due process that the proper respect of his fundamental rights requires.

Jason Kelley

EFF Joins Press Freedom Groups In Asking U.S. To Drop Assange Extradition Efforts

3 months 1 week ago

EFF has joined a coalition of press freedom, civil liberties, and human rights groups that sent a letter to Attorney General Merrick Garland urging the Department of Justice to drop its efforts to extradite and prosecute Julian Assange.

The renewed request comes after a Yahoo News report that the CIA discussed kidnapping or killing Assange in 2017, before charges against Assange were filed. The agency also reportedly planned extensive spying on WikiLeaks associates.

Assange has been charged under the Espionage Act. The charges have been widely condemned by journalists and press freedom organizations, including by outlets that have been critical of Assange. Leaks of information that the government would prefer to keep secret, and the publication of those leaks by journalists, are vital to our democracy. Regardless of what one thinks about other criminal charges against Assange, his indictment on charges that mostly reflect basic journalistic practices will have a chilling effect on critical national security journalism. 

In January, a British judge denied the Trump Administration’s extradition request on the basis that the conditions of confinement in the U.S. would be overly harsh. The U.S. chose to appeal that decision, and the appeal is scheduled to be heard next week. Human rights and press freedom groups, including EFF, first asked the Biden Administration in February to drop the extradition effort.

In addition to EFF, the letter to DOJ has been signed by the ACLU, Amnesty International, Center for Constitutional Rights, Fight for the Future, Freedom of the Press Foundation, Human Rights Watch, PEN America, Reporters Without Borders, and many other groups. 

Joe Mullin

Flight of the Concord Drones

3 months 1 week ago

This blog post was written by Kenny Gutierrez, EFF Bridge Fellow.

The City Council of Concord, California, is tone-deaf to community concerns regarding a proposed police Unmanned Aerial Surveillance (UAS) system – commonly referred to as drones. In a city where the police department is responsible for nearly 60% of the city budget, this should come as no surprise. The UAS program, however, will strangely be funded by Marathon Petroleum, which has no offices or facilities in Concord. EFF, ACLU, and 15 other Contra Costa community organizations opposed this action. We also demanded police oversight and ample safeguards to protect civil liberties and civil rights if the program were to be adopted.

Privacy and police accountability are massive issues with UAS systems, and both are high-priority issues for California voters, as evidenced by the passage of the California Consumer Privacy Act and police accountability legislation.

Potential Issues with Drones

Drones are unmanned aerial vehicles that can be equipped with high-definition, live-feed video cameras, thermal infrared video cameras, heat sensors, and radar—all of which allow for sophisticated and persistent surveillance. Drones can record video or still images in daylight or at night (with infrared lenses). They can also be equipped with software tools like license plate readers, face recognition, and GPS trackers that extend the dangers they pose to privacy. There have even been proposals for law enforcement to attach lethal and non-lethal weapons to drones. Additionally, newly developed drone automation allows for automatic tracking of vehicles and individuals.

EFF Letter to Concord City Council

EFF and our allies sent a letter urging the Concord City Council to not adopt a UAS system for Concord Police Department. If, however, the city council decided to approve the UAS system, we made specific recommendations to provide ample safeguards to protect civil liberties and civil rights:

First, drones should be deployed only with a warrant, or in an emergency that threatens death or serious bodily injury to a person.  All deployments should be thoroughly documented, and that documentation must be made publicly available.

Second, facial recognition technology, artificial intelligence, automated tracking, heat sensors, license plate readers, cell-phone interception, and lethal and non-lethal weapons should be prohibited from being incorporated into drones.

Third, there must be clear rules regarding access to UAS footage. Officers suspected of misconduct must not be allowed to view footage until they have made an initial statement regarding the episode. People depicted in footage must have access to the footage. Also, footage depicting police use of force must be available to the public. As with body-worn cameras, absent such rules police can exercise too much control over video before the public ever sees it.

More generally, Concord should adopt a Community Control Over Police Surveillance (CCOPS) ordinance.  A CCOPS law acts to promote transparency, the public’s welfare, civil rights, and civil liberties in all decisions regarding the funding, acquisition, and deployment of surveillance equipment by local police.  Such a law would appropriately require city departments to provide the public with information about surveillance proposals, including a draft usage policy, weeks rather than days before the proposals are debated before the City Council.

Not only did the Concord City Council approve the police drones, it also failed to adopt these meaningful safeguards. Concord, like many other municipalities, decided instead to use a boilerplate, unprotective Lexipol policy. Will Concord become the next Chula Vista, which deploys a drone for nearly every 911 call? Only time will tell. Concord’s drone policy is permissive enough that it could be.

Adam Schwartz

California Activists Sue Marin County Sheriff for Illegally Sharing Drivers’ License Plate Data With ICE, CBP and Other Out-of-State Agencies

3 months 1 week ago
Immigrants’ Privacy, Security Threatened by Sheriff’s Practice, Which Violates California Law

San Francisco—Community activists in Northern California today sued Marin County Sheriff Robert Doyle for illegally sharing millions of local drivers’ license plates and location data, captured by a network of cameras his office uses, with hundreds of federal and out-of-state agencies—a practice that violates two California laws, endangers the safety and privacy of local immigrant communities, and facilitates location tracking by police.

The ACLU Foundations of Northern California, Southern California, and San Diego & Imperial Counties, the Electronic Frontier Foundation (EFF), and attorney Michael T. Risher represent community activists Lisa Bennett, Cesar S. Lagleva, and Tara Evans, long-time Marin community members, in a lawsuit filed in Marin County Superior Court. The suit seeks to end the sheriff’s illegal practice of giving hundreds of agencies outside California access to a database of license plate scans used to identify and track people, revealing where they live and work, when they visit friends or drop their kids at school, and when they attend religious services or protests.

“The information unveiled through this lawsuit shows that the freedoms that people think they possess in Marin County are a mirage: people cannot move about freely without being surveilled,” said Bennett. “Our county sheriff, who has sworn to uphold the law, is in fact violating it by sharing people’s private information with outside agencies. This has especially alarming implications for immigrants and people of color: two communities that are traditionally the targets of excessive policing, surveillance, and separation from loved ones and community through incarceration or deportation.”

License plate scans occur through Automated License Plate Readers (ALPRs): high-speed cameras mounted in a fixed location or atop police cars moving through the community that automatically capture all license plates that come into view, recording the exact location, date, and time that the vehicle passes by. The information can paint a detailed picture of our private lives, our daily schedules, and our social networks.

 Targeting Immigrant Communities

Documents show that the sheriff’s office shares ALPR information with Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), over a dozen other federal law enforcement agencies, and over 400 out-of-state law enforcement agencies.

“In the hands of police, the use of ALPR technology is a threat to privacy and civil liberties, especially for immigrants. Federal immigration agencies routinely access and use ALPR information to locate, detain, and deport immigrants. The sheriff’s own records show that Sheriff Doyle is sharing ALPR information with two of the most rogue agencies in the federal government: ICE and CBP,” said Vasudha Talla, Immigrants’ Rights Program Director at the ACLU Foundation of Northern California (ACLU NorCal). “Police should not be purchasing surveillance technology, let alone facilitating the deportation and incarceration of our immigrant communities.”

Using its ALPR system, the Marin County Sheriff’s Office scans tens of thousands of license plates each month. That sensitive personal information, which includes photographs of the vehicle and sometimes its driver and passengers, is stored in a database. The sheriff permits hundreds of out-of-state agencies and several federal entities, including units of the Department of Homeland Security, to run queries of a license plate against information the sheriff has collected. The agencies are also able to compare their own bulk lists of vehicle license plates of interest, known as “hot lists,” against the ALPR information collected by the sheriff’s office.
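
The “hot list” comparison described above is, at its core, a bulk set-membership check over location-tagged plate records. A minimal sketch of that process (all names and fields here are hypothetical, for illustration only, not the sheriff’s actual system):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateScan:
    plate: str          # normalized license plate number
    location: tuple     # (latitude, longitude) of the camera
    seen_at: datetime   # timestamp of the detection

def match_hot_list(scans, hot_list):
    """Return every scan whose plate appears on an agency's 'hot list'.

    Each match reveals exactly where and when a vehicle was seen,
    which is why bulk sharing of this data is so privacy-invasive.
    """
    wanted = {plate.upper() for plate in hot_list}
    return [scan for scan in scans if scan.plate.upper() in wanted]

scans = [
    PlateScan("7ABC123", (37.97, -122.53), datetime(2021, 10, 1, 8, 15)),
    PlateScan("4XYZ789", (37.97, -122.53), datetime(2021, 10, 1, 8, 16)),
]
hits = match_hot_list(scans, ["4xyz789"])
print(len(hits))  # 1
```

Even this toy version shows why the scale matters: a single query against tens of thousands of monthly scans can reconstruct a person’s movements without any warrant or suspicion.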

California’s S.B. 34, enacted in 2015, bars this practice. The law requires agencies that use ALPR technology to implement policies to protect privacy and civil liberties, and specifically prohibits police from sharing ALPR data with entities outside of California. The sheriff also violates the California Values Act (S.B. 54), also known as California’s “sanctuary” law. Enacted in 2018, the law limits the use of local resources to assist federal immigration enforcement.

In 2019, ACLU NorCal released documents revealing that ICE agents use their access to the license plate data gathered by police agencies across the nation to find and arrest people.

“This lawsuit sends a message to other California law enforcement agencies that are unlawfully sharing ALPR data and helping ICE—and we know there are others,” said EFF Staff Attorney Saira Hussain. “In recent years, California has enacted laws specifically to protect immigrant communities and prohibit the sharing of ALPR data with entities outside the state. Local police and sheriffs are not above the law and should be held accountable when they violate measures designed to protect the privacy of all Californians generally, and vulnerable communities specifically.”

The lawsuit is the first of its kind to challenge the sharing of private information collected by ALPR mass surveillance.

For the complaint:
https://www.eff.org/document/lagleva-v-marin-county-sheriff

For more on ALPRs:
https://www.eff.org/pages/automated-license-plate-readers-alpr

Contact: press@aclunc.org
Karen Gullo

Be The Face of Change and Pledge for EFF Through CFC Today!

3 months 1 week ago

The pledge period for the Combined Federal Campaign (CFC) is underway and EFF needs your help! Last year, U.S. government employees raised over $38,000 for EFF through the CFC, helping us fight for privacy, free speech, and security on the internet so that we can help create a better digital future.

The Combined Federal Campaign is the world's largest and most successful annual charity campaign for U.S. federal employees and retirees. Since its inception in 1961, the CFC fundraiser has raised more than $8.5 billion for local, national, and international charities. This year's campaign runs from September 1 to January 15, 2022. Be sure to make your pledge before the campaign ends!

U.S. government employees can give to EFF by going to GiveCFC.org and clicking DONATE to give via payroll deduction, credit/debit, or an e-check! Be sure to use EFF's CFC ID # 10437. You can even scan the QR code below!

This year's CFC theme is "You Can Be The Face of Change," as your donations help countless charities to make the world better. With your support, EFF can continue our strides towards internet freedom. Recently we have: pushed Apple to pause its plan to install dangerous photo and message scanning features onto its devices, worked with California residents and lawmakers to pass one of the largest state investments in public fiber broadband in U.S. history, and launched the "Dark Patterns Tip Line" with partners to shed light on deceptive patterns in consumer tech products and services. 

Government employees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF today by using our CFC ID #10437 when you make a pledge!

Christian Romero

Come Back with a Warrant: Congress Should Pass the Protecting Data at the Border Act

3 months 1 week ago

We do not lose our constitutional rights at the border. The U.S. Department of Homeland Security (DHS), however, believes otherwise. In fiscal year 2019 alone (before the pandemic curbed international travel), U.S. Customs and Border Protection (CBP) officers conducted nearly 41,000 electronic device searches without seeking a warrant supported by probable cause of wrongdoing from a judge. Unfettered border searches of electronic devices pose a significant threat to personal privacy. That’s why we urge Congress to pass the Protecting Data at the Border Act, a bill recently re-introduced by Sen. Ron Wyden (D-OR) and Sen. Rand Paul (R-KY) that would create a warrant requirement for these types of searches, thereby protecting our constitutional rights at the border.

CBP as well as U.S. Immigration and Customs Enforcement (ICE) have been conducting intrusive warrantless border device searches since at least 2009, when CBP and ICE first published their device search policies (CBP’s currently operative policy was published in 2018). The number of device searches at the border has been steadily increasing, affecting tens of thousands of international travelers each year. Our electronic devices contain intimate information about our personal preferences and daily routines, as well as private communications with friends and family. They contain data revealing health conditions, financial standing, and religious and political beliefs. A search that reveals this information to anyone, let alone law enforcement, is a gross violation of our privacy and free speech rights. 

EFF and the American Civil Liberties Union (ACLU)—believing that any warrantless search of electronic devices at the border violates travelers’ rights to privacy under the Fourth Amendment, and freedom of speech and press, private association, and anonymity under the First Amendment—filed suit in 2017 on behalf of 11 individuals whose devices were searched without a warrant at the border. Where the U.S. Supreme Court in Riley v. California (2014) acknowledged electronic devices contain “the sum of an individual’s private life” and thus ruled that a warrant must be obtained before searching the cell phone of an arrestee, EFF/ACLU’s suit sought to extend this warrant requirement to border searches of electronic devices. Unfortunately, our path in the courts is currently stalled. The Supreme Court this summer declined to take our case, and despite making some progress in the appellate courts, no circuit court has required a warrant for border device searches in all circumstances.

The Protecting Data at the Border Act takes the fight to Congress (Rep. Ted Lieu (D-CA) is expected to introduce the House bill). Along with requiring government officials to obtain a probable cause warrant before accessing the contents of an electronic device, the bill would also protect our digital privacy and free speech rights in the following ways: 

  • Prohibiting border officers from denying entry or exit to a U.S. citizen or permanent resident if they refuse to provide a device password or unlock their device;
  • Requiring border officers to notify travelers, before requesting consent to a search of their devices, that they have the right to refuse;
  • Requiring that consent to a search be given in writing;
  • Requiring border officers to have probable cause that a traveler committed a felony before confiscating their device;
  • Forbidding border officers from keeping information obtained from a device search unless that information amounts to probable cause of a crime; 
  • Requiring that evidence gathered in violation of any of the above be inadmissible in court; and 
  • Requiring the government to gather and publish statistics regarding border searches of electronic devices, including how officers obtained access, the breakdown of U.S. versus non-U.S. persons whose devices were searched, the countries from which travelers arrived, and the perceived race and ethnicity of the traveler subject to a search. 

EFF voiced support for the 2017 and the 2019 versions of this bill as well. Since 2017, our lives have only become more digital with our devices holding ever increasing amounts of sensitive personal information. Congress should enact the Protecting Data at the Border Act and recognize that we do not lose our constitutional rights at the border. 

Related Cases: Merchant v. Mayorkas (formerly Alasaad v. Wolf)
Chao Liu

Meet the Alliance for Encryption in Latin America and the Caribbean

3 months 1 week ago

Today EFF and other internet and digital rights organizations are announcing the Alliance for Encryption in Latin America and the Caribbean (AC-LAC). The Alliance is a platform for collective capacity building and information, based on the principle that encryption is an essential tool for security and respect for human and fundamental rights in the region, including freedom of expression and privacy.

The virtual launch event is October 21, with the participation of member organizations. It is open to the public.

This regional Alliance seeks to advance a proactive agenda to promote and defend encryption in Latin America and the Caribbean. It aims to strengthen the use of encryption and generate an ecosystem of trust, security and stability within information and communications technologies (ICTs), particularly the critical infrastructure of the internet and its applications and services.

The platform, comprised of 14 organizations throughout the region, seeks to coordinate efforts with encryption initiatives at the global, regional, and national levels, and generate spaces for exchanging information and mobilizing actions to respond to the effects that weakened encryption has on security and fundamental rights.

The member organizations, which have outlined a joint agenda despite their diverse natures and interests, are: Access Now; ALAI; APC; Article 19; Coalizão Direitos na Rede (CDR); Derechos Digitales; EFF; Karisma Foundation; IP.rec; IRIS; ISOC Brazil; Nic.br; and R3D. The eLAC initiative will participate as an observer member. The Alliance is open to new members who share its principles and ideas.

On Thursday, October 21, during Global Encryption Day, AC-LAC will present its regional pro-encryption agenda. A live event will be held to introduce the Alliance and its mission, and discuss why encryption is imperative for a more secure internet.

In addition to the 14 member organizations, AC-LAC counts on the Institute for Digital Development of Latin America and the Caribbean (IDD LAC) as the Alliance's secretariat.

Follow us on social media at Twitter (@aclac_alianza) and LinkedIn (AC-LAC), or visit our website, www.ac-lac.org, for more information.

Veridiana Alimonti

Records Shed New Light on Trump White House Officials’ Efforts to Punish Social Media

3 months 1 week ago

Within a day of Twitter fact-checking President Donald Trump’s May 2020 false tweets about mail-in voting, federal officials began trying to find out how much government agencies spent to advertise on social media. This inquiry was likely part of a planned effort to cut that funding, according to records released last month.

The records, released to EFF and the Center for Democracy & Technology as part of a joint FOIA lawsuit, add additional details to the timeline before Trump issued his unconstitutional Executive Order retaliating against online social media services for moderating his posts. President Joseph R. Biden revoked the order in May.

Although Trump’s Executive Order is no longer in effect, the new documents show the lengths officials within the Office of Management and Budget (OMB) went to as part of an unconstitutional effort to leverage the federal government’s spending power to punish platforms for exercising their First Amendment rights to moderate Trump’s speech.

A day before Trump issued the order on May 28, 2020, OMB officials sought to learn whether the government already had data that would show how much money all federal agencies spent to advertise on social media. In an email exchange on May 27, 2020, officials inquired whether it was possible to use www.usaspending.gov to calculate the figure.

It is unclear from the May 27, 2020 thread, but it does not appear that OMB officials could calculate that number. Thus, a day later, Trump issued the Executive Order, which required all federal agencies to report to OMB how much they had spent on online advertising, as well as any laws that would permit the agencies to restrict further online advertising spending. (The Executive Order had several other unconstitutional aspects, which you can read about here.)

Earlier this year, EFF and CDT made public the OMB records showing that federal agencies spent more than $117 million to advertise online. OMB’s latest records document a different aspect of federal agencies’ responses: whether they had any legal authority to unilaterally cut their online advertising spending.

The bulk of the records released to EFF and CDT show that federal agencies largely did not believe they had any legal basis to withdraw their funding. A compilation of agency responses created by OMB begins on page 174 of the recently released records.

Most agencies reported that there was no law or regulation on point that would permit them to cut online advertising spending. The General Services Administration, however, provided a path forward for potentially accomplishing Trump’s retaliatory goal. The GSA stated that although it had no existing legal authority to cut online ad spending, the law governing the agency permitted it to write new “regulations prohibiting GSA services and staff from using GSA funds for advertising or marketing on online platforms.” The documents do not indicate whether Trump administration officials followed up with the GSA regarding its proposal, but it does not appear as though any federal agencies cut online advertising in retaliation for fact-checking Trump’s tweets.

The records also show that the White House remained interested in the results of OMB’s survey of federal agencies’ online advertising spending and whether they could cut that funding. A June 26, 2020 email circulating the results of OMB’s compilation of agency responses states that officials within the White House Office of General Counsel reached out about the results: “We assume you will connect with whoever in the White House needs to see this information, and share this information with them.”

Despite Biden rescinding Trump’s order, the effort was a chilling and unconstitutional abuse of power. That is why EFF was part of a legal team, along with Cooley LLP and Protect Democracy, representing voting rights and civil society plaintiffs that challenged the order in Rock The Vote v. Biden. It is also why EFF and CDT continue to litigate our FOIA suit against OMB and the Department of Justice and push the agencies to disclose more records that will shed light on what happened.

Related Cases: EFF v. OMB (Trump 230 Executive Order FOIA)
Aaron Mackey

Why Is PayPal Denying Service to Palestinians?

3 months 1 week ago

For many years, Palestinian rights defenders have championed the cause of Palestinians in the occupied territories, who are denied access to PayPal, while Israeli settlers have full access to PayPal products. A recent campaign, led by Palestinian digital rights group 7amleh, calls on PayPal to adhere to its own code of business conduct and ethics by halting its discrimination against residents and citizens of Palestine. 7amleh has also published a detailed report on PayPal’s actions in Palestine.

This is not the first time PayPal has denied service to a vulnerable group; the company routinely cuts off payments to those engaged in sex work or the sale of sexually explicit content, and last year, PayPal division Venmo was sued for blocking payments associated with Islam or Arab nationalities or ethnicities.

Just four months ago, EFF and 21 other rights groups wrote to PayPal, taking the company to task for censoring legal, legitimate transactions, and calling on both PayPal and Venmo to provide more transparency and accountability on account freezes and closures. Our coalition's demands included a call for regular transparency reports, meaningful notice to users, and a timely and meaningful appeals process.  These recommendations align with the Santa Clara Principles on Transparency and Accountability in Content Moderation, developed by free expression advocates and scholars to help companies protect human rights when moderating user-generated content and accounts.

It is unclear why PayPal chose to deny service to Palestinians, but the company is not unique. Many American companies have taken an overly broad interpretation of anti-terrorism statutes and sanctions, denying service to entire groups or geographic areas—rather than narrowly targeting those parties whom they are legally obligated to block. This practice is deeply troubling, causing serious harm to those who rely on digital services for their basic needs.

PayPal is among the most global of payment processors, and for many it is a lifesaver, allowing people to sidestep local banks' extortionate overseas transfer fees and outright prohibitions. PayPal is how many around the world purchase goods and services from abroad, pay freelancers, or send money to family. By denying access to Palestinians, PayPal makes it hard or even impossible to engage in the normal commerce of everyday life.

We call on PayPal to explain their decision to deny services to Palestinians. And we renew our call—and that of our co-signers—for PayPal to review its practices to implement the Santa Clara Principles and permit lawful transactions on its platform, halting its discrimination against marginalized groups.



Jillian C. York

EFF to Tenth Circuit: First Amendment Protects Public School Students’ Off-Campus Social Media Speech

3 months 1 week ago

EFF filed an amicus brief in the U.S. Court of Appeals for the Tenth Circuit in support of public school students’ right to speak while off school grounds or after school hours, including on social media. We argued that Supreme Court precedent makes clear that the First Amendment rarely allows schools to punish students for their off-campus social media speech—including offensive speech.

In this case, C1.G. v. Siegfried, a student and some friends visited a thrift shop on a Friday night. The student took a picture of his friends wearing wigs and hats, including one hat that looked like a foreign military hat from World War II. Intending to be funny, the student posted a picture of his friends with an offensive caption related to violence against Jews to Snapchat (and deleted it a few hours later). The school suspended and eventually expelled the student.

EFF’s brief argued in favor of the expelled student, focusing on the Supreme Court’s strong protection for student speech rights in its decision from this summer in Mahanoy v. B.L. There, the Court explained that three “features” of students’ off-campus speech diminish a school’s authority to regulate student expression. Most powerfully, “from the student speaker’s perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” Mahanoy makes clear that students’ longstanding right to speak on campus except in narrow circumstances, as recognized by the Supreme Court in its 1969 decision in Tinker v. Des Moines, is even stronger off campus—and that includes, as the Mahanoy Court said, “unpopular expression.”

Our brief also urged the appellate court to reject a special rule for social media. The school argued, and the district court agreed, that the uniquely shareable and accessible nature of speech on the internet—that it can easily make its way onto campus—justifies greater school authority over students’ off-campus social media speech. Rejecting this argument is particularly important given that social media is a central means for young people to express themselves, connect with others, and engage in advocacy on issues they care about. Rejecting it also heeds the Supreme Court’s concern about “full 24-hour day” speech regulations.

As of 2018, 95 percent of U.S. teenagers reported that they have access to a smartphone, and 45 percent said that they use the internet “almost constantly.” Students and young people use social media to rally support for political candidates, advocate for racial justice, and organize around issues like gun control, climate change, and more recently COVID-19. For example, when University of Alabama student Zoie Terry became one of the first students in the U.S. to be quarantined, her posts about the experience on TikTok led to important changes in university policies, including medical monitoring of quarantined students.

Students must have an outlet for their expression, free from the censorial eye of public school officials. We hope the Tenth Circuit applies Mahanoy appropriately and overturns the student’s expulsion in this case.

Mukund Rathi

Coalition Against Stalkerware Named J.D. Falk Award Winner for Raising Awareness About and Helping Victims of Malicious Spying Apps

3 months 1 week ago
Award Honors Falk, Antispam Pioneer and a M3AAWG Founding Member

San Francisco—The Electronic Frontier Foundation (EFF) is thrilled to announce that the Coalition Against Stalkerware, co-founded by Cybersecurity Director Eva Galperin and leading antivirus companies and victim support groups, has received the J.D. Falk Award for its work raising awareness, increasing detection, and combating the spread of malware used for stalking and intimate partner abuse.

The award is being presented today by the Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG), a global industry organization working against internet abuses including botnets, malware, spam, viruses, DDoS attacks and other online exploitation. The award honors the work of one of M3AAWG’s founding members, J.D. Falk, an antispam and email security pioneer. It recognizes individuals and organizations improving the internet experience and protecting end users.

The Coalition Against Stalkerware was created in 2019 by ten founding partners in response to the growing threat of commercially available apps and devices that enable someone to covertly spy on another person’s electronic devices. Stalkerware enables abusers to remotely monitor victims’ web searches, text messages, geolocation, photos, voice calls, and more. It affects hundreds of thousands of victims around the world and is often used to facilitate partner surveillance, gender-based and domestic violence, harassment, and sexual abuse.

The Coalition helps those targeted by stalkerware and works with antivirus makers to improve the detection of stalkerware on mobile phones, laptops, and other devices. The Stopstalkerware website, offered in seven different languages, has resources for victims to learn how to protect their devices, as well as find and remove stalkerware once it has been installed. It also offers a global directory of organizations for victims of stalking, domestic violence, online abuse, and more.

Galperin and the Coalition were the driving forces building awareness about the apps among antivirus software makers, whose products often failed to detect them as malicious and warn users. As a result of the Coalition’s work with the antivirus industry, detection rates have grown rapidly, with many of the top antivirus programs catching between 80 and 100 percent of the most prevalent stalkerware strains for Android.

The Coalition also works to expose and hold accountable the companies and individuals behind stalkerware apps. This year it launched basic technical training on stalkerware for support organizations and other stakeholder groups, building the knowledge and skills needed to address this form of tech abuse. Further, the International Criminal Police Organization (INTERPOL) announced its support for the Coalition in April, and is promoting training sessions developed by the Coalition to its 194 member countries to enhance their ability to investigate the use of stalkerware, support victims requesting assistance, and hold perpetrators accountable.

Founded by EFF and Kaspersky, Avira, the European Network for the Work with Perpetrators of Domestic Violence, G Data Cyber Defense, Malwarebytes, NNEDV, NortonLifeLock, Operation Safe Escape, and Weisser Ring, the Coalition has grown into a large international working group with more than 40 partner organizations working in domestic violence survivor support and perpetrator intervention, digital rights advocacy, IT security, and academic research.

The award is being accepted on behalf of the Coalition by Galperin and Tara Hairston, the Coalition’s executive director, in a recorded presentation at M3AAWG’s 53rd general meeting today. The presentation and Q&A session can be viewed here.

“We are grateful to M3AAWG for this prestigious award and for shining a light on the Coalition and its work to put an end to the stalkerware industry,” said Galperin. “We’re proud of the work we’ve done so far, but it’s not over—stalkerware makers are continually finding ways to evade detection by antivirus software, which is why continued cooperation with the security community is so important.”

“Thank you again to M3AAWG for the 2021 J.D. Falk Award,” said Hairston. “Stalkerware and other forms of technology-facilitated gender-based violence remain pervasive in society, and therefore, it is imperative that the anti-abuse and anti-malware communities address interpersonal threats and abusive personas with the same commitment as third-party adversaries. Protecting those most vulnerable among us protects us all.”

For more about stalkerware:
https://www.eff.org/deeplinks/2020/05/watch-eff-cybersecurity-director-eva-galperins-ted-talk-about-stalkerware

For more about the Coalition:
https://stopstalkerware.org/

 

Contact:
Eva Galperin, Director of Cybersecurity, eva@eff.org
Tara Hairston, Executive Director, Coalition Against Stalkerware, tara@stopstalkerware.org
Karen Gullo

What the Facebook Whistleblower Tells Us About Big Tech

3 months 2 weeks ago

Through her leaks and Congressional testimony, Frances Haugen, the “Facebook Whistleblower,” revealed a lot about Facebook's operations. Many of these revelations are things we've long suspected but now have proof of: Facebook focuses on growth—of users and time spent on its platforms—to the exclusion of everything else. For Facebook, growth trumps all, even the health and safety of its most vulnerable users.

In her testimony, Haugen explained that at Facebook, metrics are king. Facebook’s “growth division” works to increase "user engagement," and it succeeds. This is a circular process: Facebook identifies content that users "engage" with and promotes it, leading to more engagement. Facebook's automated systems don't evaluate what is being engaged with; they just identify and rank material by engagement itself. So, according to Haugen, the automated scoring system will rank successful bullying as "engaging" alongside anything else that garners a lot of attention. Politicians who make extreme statements get more engagement, are therefore ranked higher by Facebook, and are therefore seen by more Facebook users.

It’s not like Facebook could discriminate between “good” and “bad” content even if it wanted to. Haugen says the "AI" Facebook uses to evaluate content is bad at parsing posts in English and worse at posts in other languages. Facebook “focused on scale over safety” and “chooses profit over safety.”

These aren't mere priorities—they are reflected in the incentives Facebook offers to its engineers, designers and product managers, whose bonuses are tied to the quantity of “meaningful social interactions” (AKA "engagement") their products generate.

What’s more, Facebook isn't content to milk its existing, aging user base for "engagement." Facebook’s on-again, off-again plan for an "Instagram for kids" is a bid to grow its user base by habituating people to its products at an early age, normalizing this kind of engagement-maximization as an intrinsic element of social interactions, including on play-dates. Haugen doesn’t believe Facebook’s "pausing" of this plan is permanent. She believes they’re just waiting for the heat to die down.

For Facebook, the heat never dies down. The company is always in the middle of one spectacular scandal or another. Haugen’s testimony confirms what we long suspected: Facebook's never-ending crises are the result of a rotten corporate culture and awful priorities.

Haugen told Congress that she thinks Facebook should be reformed, not broken up. But Facebook’s broken system is fueled by a growth-at-any-cost model. The number of Facebook users, and the increasing depth of the data it gathers about them, are its biggest selling points. In other words, Facebook’s badness is inextricably tied to its bigness.

EFF’s position is that when a company’s badness is inseparable from its bigness, it's time to consider breaking that company up.

Much of this latest Facebook controversy concerns Instagram ads—specifically, which ads it shows to young people, and what effect these have on their mental health.

Remember, though:  Facebook didn’t build Instagram. It bought it, explicitly to neutralize a competitor. That raises the question of whether that merger should have been permitted in the first place, and whether it should be unwound today.

Facebook bought Instagram because it was a “threat.” Instagram was growing by attracting the younger users who were leaving Facebook. Facebook's own research showed that young users viewed Facebook as a service for older people. Facebook's dwindling attractiveness caused friction after the company's merger with Instagram, as Facebookers seethed with jealousy of their Instagram colleagues. Facebook's corporate suspicion of Instagram eventually forced Instagram's founders out of the company, leaving everything about Instagram up to Facebook. Facebook’s focus on engagement, its insularity, its need to pull all services under the umbrella of the core Facebook app—all of that is rooted in its growth-at-any-cost mentality.

For most companies, the goal is to maximize profit. Without meaningful checks, that impulse can run amok, leading to unethical, abusive and, eventually, illegal conduct. The Facebook story, with its repeat offenses despite record fines, consent decrees, and market pressure, shows that these checks simply do not do the trick.

By establishing breakups as a serious possibility that companies must consider, we can discipline them, so that they police themselves better, and we can open up space for more creative regulatory solutions. And if that doesn't succeed, we can break them up, creating more competition that will discipline their behavior.

Breakups aren't and never will be the first line of defense for every problem with tech. They can be complicated and expensive, and history has shown that when a breakup is not followed by enforcement, a monopoly’s splintered parts can simply reconstitute themselves. The 1984 breakup of AT&T was the result of nearly two decades of work by the Department of Justice, and it led to a radical diversification of the market. But in the two decades that followed, lax merger review and deregulation allowed the telecom market to concentrate into just a handful of big players once again.

We can and should pursue multiple strategies that will get us to a place where we don’t have to worry every morning about what Facebook is doing to us today.

Breakups are a powerful tool, but for them to be effective, we need other tools, too—a whole toolbox full of ways to keep companies broken up and ensure a healthy supply of innovative competitors. That means enhancing merger review, removing barriers to interoperability, and passing well-crafted privacy laws that protect consumers and level the playing field.

Katharine Trendacosta

Virtual Workshop Tuesday: EFF’s Mitch Stoltz Will Discuss Free Software Movement On Panel Examining Ethics and Open Source Software Licensing

3 months 2 weeks ago
Workshop Looks at Feasibility of Embedding Human Rights Principles Into Licenses

San Francisco—On Tuesday, October 12, at 12 pm PT, EFF Senior Staff Attorney Mitch Stoltz will speak at an online workshop convened by the UCLA Institute for Technology, Law & Policy to explore the future of open source software. Registration for this free online event is here.

The possibility that privacy-intrusive and military technologies may be built using open source programs has raised debate about whether open source software licenses can or should include restrictions on what the software can be used for.

A panel of technology and ethics experts will discuss the origins of the debate on October 12, the first day of a three-day workshop on where the open source software movement is headed. Stoltz will speak on the history of the free software and open source movements, and the practical and ethical issues they have faced. The online workshop is open to the public.

WHAT: The Future of Open Source Workshop (online)

WHEN: Tuesday, October 12, 12 pm PT

WHO: EFF Senior Staff Attorney Mitch Stoltz

To register for the workshop:
https://www.eventbrite.com/e/the-future-of-open-source-tickets-178131986567

Contact: Mitch Stoltz, Senior Staff Attorney, mitch@eff.org
Karen Gullo

Livestream Panel Discussion Tuesday: EFF, Encryption Users Will Discuss Consequences of Apple’s Planned Scanning Tool, Suggest Changes

3 months 2 weeks ago
Photo Scanning Breaks Privacy and Security Promises to Users

San Francisco—On Tuesday, October 12, at 8 am PT, EFF Senior Staff Technologist Erica Portnoy and representatives from more than 10 human, digital, and children’s rights organizations will hold a livestreamed panel discussion about the ramifications of Apple’s plan to add scanning tools to its devices for the people and groups that rely on encrypted platforms for privacy, safety, and security. The two-hour event will be livestreamed here.

EFF, civil liberties and human rights organizations, researchers, and customers have demanded that Apple cancel its plan to install photo-scanning software onto devices. The technology opens a backdoor to users’ everyday conversations and the sensitive, private communications of those who rely on encryption to protect themselves and others. It will create an enormous danger to our privacy and security and give ammunition to authoritarian governments wishing to expand their surveillance.

EFF and organizations representing a diverse array of individuals who rely on encrypted platforms will discuss the impacts of Apple’s scanning tools, what should change about the products, and protective principles for initiatives that aim to police private digital spaces. Participants include the Child Rights International Network, Human Rights Watch, Defend Digital Me, Freedom Network, Hacking//Hustling, Derechos Digitales, Access Now, and others.

WHAT: Panel Discussion: Perspectives on Encryption and Child Safety (Livestream)

WHEN: Tuesday, October 12, 8 am-10 am PT

WHO: EFF Senior Staff Technologist Erica Portnoy

To RSVP: https://supporters.eff.org/civicrm/event/register?reset=1&id=323

To watch the livestream: https://www.youtube.com/watch?v=Acm_Pxc2uy8

For more on the campaign to end the Apple program:
https://www.eff.org/deeplinks/2021/08/apples-plan-think-different-about-encryption-opens-backdoor-your-private-life

Contact: Joe Mullin, Policy Analyst, joe@eff.org
Karen Gullo

Face Recognition Technology: Commonly Used Terms

3 months 2 weeks ago

As face recognition technology evolves at a dizzying speed, new uses and terminologies seem to develop daily. On this page, we attempt to define and disambiguate some of the most commonly used terms.

For more information on government use of face recognition and how to end it in your community, visit EFF’s About Face resource page.

Face detection: Determines whether an image includes a human face. Some government agencies use face detection to aid in obscuring identifiable faces before releasing video footage in response to requests for public records. As a result, many bans on government use of face recognition technology specifically exclude face detection for this purpose, provided that no information about the faces is collected or stored. Generally, this use does not raise significant privacy concerns.

Face recognition: Any collection and processing of faceprints, including both face matching and face analysis (two terms defined below). Face recognition raises significant digital rights concerns.

Faceprinting: A fundamental step in the process of face recognition, faceprinting is the automated analysis and translation of visible characteristics of a face into a unique mathematical representation of that face. Both collection and storage of this information raise privacy and safety concerns.

Face matching: Any comparison of two or more faceprints. This includes face identification, face verification, face clustering, and face tracking (four terms defined below).

  • Face identification: Compares (i) a single faceprint of an unknown person to (ii) a set of faceprints of known people. The goal is to identify the unknown person. Face identification may yield multiple results, sometimes with a "confidence" score indicating how likely the system considers each returned faceprint to match the unknown one.

  • Face verification: Compares (i) a single faceprint of a person seeking verification of their authorization to (ii) one or more faceprints of authorized individuals. The verified person might or might not be identified as a specific person; a system may verify that two faceprints belong to the same person without knowing who that person is. Face verification may be used to unlock a phone or to authorize a purchase. 

  • Face clustering: Compares all the faceprints in a collection of images to one another, in order to group the images containing a particular person or group of people. The clustered people might or might not then be identified as known individuals. For example, each of the people in a library of digital photos (whether a personal album or a police array of everyone at a protest) could have their various pictures automatically clustered into a discrete set.

  • Face tracking: Uses faceprints to follow the movements of a particular person through a physical space covered by one or more surveillance cameras, such as the interior of a store or the exterior sidewalks in a city’s downtown. The tracked person might or might not be identified. The tracking might be real-time or based on historical footage.
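The distinctions among these face matching terms can be sketched in code. The following is a purely illustrative sketch, not any real system's implementation: it treats faceprints as short numeric vectors compared by cosine similarity, whereas real systems use high-dimensional embeddings produced by trained models. All names, vectors, and the threshold here are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two faceprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(candidate, enrolled, threshold=0.95):
    """Face verification: does the candidate faceprint match ANY enrolled
    (authorized) faceprint? Note that no identity is ever determined."""
    return any(cosine_similarity(candidate, e) >= threshold for e in enrolled)

def identify(unknown, known):
    """Face identification: rank known people's faceprints by similarity
    to the unknown faceprint, returning (name, confidence) pairs, best first."""
    scores = [(name, cosine_similarity(unknown, fp)) for name, fp in known.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Invented example faceprints (real ones are model-generated embeddings).
enrolled = [[0.9, 0.1, 0.3], [0.2, 0.8, 0.5]]
candidate = [0.88, 0.12, 0.31]  # close to the first enrolled faceprint
print(verify(candidate, enrolled))

known = {"person_a": [0.9, 0.1, 0.3], "person_b": [0.2, 0.8, 0.5]}
print(identify(candidate, known)[0][0])
```

The key distinction the sketch captures is that `verify` answers only a yes/no authorization question without naming anyone, while `identify` returns a ranked list of candidate identities with confidence scores, which is exactly why identification raises sharper privacy concerns than verification.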

Face analysis, also known as face inference: Any processing of a faceprint, without comparison to another individual’s faceprint, to learn something about the person from whom the faceprint was extracted. Face analysis by itself will not identify or verify a person. Some face analysis purports to draw inferences about a person’s demographics (such as race or gender), emotional or mental state (such as anger), behavioral characteristics, and even criminality.

For more information about the various kinds of face recognition, check out this more detailed post.

Adam Schwartz