Recent Surveillance Revelations, Enduring Latin American Issues: 2023 Year in Review

3 months ago

In 2023, Latin America continued to face challenges in ensuring strong privacy safeguards, proper oversight of surveillance powers, and effective remedies for those arbitrarily affected. Let’s look at a few non-exhaustive examples.

We saw a scandal unveiling that Brazilian intelligence agents monitored the movements of politicians, journalists, lawyers, police officers, and judges. In Perú, leaked documents indicated negotiations between the government and a U.S. vendor of spying technologies. Amidst the Argentinian presidential elections, a thorny surveillance scheme broke in the news. In México, media reports highlighted prosecutors’ controversial data requests targeting public figures. New revelations reinforced that the change of government in Mexico didn’t halt the use of Pegasus to spy on human rights defenders, while the trial over Pegasus abuses during the previous administration has finally begun.

These recent surveillance stories have deep roots in legal and institutional weaknesses, often compounded by an entrenched culture of secrecy. While the challenges cited above are by no means exclusive to Latin America, it remains essential to draw attention to the arbitrary surveillance cases that come to light, enabling broader societal scrutiny.

The Opacity of Intelligence Activities and Privacy Loopholes

First revealed in March, the use of location-tracking software by intelligence forces in Brazil hit the headlines again in October, when a Federal Police investigation led to 25 search warrants and the arrest of two officials. The newspaper O Globo uncovered that, during three years of former President Bolsonaro’s administration, officials of the Brazilian Intelligence Agency (Abin) used First Mile to monitor the movements of up to 10,000 cell phone owners every 12 months without any official protocol. According to O Globo, First Mile, developed by the Israeli company Cognyte, can detect an individual based on the location of devices using 2G, 3G, and 4G networks. By simply entering a person’s phone number, an operator can follow the target’s last known position on a map. The system also provides records of targets’ movements and "real-time alerts" when they change location.

News reports indicate that the system likely exploits Signaling System No. 7 (SS7), an international telecommunication protocol standard that defines how the elements of a telephone network exchange information and control signals. Network operators rely on the SS7 protocol to route telephone calls and SMS messages to the correct recipients. Yet security vulnerabilities in SS7 also enable attackers to find out the location of a target, among other malicious uses. While telecom companies have access to such data as part of their activities and may disclose it in response to law enforcement requests, tools like First Mile allow intelligence and police agents to skip this step.
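
To make the underlying weakness concrete, here is a purely conceptual sketch in Python of the kind of location query SS7 permits. All names and types are our own invention for illustration; no real SS7 stack is involved, and actually reaching a network requires signaling interconnection in the first place.

```python
from dataclasses import dataclass

@dataclass
class RoutingInfoRequest:      # loosely inspired by SS7/MAP location queries
    target_msisdn: str         # the phone number being looked up
    requester: str             # any interconnected signaling peer

@dataclass
class RoutingInfoResponse:
    serving_cell_id: str       # coarse location: the cell tower serving the phone

def home_network_lookup(req: RoutingInfoRequest,
                        subscriber_cells: dict[str, str]) -> RoutingInfoResponse:
    # Notice what is absent: no authentication of the requester and no check
    # of purpose. SS7 assumes every peer is a trustworthy operator, so a
    # legitimate routing query and a surveillance query look identical.
    return RoutingInfoResponse(serving_cell_id=subscriber_cells[req.target_msisdn])

# Example: any peer can ask where a given number currently is (fake data).
resp = home_network_lookup(
    RoutingInfoRequest(target_msisdn="+551155500000", requester="peer-network"),
    subscriber_cells={"+551155500000": "cell-724-10-3941"},
)
print(resp.serving_cell_id)
```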

A high-ranking source at Abin told O Globo that the agency claimed it used the tool for "state security" purposes, arguing there was a “legal limbo” around privacy protections for cell phone metadata. The primary issue the case underscores is the lack of robust regulation and oversight of intelligence activities in Brazil. Second, while Brazilian law indeed lacks strong explicit privacy protections for telephone metadata, access to real-time location data is subject to a higher standard, at least in criminal investigations. Moreover, Brazil has key constitutional data privacy safeguards and case law that provide a solid basis to challenge the arbitrary use of tools like First Mile.

The Good and the Bad Guys Cross Paths

We should not disregard how the absence of proper controls, safeguards, and technical security measures opens the door not only to law enforcement and government abuses, but also to actions by malicious third parties – including those with ties to political powers.

The Titan software used in Mexico also exploits the SS7 protocol, and combines location data with a trove of information drawn from credit bureaus, government, telecom, and other databases. Vice News unveiled that Mexican cartels are allegedly piggybacking on police use of this system to track and target their enemies.

In Titan’s case, Vice News reported that by entering a first and last name, or a phone number, the platform gives access to a person’s Mexican ID, “including address, phone number, a log of calls made and received, a security background check showing if the person has an active or past warrant or has been in prison, credit information, and the option to geolocate the phone.” The piece points out there is an underground market of people selling Titan-enabled intel, with prices that can reach up to USD 9,000 per service.

In turn, the surveillance scheme uncovered in Argentina doesn’t rely on specific software, but it may involve hacking and apparently combines different sources and techniques to spy on persons of interest. The lead character here is a former federal police officer who compiled over 1,000 folders on politicians, judges, journalists, union leaders, and more. Various news reports suggest the former police officer's spying services relate to his possible political ties.

Vulnerabilities on Sale, Rights at Stake

Another critical aspect concerns the current incentives to perpetuate, rather than fix, security vulnerabilities – and governments’ role in them. As we highlighted, “governments must recognize that intelligence agency and law enforcement hostility to device security is dangerous to their own citizens,” and shift their attitude from often facilitating the spread of malicious software to actually supporting security for all of us. Yet we still have a long way to go.

In Perú, La Encerrona reported that a U.S.-based vendor, Duality Alliance, offered spying systems to the Intelligence Division of Perú’s Ministry of Interior (DIGIMIN). According to La Encerrona, leaked documents indicated negotiations during 2021 and 2022. Among the offers, La Encerrona underlines the tool ARPON, which the vendor claimed could intercept WhatsApp messages through a zero-click attack able to circumvent security restrictions between the app and the Android operating system. DIGIMIN assured the news site that it didn’t purchase any of the tools Duality Alliance offered.

Recent Mexican experience shows the challenges of putting an end to the arbitrary use of spyware. Despite major public outcry against security forces’ use of Pegasus to track journalists, human rights defenders, and political opponents, among others, and President López Obrador’s public commitment to halt these abuses, the issue continues. New evidence of the Mexican Armed Forces’ spying during López Obrador’s administration burst into the media in 2023. According to media reports, the military used Pegasus to track the country’s undersecretary for human rights, a human rights defender, and journalists.

The kick-off of the trial in the Mexican Pegasus case is definitely good news. It started in December, already providing key witness insights into the spying operations. According to the Mexican digital rights organization R3D, a trial witness placed former President Enrique Peña Nieto and other high-ranking officials in the chain of command behind the Pegasus infections. As R3D pointed out, this trial must serve as a starting point to investigate the espionage apparatus built in Mexico between public and private actors, which should also encompass the most recent cases.

Recurrent Issues, Urgent Needs

On a final but equally important note, The New York Times reported that Mexico City's Attorney General's Office (AGO) and prosecutors in the state of Colima issued controversial data requests to the Mexican telecom company Telcel targeting politicians and public officials. According to The New York Times, Mexico City's AGO denied having requested that information, although other sources confirmed it. The requests didn't require prior judicial authorization because they fell under a legal exception for kidnapping investigations. R3D highlighted how the case relates to deep-seated issues, such as the obligation for indiscriminate telecom data retention set in Mexican law and the lack of adequate safeguards to prevent and punish arbitrary access to metadata by law enforcement.

Along with R3D and other partners in Latin America, EFF has been furthering the project ¿Quién Defiende Tus Datos? ("Who Defends Your Data?") since 2015 to push for stronger privacy and transparency commitments from Internet Service Providers (ISPs) in the region. In 2023, we released a comparative report building on eight years of findings and challenges. Despite advances, our conclusions show persistent gaps and concerning new trends closely connected to the issues this post describes. Our recommendations reinforce critical milestones that companies and states should embrace to pave the way forward.

During 2023, we continued working to make those recommendations a reality. Among other efforts, we collaborated with partners in Brazil on a draft proposal for ensuring data protection in the context of public security and law enforcement, spoke to Mexican lawmakers about how cybersecurity and strong data privacy rights go hand in hand, and joined policy debates upholding solid data privacy standards. We will keep monitoring the ups and downs of privacy in Latin America, and contribute to turning the recurring lessons from arbitrary surveillance cases into consistent responses toward robust data privacy and security for all.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Veridiana Alimonti

The Last Mile of Encrypting the Web: 2023 Year in Review

3 months ago

At the start of 2023, we sunsetted the HTTPS Everywhere web extension. It encrypted browser communications with websites and made sure users benefited from the protection of HTTPS wherever possible. HTTPS Everywhere ended because all major browsers now offer the functionality to make HTTPS the default. This is due to the grand efforts of the many technologists and advocates involved with Let’s Encrypt, HTTPS Everywhere, and Certbot over the last 10 years.

The immense impact of this “Encrypt the Web” initiative has translated into default “security for everybody,” without each user having to take on the burden of finding out how to enable encryption. The “hacker in a cafe” threat is no longer as dangerous as it once was, when the low technical bar of passive network sniffing of unencrypted public WiFi let bad actors see much of the online activity of people at the next table. Police have to work harder as well to inspect user traffic. While VPNs still serve a purpose, they are no longer necessary just to encrypt your traffic on the web.

“The Last Mile”

Firefox reports that over 80% of the web is encrypted, and Google reports 95% across all of its services. The last 5%-20% exists for several reasons:

  • Some websites are old and abandoned.
  • A small percentage of website owners have intentionally left their sites on HTTP.
  • Some mobile ecosystems do not use HTTPS by default.
  • HTTPS may still be difficult to obtain for accessibility reasons.

To the last point, tools like Certbot could be more accessible. For places where censors might be blocking it, we now have a Tor-accessible .onion address available for certbot.eff.org. (We’ve done the same for eff.org and ssd.eff.org, EFF’s guides for individuals and organizations to protect themselves from surveillance and other security threats.)

Let’s Encrypt made much of this possible by serving as a free and easily supported Certificate Authority (CA) that has issued TLS certificates to 363 million websites. Let’s Encrypt differs from other prominent CAs. For example, Let’s Encrypt from the start encouraged short-lived certificates, valid for 90 days, at a time when other CAs were issuing certificates with lifespans of two years. Shorter lifespans encouraged server administrators to automate, which in turn encouraged encryption that is consistent, agile, and fast. The CA/B Forum, a voluntary consortium of CAs, browser companies, and other partners that maintains public key infrastructure (PKI), adopted ballot SC-063, which allows 10-day certificates and, starting in 2026, will allow 7-day certificates. This pivotal change will make the ecosystem safer, reduce the burden on the parties that manage certificate metadata, encourage automation, and push the ecosystem to encrypt faster, with less overhead and better tools.
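
To see why short lifespans make automation non-negotiable, here is a minimal sketch using only Python’s standard library that reads the validity window of a site’s certificate. The hostname is a placeholder, and this is an illustration we wrote for this post, not EFF tooling.

```python
import socket
import ssl

def cert_lifetime_days(host: str, port: int = 443) -> float:
    """Connect to host over TLS and return its certificate's total lifetime in days."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # validated leaf certificate, as a dict
    # ssl.cert_time_to_seconds parses the "notBefore"/"notAfter" date strings
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return (not_after - not_before) / 86400

if __name__ == "__main__":
    # A Let's Encrypt site will report roughly 90 days; under ballot SC-063,
    # lifetimes can shrink to 10 and eventually 7 days, far too short to renew
    # by hand. ACME clients like Certbot automate exactly this renewal chore.
    print(f"{cert_lifetime_days('example.com'):.0f} days")
```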

Chrome will require CAs in its root store (a trusted list of CAs allowed to secure traffic) to support the Automatic Certificate Management Environment (ACME) protocol. While Google steers this shift with ACME, the protocol is not a Google product or part of the company’s corporate agenda. Rather, ACME is a beneficial protocol that every CA should adopt, even without a “big tech” mandate to do so.

Chrome also expanded its HTTPS-First Mode to all users by default. We are glad to see the continued push for HTTPS by default, without users needing to turn it on themselves. HTTPS “out of the box” is the ideal to strive for, far better than the current fragmented approach of requiring users to activate “enable HTTPS” settings in each major browser.

While this year marks a major victory for the “Encrypt the Web” initiative, we still need to make sure the backbone infrastructure for HTTPS continues to work in the interest of users. So for two years we have been monitoring eIDAS, the European Union’s digital identity framework. Its Article 45 requires browsers to display website identity with a Qualified Web Authentication Certificate (QWAC) issued by a government-mandated Root Certificate Authority. These measures hinder browsers from responding if one of these CAs acts inappropriately or has bad certificate-issuance practices. Final votes on eIDAS will occur in the upcoming weeks. While some of the proposal’s recitals suggest that browsers should be able to respond to a security event, that is not strong enough to outweigh our concerns about the proposal’s most troubling text. This framework enables EU governments to snoop on their residents’ web traffic, which would roll back many of the web security and privacy gains of the past decade to a new, yet unfortunately familiar, fragmented state. We will fight to make sure HTTPS is not set up for failure in the EU.

In the movement to make HTTPS the default for everyone, we also need to be vigilant about how mobile devices handle web traffic. Too often, mobile apps still send cleartext (insecure HTTP). So the next fight for “HTTPS Everywhere” should be HTTPS by default for app requests, without users needing to install a VPN.

The last stretch to 100% encryption will require the web ecosystem to be agile and bold enough to (1) ensure HTTPS wherever possible, and (2) block HTTP by default. Reaching 100% is attainable from here, even if a few people out there intentionally interact with an HTTP-only site once or twice a session.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Alexis Hancock

Protecting Encryption And Privacy In The US: 2023 Year in Review

3 months ago

EFF believes you have the right to have a private conversation–in the physical world, and in the digital world. The best technology to protect that right is end-to-end encryption. 

Governments around the world are working hard to monitor online conversations, far beyond the bounds of traditional targeted law enforcement. 2023 has been a year of unprecedented threats to encryption and privacy. 

In the US, three Senate bills were introduced that, in our view, would discourage, weaken, or create backdoors into encryption technology. With your help, we’ve stopped all three from moving forward–and we’ll continue to do so in the year to come. 

EARN IT, S. 1207

Simply put, EARN IT allows providers of secure communications services to be sued or prosecuted. The excuse for EARN IT is to combat online child abuse. EARN IT would allow state attorneys general to regulate the internet, as long as the stated purpose for their regulation is to protect kids from online exploitation.  

There’s no doubt that the purpose of this bill is to scan user messages, photos, and files. In a Q&A document published last year, the bill sponsors even suggested specific software that could be used to monitor users. If you offer your users encrypted services, the bill specifically allows the fact that you offered encryption to constitute evidence against you in court. 

Constantly scanning every internet user is not a reasonable technique for investigating crimes. What’s more, evidence continues to mount that the scanning software used to detect child abuse does not work and creates false accusations. If EARN IT passes, it will push companies to either stop using encryption services or even create a dangerous backdoor to encryption that would weaken privacy and security for everyone. 

We were disappointed that EARN IT passed through a committee vote, although heartened that more senators expressed concerns with the bill’s effects. EARN IT has not seen a vote on the Senate floor, and we’re continuing to express our strong opposition, together with other groups that are concerned about human rights and privacy. 

STOP CSAM, S. 1199

Possessing or distributing child abuse images is a serious crime. Anyone who has actual knowledge of such images on a service they control is required to notify the National Center for Missing and Exploited Children (a government entity), which then forwards reports to law enforcement agencies. 

That’s why we were surprised and disappointed to see some Senators introduce a bill that falsely suggests this existing law-enforcement framework would work better with the addition of mass surveillance.

The STOP CSAM bill, introduced in April, would create new crimes, allowing those who “knowingly promote or facilitate” the exploitation of children to be prosecuted, based on the very low legal standard of negligence. This is the same legal standard that applies to car accidents and other situations where the defendant did not intend to cause harm. 

At first glance, it may sound good to fight those who “promote” or “facilitate” these crimes, but the bill’s broad terms will likely reach passive conduct like, you guessed it, simply providing an encrypted app. 

STOP CSAM is one more attempt to criminalize and demonize anyone who uses encryption to communicate online. That’s why we’ve opposed it throughout the year. This bill passed out of the Senate Judiciary Committee, but has not received a vote on the Senate floor. 

Cooper Davis, S. 1080

This bill is a misguided attempt to deal with the nation’s fentanyl crisis by turning your smartphone into a DEA informant.  It threatens communications service providers with huge fines if they don’t report to the DEA suspected drug sales on their platforms. 

Faced with massive potential punishments, service providers will inevitably censor a wide variety of communications about drugs–including people’s descriptions of their own experiences, and even attempts to support others who are trying to get social or medical help with an addiction problem.

If S. 1080 were to pass Congress, legislators seeking to persecute certain groups would be eager to expand its framework. In many states, politicians and prosecutors have been vocal about their desire to find and prosecute marijuana users and doctors, people who may use abortion pills, or people who want gender-related medication.

S. 1080 also has no provision to ensure the DEA deletes incorrect reports, does not require proper notice to users who are targeted, and does not require law enforcement to get a warrant for the massive troves of private data that will be sent to it about users. The bill was passed in committee in a 16-5 vote in July, but has not received a vote on the Senate floor.

EFF will continue to oppose proposals that seek to vacuum up our private communications, or push platforms towards censorship of legitimate content. The thousands of messages we sent to Congress opposing these wrongheaded proposals have stopped them from becoming law. We held the line in 2023 with your help–thank you. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Joe Mullin

Corporate Spy Tech and Inequality: 2023 Year in Review

3 months ago

Our personal data, and the ways private companies harvest and monetize it, play an increasingly powerful role in modern life. Throughout 2023, corporations continued to collect our personal data, sell it to governments, use it to draw inferences about us, and exacerbate existing structural inequalities across society.

EFF is fighting back. Earlier this year, we filed comments with the U.S. National Telecommunications and Information Administration addressing the ways that corporate data surveillance practices cause discrimination against people of color, women, and other vulnerable groups. Data privacy legislation, in other words, is civil rights legislation. And we need it now.

In early October, a bad actor claimed to be selling data stolen from the genetic testing service 23andMe. This initially included the display name, birth year, sex, and some genetic ancestry results of one million users of Ashkenazi Jewish descent and another 100,000 users of Chinese descent. By mid-October, this had expanded to another four million accounts. It's still unclear whether the thieves deliberately targeted users based on race or religion. EFF provided guidance to users about how to protect their accounts.

When it comes to corporate data surveillance, users’ incomes can alter their threat models. Lower-income people are often less able to avoid corporate harvesting of their data: some lower-priced technologies collect more data than others, while some come with malicious programs pre-installed. This year, we investigated the low-budget Dragon Touch KidzPad Y88X 10 kids’ tablet, bought from the online vendor Amazon, and revealed that it shipped with malware and pre-installed riskware. Likewise, lower-income people may suffer the most from data breaches, because it costs money and takes considerable time to freeze and monitor credit reports and to obtain identity theft prevention services.

Disparities in whose data is collected by corporations lead to disparities in whose data is sold by corporations to government agencies. As we explained this year, even the U.S. Director of National Intelligence thinks the government should stop buying corporate surveillance data. Structural inequalities affect whose data is purchased by governments. And when government agencies have access to the vast reservoir of personal data that businesses have collected from us, bias is a likely outcome.

This year we’ve also repeatedly blown the whistle on the ways that automakers stockpile data about how we drive—and about where self-driving cars take us. There is an active government and private market for vehicle data, including location data, which is difficult if not impossible to de-identify. Cars can collect information not only about the vehicle itself, but also about what's around the vehicle. Police have seized location data about people attending Black-led protests against police violence and racism. Further, location data can have a disparate impact on certain consumers who may be penalized for living in a certain neighborhood.

Technologies developed by businesses for governments can yield discriminatory results. Take face recognition, for example. Earlier this year, the Government Accountability Office (GAO) published a report highlighting the inadequate or nonexistent rules for how federal agencies use face recognition, underlining what we’ve said over and over again: governments cannot be trusted with this flawed and dangerous technology. The technology all too often does not work, particularly when applied to Black people and women. In February, Porcha Woodruff was arrested by six Detroit police officers on charges of robbery and carjacking after face recognition technology incorrectly matched an eight-year-old image of her (from a police database) with video footage of a suspect. The charges were dropped, and she has since filed a lawsuit against the City of Detroit. Her lawsuit joins two others against the Detroit police over incorrect face recognition matches.

Developments throughout 2023 affirm that we need to reduce the amount of data corporations can collect and sell in order to end the disparate impacts of corporate data processing. EFF has repeatedly called for such privacy legislation. To be effective, it must include effective private enforcement, and prohibit “pay for privacy” schemes that hurt lower-income people. In the U.S., states have been more proactive and more willing to consider such protections, so federal legislation must not preempt state legislation. The pervasive ecosystem of data surveillance is a civil rights problem, and as we head into 2024 we must continue thinking about data surveillance and civil rights as parts of the same problem.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Paige Collings

Sketchy and Dangerous Android Children’s Tablets and TV Set-Top Boxes: 2023 in Review

3 months ago

You may want to save your receipts if you gifted any low-end Android TV set-top boxes or children's tablets to a friend or loved one this holiday season. In a series of investigations this year, EFF researchers confirmed the existence of dangerous malware on set-top boxes manufactured by AllWinner and RockChip, and discovered sketchyware on a tablet marketed for kids from the manufacturer Dragon Touch. 

Though more reputable Android devices are available for watching TV and keeping the little ones occupied, they come with a higher price tag. This means that those who can afford such devices get more assurance of their security and privacy, while those who can only afford cheaper devices from little-known manufacturers are put at greater risk.

The digital divide could not be more apparent. Without a clear warning label, consumers who cannot afford devices from well-known brands such as Apple, Amazon, or Google are being sold devices that come out of the box ready to spy on their children, open their home internet connection for use as a proxy by unknown parties, and expose them to legal risks.

Traditionally, if a device like a vacuum cleaner were found to be defective or dangerous, we would expect resellers to pull it from the department store floor and, to the best of their ability, notify customers who had already brought it into their homes. Yet we observed that the devices in question continued to be sold by online vendors months after news of their defects had circulated widely.

After our investigation of the set-top boxes, we urged the FTC to take action against the vendors who sell devices known to be riddled with malware. Amazon and AliExpress were named in the letter, though more vendors are undoubtedly still selling these devices. Not to spoil the holiday cheer, but if you have received one of these devices, you may want to ask for another gift and have the item refunded.

In the case of the Dragon Touch tablets, it was apparent that this issue goes beyond Android TV boxes and encompasses budget Android devices specifically marketed for children. The tablet we investigated had an outdated pre-installed parental controls app flagged as adware, leftover remnants of malware, and sketchy update software. This issue clearly reaches a wide variety of Android devices, and it should not be left up to consumers to figure it out. Even for “normal” devices on the market, consumers still have to do work just to properly set up devices for their kids and themselves. But there is no complete consumer-side solution for pre-installed malware, and there shouldn’t have to be.

Compared with the products of yesteryear, our “smart” and IoT devices carry a new set of risks to our security and privacy. Yet we feel confident that better digital product testing, along with regulatory oversight, can go a long way toward mitigating these dangers. We applaud efforts such as Mozilla’s Privacy Not Included to catalog just how much our devices protect our data, since as it currently stands it is up to us as consumers to assess the risks ourselves and take appropriate steps.

Bill Budington

Artificial Intelligence and Policing: Year in Review 2023

3 months ago

Machine learning, artificial intelligence, algorithmic decision making: regardless of what you call it (and there is hot debate over that), this technology has been touted as a supposed threat to humanity and to the future of work, as well as the hot new money-making doohickey. One thing is certain: given the amount of data these systems require, law enforcement sees major opportunities, and our civil liberties will suffer the consequences. In one sense, all of the information needed to, for instance, run a self-driving car presents a new opportunity for law enforcement to piggyback on new devices covered in cameras, microphones, and sensors, using them as their eyes and ears on the streets. This is exactly why at least one U.S. Senator has begun sending letters to car manufacturers hoping to get to the bottom of exactly how much data vehicles, including those deemed autonomous or with “self-driving” modes, collect, and who has access to it.

But in another way, the possibility of plugging a vast amount of information into a system and getting automated responses or directives is also rapidly becoming a major problem for innocent people hoping to go unharassed and unsurveilled by police. Much has been written in the last few years about how predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense surveillance and policing, and just plain-old don’t work. One investigation from The Markup and WIRED found: “Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.”

This year, Georgetown Law’s Center on Privacy and Technology also released an incredible resource: Cop Out, a massive and useful investigation into automation in the criminal justice system and the several moments, from policing to parole, when a person’s fate might be decided by a machine.

EFF has long called for a ban on predictive policing and commended cities like Santa Cruz when they took that step. The issue became especially important in recent months when Sound Thinking, the company behind ShotSpotter—an acoustic gunshot detection technology that is rife with problems—was reported to be buying Geolitica, the company behind PredPol, a predictive policing technology known to exacerbate inequalities by directing police to already massively surveilled communities. Sound Thinking acquired the other major predictive policing technology—Hunchlab—in 2018. This consolidation of harmful and flawed technologies means it’s even more critical for cities to move swiftly to ban the harmful tactics of both of these technologies.

In 2024, we’ll continue to monitor the rapid rise of police utilizing machine learning, both by cannibalizing the data other “autonomous” devices require and by creating or contracting their own algorithms to help guide law enforcement and other branches of the criminal justice system. In the coming year, we hope that more cities and states will continue the good work by banning the use of this dangerous technology.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Matthew Guariglia

Electronic Frontier Alliance Comes Back Strong: 2023 in Review

3 months ago

The Electronic Frontier Alliance (EFA) is a loose network of local groups fighting for digital rights in the United States, chaired by EFF. Members’ efforts have been recovering from the pandemic’s limitations on local organizing. More EFA members have been holding in-person workshops and meet-ups that help cement the relationships needed to do their work.

If you’re a member of a local or state group in the United States that fights for digital rights and might want to join, please learn more at our FAQ page. If your group feels like a good fit, please fill out an application here. The Alliance has scores of members, all doing great things this year. This review highlights just a few.

EFA members are organizing for your rights

This year, we caught up with our friends at the Surveillance Technology Oversight Project (S.T.O.P.), a growing organization that has become a force to be reckoned with in New York. S.T.O.P. worked to pass the Public Oversight of Surveillance Technology Act in the New York City Council, and used the law to uncover previously unknown NYPD surveillance contracts. They also won key victories against discriminatory NYPD policies by taking the department to court.

We talked to Portland’s Techno-Activism 3rd Mondays (TA3M), which grew out of a nationwide effort to increase digital rights activism through regular workshops on related topics. Portland’s TA3M hasn’t just survived while most other chapters have disbanded: it has kept up a great pace of trainings and panel discussions, which has helped keep the digital rights movement alive in Portland, even through the pandemic, when these educational events had to move online.

We checked in with CCTV Cambridge on its efforts to close the digital divide through its Digital Navigator program, as well as its advocacy for digital equality. CCTV Cambridge works across all demographics. For example, it implemented a Youth Media Program where teens get paid while developing the skills to become professional media artists. It also runs a Foundational Technology program for elders and others who struggle with technology.

EFA groups kept the conversation going in their communities

Alliance members got together for a podcast interview on Firewalls Don’t Stop Dragons, including EFF, Portland-based PDX Privacy, and Chicago-based Lucy Parsons Labs. It’s a great introduction to the Electronic Frontier Alliance, a couple of its superstar members, and how to get involved.

The Electronic Frontiers track at Dragon*Con, the sci-fi, fantasy, and comic book convention in Atlanta, was produced in coordination with EFA member Electronic Frontiers Georgia and garnered some fantastic conversations. After a few years of hiatus or virtual panels, the digital rights component of the convention came back strong last year and carried on full steam ahead in 2023. Members of EF-Georgia, EFF, and allied organizations presented on a variety of topics.

More of the Dragon*Con panels can be found at EF-Georgia’s special Dragon*Con playlist.

EFF-Austin also moved back to in-person events, including monthly expert talks in Texas and meet-ups for people in their city interested in privacy, security, and related subjects.

New members

This past year, we also had the opportunity to expand the alliance, especially among youth-led groups, by welcoming six impressive new members:  

  • Cyber Security Club @SFSU, San Francisco, CA: The Cyber Security Club is a student group for digital security-minded members of the San Francisco State University community.
  • Encode Justice North Carolina: Encode Justice NC is mostly made up of high school students learning the tools of organizing by focusing on issues like algorithmic decision making and law enforcement surveillance.
  • Encode Justice Oregon: Like the EJ-NC chapter, EJ-Oregon is composed of high school students training their peers to take an active role in local decision-making.
  • MOKANCAN, Lawrence, KS: The Missouri & Kansas Cyber Alliance Network is a growing new group of volunteer activists who have been meeting on privacy and other digital rights in cities near the border of the two states.
  • New York Law School’s Privacy Law Association, New York, NY: The PLA is a group of law students that train and organize around digital privacy and its impact in many fields of the law.
  • Security Club @OSU, Portland, OR: The OSU SEC is a group for security-minded students at Oregon State University that engages in cyber defense training and related digital security education.

Looking forward

As we continue to fight for our digital rights, more groups are connecting to build and maintain a movement for change. In the coming year, a lot of EFA members will be focused on effecting positive social change, whether it’s by training new generations of digital justice activists or preventing attacks on rights to privacy and free expression. 

To learn more about how the EFA works, please check out our FAQ page, and to join the fight, please apply to join us.

Learn more about some of our other EFA members in these past profiles:

 This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

José Martinez

EFF Continues Fight Against Unconstitutional Geofence and Keyword Search Warrants: 2023 Year in Review

3 months ago

EFF continues to fight back against high-tech general warrants that compel companies to search broad swaths of users’ personal data. In 2023, we saw victories and setbacks in a pair of criminal cases challenging the constitutionality of geofence and keyword search warrants.

These types of warrants—mostly directed at Google—cast a dragnet that requires a provider to search its entire reserve of user data to identify either everyone who was in a particular area (geofence) or everyone who searched for a particular term (keyword). Police generally have no identified suspect. Instead, the usual basis for the warrant is to try to find a suspect by searching everyone’s data.
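
To illustrate why these searches are dragnets by construction, here is a simplified sketch using our own hypothetical schema, not any provider’s actual system: the only way to answer a geofence demand is to scan every user’s location records.

```python
from dataclasses import dataclass

@dataclass
class LocationRecord:
    user_id: str
    lat: float
    lon: float
    timestamp: int  # seconds since the epoch

def geofence_hits(all_records: list[LocationRecord],
                  box: tuple[float, float, float, float],
                  start: int, end: int) -> set[str]:
    """Return every user seen inside `box` during [start, end].

    Note the shape of the computation: it iterates over *all* records of
    *all* users, suspects and non-suspects alike. That is the dragnet.
    """
    min_lat, min_lon, max_lat, max_lon = box
    return {
        r.user_id
        for r in all_records
        if start <= r.timestamp <= end
        and min_lat <= r.lat <= max_lat
        and min_lon <= r.lon <= max_lon
    }
```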

EFF has consistently argued these types of warrants lack particularity, are overbroad, and cannot be supported by probable cause. They resemble the unconstitutional “general warrants” at the founding that allowed exploratory rummaging through people’s belongings. 

EFF Helped Argue the First Challenge to a Geofence Warrant at the Appellate Level 

In April, the California Court of Appeal held that a geofence warrant seeking user information on all devices located within several densely-populated areas in Los Angeles violated the Fourth Amendment. It became the first appellate court in the United States to review a geofence warrant. EFF filed an amicus brief and jointly argued the case before the court.

In People v. Meza, the court ruled that the warrant failed to put meaningful restrictions on law enforcement and was overbroad because law enforcement lacked probable cause to identify every person in the large search area. The Los Angeles Sheriff’s Department sought a warrant that would force Google to turn over identifying information for every device with a Google account that was within any of six locations over a five-hour window. The area included large apartment buildings, churches, barber shops, nail salons, medical centers, restaurants, a public library, and a union headquarters.  

Despite ruling the warrant violated the Fourth Amendment, the court refused to suppress the evidence, finding the officers acted in good faith based on a facially valid warrant. The court also unfortunately found that the warrant did not violate California’s landmark Electronic Communications Privacy Act (CalECPA), which requires state warrants for electronic communication information to particularly describe the targeted individuals or accounts “as appropriate and reasonable.” While CalECPA has its own suppression remedy, the court held it only applied when there was a statutory violation, not when the warrant violated the Fourth Amendment alone. This is in clear contradiction to an earlier California geofence case, although that case was at the trial court, not at the Court of Appeal.

EFF Filed Two Briefs in First Big Ruling on Keyword Search Warrants 

In October, the Colorado Supreme Court became the first state supreme court in the country to address the constitutionality of a keyword warrant—a digital dragnet tool that allows law enforcement to identify everyone who searched the internet for a specific term or phrase. In a weak and ultimately confusing opinion, the court upheld the warrant, finding the police relied on it in good faith. EFF filed two amicus briefs and was heavily involved in the case.

In People v. Seymour, the four-justice majority recognized that people have a constitutionally protected privacy interest in their internet search queries and that these queries implicate a person’s free speech rights. Nonetheless, the majority’s reasoning was cursory and at points mistaken. Although the court found that the Colorado constitution protects users’ privacy interest in search queries associated with their IP addresses, it held that the Fourth Amendment does not, due to the third-party doctrine—reasoning that federal courts have held there is no expectation of privacy in IP addresses. We believe this ruling overlooked key facts and recent precedent.

EFF Will Continue to Fight to Convince Courts, Legislatures, and Companies  

EFF plans to make a similar argument in a Pennsylvania case in January challenging a keyword warrant served on Google by the state police.  

EFF has consistently argued in court, to lawmakers, and to tech companies themselves that these general warrants do not comport with the constitution. For example, we have urged Google to resist these warrants, be more transparent about their use, and minimize the data that law enforcement can gain access to. Google appears to be taking some of that advice by limiting its own access to users’ location data. The company recently announced a plan to allow users to store their location data directly on their device and automatically encrypt location data in the cloud—so that even Google can’t read it. 

This year, at least one company has proved it is possible to resist geofence warrants by minimizing data collection. In Apple’s latest transparency report, it notes that it “does not have any data to provide in response to geofence warrants.” 

 

 This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Mario Trujillo

Stupid Patent of the Month: Selfie Contests

3 months ago

Patents are supposed to be an incentive to invent. Too often, they end up being a way to try to claim “ownership” of what should be basic building blocks of human activity, culture, and knowledge. This is especially true of software patents, an area EFF has been speaking out about for more than 20 years now. 

This month’s Stupid Patent, No. 8,655,715, continues the tradition of trying to use software language to capture a monopoly on a basic human cultural activity–in this case, contests. 

A company called Opus One, which does business under the name “Contest Factory,” claims this patent and a related one cover a huge array of online contests. So far, they’ve filed five lawsuits against other companies that help build online contests, and even threatened a small photo company that organizes mostly non-commercial contests online. 

The patents held by Contest Factory are a good illustration of why EFF has been concerned about out-of-control software patents. It’s not just that wrongly issued patents impose a vast tax on the U.S. economy (although they do—one study estimated $29 billion in annual direct costs). The worst software patents also harm people’s rights to express themselves and participate in online culture. Just as we’re free in the physical world to sign documents, sort photos, store and label information, clock in to work, find people to date, or teach foreign languages without paying extortionate fees to others, we must also be free to do so online.

Patenting Contests

Claim 1 of the ‘715 patent has steps that claim: 

  • Receiving, storing, and accessing data on a computer; 
  • Sorting it and generating “contest data”; 
  • Tabulating votes and picking a winner.

The patent also uses other terms for common activities of general purpose computers, such as “transmitting” and “displaying” data. 

In other words, the patent describes everyday use of computers, plus the idea of users participating in a contest. This is a classic abstract idea, and it never should have been eligible for a patent. 
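
To underscore the point, here is a minimal sketch of the claimed steps in ordinary Python. Every name in it is our own hypothetical illustration, not language from the patent; the point is that a few lines of everyday code cover receiving and storing entries, generating “contest data,” tabulating votes, and picking a winner.

```python
def run_contest(entries: dict[str, str], votes: list[str]) -> str:
    """entries maps entrant name -> submitted content; votes is a list of names."""
    # "Receiving, storing, and accessing data on a computer":
    contest_data = {name: {"content": content, "votes": 0}
                    for name, content in entries.items()}
    # "Tabulating votes":
    for ballot in votes:
        if ballot in contest_data:
            contest_data[ballot]["votes"] += 1
    # "Picking a winner":
    return max(contest_data, key=lambda name: contest_data[name]["votes"])

winner = run_contest(
    {"Ana": "selfie1.jpg", "Ben": "selfie2.jpg"},
    ["Ana", "Ben", "Ana"],
)
print(f"Winner: {winner}")  # prints "Winner: Ana"
```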

In a 2017 article in CIO Review, the company acknowledges how incredibly broad its claims are. Contest Factory claims it patented “voting in online contests long before TV contest shows with public voting components made their appearance,” and that it holds patents “associated with online contests and integrating online voting with virtually any type of contest.” 

Lawsuit Over Radio Station Contest 

In its most recent lawsuit, Contest Factory says that a Minneapolis radio station’s “Mother’s Day Giveaway” for a mother/daughter spa day infringed its patent. The radio station asked people to post mother-daughter selfies online and share their entry to collect votes. 

Contest Factory sued Pancake Labs (complaint), the company that helped the radio station put the contest online. Contest Factory also claimed a PBS contest in which viewers created short films and voted on them was an example of infringement. 

For the “Mother’s Day Giveaway” contest, the patent infringement accusation reads in part that, “the executable instructions … cause the generation of a contest and the transmission of the first and second content data to at least one user to view and rate the content.” 

Contest Factory has sued over quite a few internet contests, dating back more than a decade. Its 2016 lawsuits, based on the ‘715 patent and two earlier related patents, were filed against three small online marketing firms: Vancouver-based Strutta, Florida-based Elettro, and California-based Votigo, for contests that go back to 2011. We don’t know how many more companies or online communities have been threatened in all. 

Sharing user-generated content like photos—cooperatively or competitively—is the kind of sharing that the digital world is ideal for. When patent owners demand a toll for these activities, it doesn’t matter whether they’re patent “trolls” or operating companies seeking to extract settlements from competitors. They threaten our freedoms in unacceptable ways. 

The government shouldn’t be issuing patents like these, and it certainly shouldn’t be making them harder to challenge.

  • Opus One d/b/a Contest Factory v. Pancake Labs complaint
  • Opus One d/b/a Contest Factory v. Telescope complaint 
  • Opus One d/b/a Contest Factory v. Elletro complaint 
  • Opus One d/b/a Contest Factory v. Votigo complaint 
  • Opus One d/b/a Contest Factory v. Strutta complaint

Joe Mullin

EFF Urges Supreme Court to Set Standard for How Government Can and Can’t Talk to Social Media Sites About Censoring Users’ Posts

3 months 1 week ago
First Amendment Bars Coercive Censorship Demands But Some Communications Are Permissible

WASHINGTON, DC—The Supreme Court should clarify the standards for determining whether the government permissibly advised or convinced social media companies to censor content from 2020 to 2022, or impermissibly coerced or threatened sites in violation of the First Amendment, the Electronic Frontier Foundation (EFF) said in a brief filed today.

“Government co-option of content moderation systems is a serious threat to freedom of speech,” said EFF Civil Liberties Director David Greene. “But there are clearly times when it is permissible, appropriate, and even good public policy for government agencies and officials to inform, communicate with, attempt to persuade, or even criticize sites—free of coercion—about the user speech they publish.”  

In Murthy v. Missouri, the states of Missouri and Louisiana, along with several individuals, have accused federal agencies and officials of illegal “jawboning”—urging private persons and entities to censor another’s speech. The suit alleges agencies pushed the platforms to censor content about COVID safety measures and vaccines, elections, and Hunter Biden’s laptop, among other issues.

In a brief filed today with the Center for Democracy and Technology (CDT), EFF urged the court to rely on the First Amendment test in its 1963 Bantam Books v. Sullivan ruling to determine whether the government contacts were permissible or impermissible. The test says the Constitution bans not only direct government demands for censorship, but also indirect means, like hinting at legal sanctions to intimidate or coerce a private party into censorship.   

In Bantam, book publishers sued a Rhode Island commission which—in an attempt to suppress “obscene” material—used threats of state prosecution to keep books it considered “objectionable” from being distributed. The commission’s actions amounted to informal censorship, the court found. But the test also recognizes that not every government communication to an intermediary about users’ speech is unconstitutional.

Courts must look at all factors, including whether the contacts are from law enforcement, convey coercion or threats of adverse consequences, or were solicited to gain the government’s input or expertise, EFF and CDT said in their brief. The Supreme Court should provide adequate guidance to help courts, agencies, and private parties distinguish between attempts to convince and attempts to coerce. 

In the Murthy case, a federal judge sided with the states and issued an injunction limiting government contacts with social media platforms. The U.S. Court of Appeals for the Fifth Circuit partially upheld the injunction. Neither court adequately distinguished between improper and proper communications, EFF and CDT said in their brief.  

“This court must independently review the record and make the searching distinctions that the lower courts did not,” said Greene.  

Kate Ruane, Director of CDT's Free Expression Project, agreed that “lower courts' sweeping opinions in Murthy have made the line between permissible government attempts to persuade social media platforms and unconstitutional coercion less clear than ever.” 

“We're asking the Supreme Court to clarify when and how the government can appropriately communicate with platforms while protecting speech, which would allow useful information-sharing to resume, and encourage accountability and transparency in government-platform interactions," said Ruane. 

For the brief: https://www.eff.org/document/murthy-v-missouri-amicus-brief

For EFF’s earlier brief to the Fifth Circuit appeals court: https://www.eff.org/document/missouri-v-biden-amicus-brief 

For more on Murthy v. Missouri: https://www.scotusblog.com/case-files/cases/murthy-v-missouri-2/

Contact: David Greene, Civil Liberties Director, davidg@eff.org

Josh Richman

EFF Has a Guiding Star 🌠

3 months 1 week ago

Do you ever look at something once and then get targeted ads? Have you ever been exposed in some company’s data breach? Have you ever heard a lawmaker push restrictions on technology that they don’t even understand?

To live in the modern world is to interact with technology in ways that are wonderful—and others that are an absolute pain in the butt. It's not fair to trade in your dignity or safety, and that’s why people like you stand with the Electronic Frontier Foundation. Will you keep EFF fighting and boost us up in our Year-End Challenge?

Give a Year-End Donation

Unlock bonus grants in 2023!

Make ANY donation by December 31 and you’ll help unlock bonus grants for EFF! Every single supporter counts toward getting EFF closer to a series of seven Year-End Challenge grants set by EFF’s board of directors. These grants become larger as the number of online rights supporters grows. See the counter.

EFF is Not Messing Around

Tech touches nearly every aspect of the world, and your rights need protection. Recently, EFF members enabled our team to:

EFF has one guiding star: your freedom as a tech user. You can help EFF’s lawyers, activists, policy analysts, and technologists keep advancing toward a brighter future for everybody. Donate to support digital rights today and you’ll help EFF unlock bonus grants before the year ends!

_________________________

EFF is celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator. EFF is a member-supported U.S. 501(c)(3) organization and your donation is tax-deductible as allowed by law. You can even start a convenient monthly donation.

Aaron Jue

The Great Interoperability Convergence: 2023 Year in Review

3 months 1 week ago

It’s easy to feel hopeless about the collapse of the tech sector into a group of monopolistic silos that harvest and exploit our data, hold our communities hostage, gouge us on prices, and steal our wages.

But all over the world and across different government departments, policymakers are converging on a set of muscular, effective solutions to Big Tech dominance.

This convergence spans financial regulators and consumer protection agencies; it’s emerging in Europe, the USA, and the UK. It’s kind of a moment.

How Not To Fix Big Tech 

To understand what’s new in Big Tech regulation, we should talk briefly about what’s old. For many years, policymakers have viewed the problems of Big Tech as tech problems, not big problems. From disinformation to harassment to copyright infringement, the go-to policy response of the past two decades has been to make tech platforms responsible for policing and controlling their users.

This approach starts from the assumption that the problems that occur after hundreds of millions or billions of people are locked inside of a platform’s walled garden are problems of mismanagement, not problems of scale. The thinking goes that the dictators of these platforms aren’t sufficiently benevolent or competent, and they must either be incentivized to do better or be replaced with more suitable autocrats.

This approach has consistently failed - gigantic companies have proved as unperfectable as they are ungovernable. What’s more, deputizing giant companies to police their users has the perverse effect of making them more powerful by creating barriers to entry that clear the field of competitors who might offer superior alternatives for both users and business customers.

Take copyright enforcement: in 2019, the EU passed a rule requiring platforms to intercept and filter all their users’ communications to screen out copyright infringement. These filters are stupendously expensive to build: YouTube’s version, the notorious Content ID, has cost Google more than $100 million to build and maintain. Not only is the result an unnavigable, Kafkaesque nightmare for creators, it also falls far short of what the EU rule requires.

Any law that requires every digital service to mobilize the resources of a trillion-dollar multinational will tend to produce an internet run by trillion-dollar multinationals.

A Better Approach

We think that the biggest problem facing the internet today is bigness itself. Very large platforms are every bit as capable of committing errors in judgment or making trade-offs that harm their users as small platforms. The difference is that when very large platforms make even small errors, millions or even billions of users are in harm’s way.

What’s more, if users are trapped inside these platforms - by high switching costs, data lock-in, or digital rights management - they pay a steep price for seeking out superior alternatives. And in a market dominated by large firms who have locked in their users, investors are unwilling to fund those alternatives.

For EFF, the solution to Big Tech is smaller tech: allowing lots of different kinds of organizations (from startups to user groups to nonprofits to local governments to individual tinkerers) to provide interoperable services that all work together. These smaller platforms are closer to their users, and stand a better chance of parsing out the fine-grained nuances in community moderation. Smaller platforms are easier to regulate, too.

Giving users the choice of more interoperable platforms that are less able to capture their regulators means that if a platform changes the rules in ways you dislike, you can go elsewhere, or simply revert those bad changes with a plugin that makes the system work better for you.

Interoperability From the Top Down and the Bottom Up

Since the earliest days of the internet, interoperability has been a key driver of technological self-determination for users. Sometimes, that interoperability was attained through adherence to formal standards, but often interoperability was hacked into existing, dominant services by upstarts who used careful reverse-engineering, bots, scraping, and other adversarial interoperability techniques to let users leave or modify the products and services they relied on.

Decades of anticompetitive mergers and acquisitions by tech companies have created a highly concentrated internet where companies no longer feel the pressure to interoperate, and where attempts to correct this discrepancy with unauthorized plugins, scraping, or other guerrilla tactics give rise to eye-watering legal risks.

The siloing of the internet is the result of both too little tech regulation and too much.

In failing to block anticompetitive mergers, regulators allowed a few companies to buy their way to near-total dominance, and to use that dominance to prevent other forms of regulation and enforcement on issues like privacy, labor and consumer protection.

Meanwhile, legal restrictions on reverse-engineering and on violating terms of service have all but ended the high-tech liberation tactics of an earlier era.

To make the internet better, policymakers need to make it easier for better services to operate, and for users to switch to those services. Policymakers also need to protect users’ privacy, labor, and consumer rights from abuse by today’s giant services and the smaller services that will come next.

Privacy Without Monopoly, Then and Now

Two years ago, we published Privacy Without Monopoly, a detailed analysis of the data-protection issues associated with a transition from a siloed, monopolized internet to a decentralized, interoperable internet.

Dominant platforms, from Apple to Facebook to Google, point to the many times they step in to protect their users from bad actors, but they are conspicuously silent about the many times their users come to harm because they are targeted by the very companies that own those platforms.

In Privacy Without Monopoly, we argue that it’s possible for internet users to have the benefits of being protected by tech platforms, without the risks of being victimized by them. To get the best of both worlds, governments must withdraw tech platforms’ legal right to block interoperators, while simultaneously creating strong privacy protections for users.

That means that tech companies can still take technical actions to block bad actors from abusing their platforms, but if they want to enlist the law to aid them in doing so, they must show that their adversaries are violating their users’ legal rights to privacy.

Under this system, the final word on which privacy rights a platform’s users are entitled to comes from democratically accountable lawmakers who legislate in public - not from shareholder-accountable executives who make policies behind locked boardroom doors.

Convergence, At Last

This past year has been a very good one for this approach. 2023 saw regulators challenging the market power of the largest tech companies and even beginning the long, slow process of restoring a prudent regime of merger scrutiny.

The global resurgence of long-dormant antitrust enforcement is a welcome development, but at EFF, we think that interoperability, backstopped by privacy and other legal protections, offers a more immediate prospect of relief and protection for users.

That’s why we’ve been so glad to see 2023’s other developments, ones that aim to make it easier for users to leave Big Tech and go somewhere smaller and more responsive to their needs.

In Europe, the Digital Markets Act, passed into law in 2022, has made significant progress towards a regime of mandatory interoperability for the largest platforms. In the USA, the bipartisan AMERICA Act could require ad-tech giants to break into interoperable pieces, a key step towards a more secure economic future for the news industry.

The US Consumer Financial Protection Bureau is advancing a rule to force banks to support interoperable standards to facilitate shopping for a better bank and then switching to it. This rule explicitly takes away incumbents’ power to block new market entrants in the name of protecting users’ privacy. Instead, it establishes bright-line rules restricting what the finance sector may do with users’ data. What’s more, this rule acknowledges the importance of adversarial interoperability, by including a framework for scraping user data on behalf of the user (a tactic with a proven track record for getting users a better deal from their bank).

Finally, in the UK, the long overdue Digital Markets, Competition and Consumers Bill has finally been introduced.  This bill will give the Competition and Markets Authority’s large and exceptionally skilled Digital Markets Unit the enforcement powers it was promised when it was formed in 2021. Among these proposed powers are the ability to impose interoperability mandates on the largest tech companies, something the agency has already investigated in detail.

With lawmakers from different domains and territories all converging on approaches that solve the very real problems of bad platforms by centering user choice and user protections, tech regulation is at a turning point: away from the hopeless task of perfecting Big Tech and towards the necessary work of abolishing Big Tech.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Cory Doctorow

Surveillance and the U.S.-Mexico Border: 2023 Year in Review

3 months 1 week ago

The U.S.-Mexico border continues to be one of the most politicized spaces in the country, with leaders in both political parties supporting massive spending on border security, including technological solutions such as the so-called "virtual wall." We spent the year documenting surveillance technologies at the border and the impacts on civil liberties and human rights of those who live in the borderlands.

In early 2023, EFF staff completed the last of three trips to the U.S.-Mexico border, where we met with the residents, activists, humanitarian organizations, law enforcement officials, and journalists whose work is directly impacted by the expansion of surveillance technology in their communities.

Using information from those trips, as well as from public records, satellite imagery, and exploration in virtual reality, we released a map and dataset of more than 390 surveillance towers installed by Customs and Border Protection (CBP) along the U.S.-Mexico border. Our data serves as a living snapshot of the so-called "virtual wall," from the California coast to the lower tip of Texas. The data also lays the foundation for many types of research ranging from border policy to environmental impacts.

We also published an in-depth report on Plataforma Centinela (Sentinel Platform), an aggressive new surveillance system developed by Chihuahua state officials in collaboration with a notorious Mexican security contractor. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project is unlike anything seen before along the U.S.-Mexico border. The strategy adopts nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more. It also involves a 20-story high-rise in downtown Ciudad Juarez, known as the Torre Centinela (Sentinel Tower), that will serve as the central node of the surveillance operation. We’ll continue to keep a close eye on the development of this surveillance panopticon.

Finally, we weighed in on the dangers of border surveillance on civil liberties by filing an amicus brief in the U.S. Court of Appeals for the Ninth Circuit. The case, Phillips v. U.S. Customs and Border Protection, was filed after a 2019 news report revealed the federal government was conducting surveillance of journalists, lawyers, and activists thought to be associated with the so-called “migrant caravan” coming through Central America and Mexico. The lawsuit argues, among other things, that the agencies collected information on the plaintiffs in violation of their First Amendment rights to free speech and free association, and that the illegally obtained information should be “expunged” or deleted from the agencies’ databases. Unfortunately, both the district court and a three-judge panel of the Ninth Circuit ruled against the plaintiffs. The plaintiffs urged the panel to reconsider, or for the full Ninth Circuit to rehear the case. In our amicus brief, we argued that the plaintiffs have privacy interests in personal information compiled by the government, even when the individual bits of data are available from public sources, and especially when the data collection is facilitated by technology. We also argued that, because the government stored plaintiffs’ personal information in various databases, there is a sufficient risk of future harm due to lax policies on data sharing, abuse, or data breach.

Undoubtedly, the upcoming election will only heighten the focus on border surveillance technologies in 2024. As we’ve seen time and again, increasing surveillance at the border is a bipartisan strategy, and we don’t expect that to change in the new year.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Saira Hussain

2023 Year in Review

3 months 1 week ago

At the end of every year, we look back at the last 12 months and evaluate what has changed for the better (and worse) for digital rights. While we can be frustrated—hello, ongoing attacks on encryption—overall it's always an exhilarating reminder of just how far we've come since EFF was founded over 33 years ago. The scale alone is breathtaking. Digital rights started as a niche, future-focused issue that we would struggle to explain to nontechnical people; now it's deeply embedded in all of our lives.

The legislative, court, and agency fights around the world this year also helped us see and articulate a common thread: the need for a "privacy first" approach to laws and technology innovation. As we wrote in a new white paper aptly entitled "Privacy First: A Better Way to Address Online Harms," many of the ills of today’s internet have a single thing in common: they are built on a business model of corporate surveillance and behavioral advertising. Addressing that problem could help us make great strides on a range of issues, and avoid many of the terrible likely impacts of today's proposed "solutions."

Instead of considering proposals that would censor speech and put children's access to internet resources at the whims of state attorneys general, we could be targeting the root cause of the concern: internet companies' collection, storage, sales, and use of our personal information and activities to feed their algorithms and ad services. Police go straight to tech companies for your data or the data on everyone who was near a certain location.  And that's when they even bother with a court-overseen process, rather than simply issuing a subpoena, showing up and demanding it, or buying data from data brokers. If we restricted what data tech companies could keep and for how long, we could also tackle this problem at the source. Instead of unconstitutional link taxes to save local journalism, laws that attack behavioral advertising--built on collection of data--would break the ad and data monopoly that put journalists at the mercy of Big Tech in the first place.

Concerns about what is feeding AI, social media algorithms, government spying (whether by your own country or another's), online harassment, access to healthcare--so much of this can be better protected if we address privacy first. EFF knows this, and it's why, in 2023, we did things like launch the Tor University Challenge, urge the Supreme Court to recognize that the Fifth Amendment protects you from being forced to give your phone's passcode to police, and work to fix the dangerously flawed UN Cybercrime Treaty. Most recently, we celebrated Google's decision to limit the data collected and kept in its "Location History" as a potentially huge step toward preventing geofence warrants that use Google's storehouse of location data to conduct massive, unconstitutional searches sweeping in many innocent bystanders.

Of course, as much as individuals need more privacy, we also need more transparency, especially from our governments and the big corporations that rule so much of our digital lives. That's why EFF urged the Supreme Court to overturn an order preventing Twitter—now X—from publishing a transparency report with data about what, exactly, government agents have asked the company for. It's why we won an important victory in keeping laws and regulations online and accessible. And it's why we defended the Internet Archive from an attack by major publishers seeking to cripple libraries' ability to give the rest of us access to knowledge into the digital age.

All of that barely scratches the surface of what we've been doing this year. But none of it would be possible without the strong partnership of our members, supporters, and all of you who stood up and took action to build a better future. 

EFF has an annual tradition of writing several blog posts on what we’ve accomplished this year, what we’ve learned, and where we have more to do. We will update this page with new stories about digital rights in 2023 every day between now and the new year.

Cindy Cohn

FTC’s Rite Aid Ruling Rightly Renews Scrutiny of Face Recognition

3 months 1 week ago

The Federal Trade Commission on Tuesday announced action against the pharmacy chain Rite Aid for its use of face recognition technology in hundreds of stores. The regulator found that Rite Aid deployed a massive, error-riddled surveillance program, chose vendors that could not properly safeguard the personal data the chain hoarded, and attempted to keep it all under wraps. Under a proposed settlement, Rite Aid can't operate a face recognition system in any of its stores for five years.

EFF advocates for laws that require companies to get clear, opt-in consent from any person before scanning their faces. Rite Aid's program, as described in the complaint, would violate such laws. The FTC’s action against Rite Aid illustrates many of the problems we have raised about face recognition—including how data collected for face recognition systems is often insufficiently protected, and how systems are often deployed in ways that disproportionately hurt BIPOC communities.

The FTC’s complaint outlines a face recognition system that often relied on "low-quality" images to identify so-called “persons of interest”; the chain instructed staff to ask customers flagged as such to leave its stores.

From the FTC's press release on the ruling:

According to the complaint, Rite Aid contracted with two companies to help create a database of images of individuals—considered to be “persons of interest” because Rite Aid believed they engaged in or attempted to engage in criminal activity at one of its retail locations—along with their names and other information such as any criminal background data. The company collected tens of thousands of images of individuals, many of which were low-quality and came from Rite Aid’s security cameras, employee phone cameras and even news stories, according to the complaint.

Rite Aid's system falsely flagged numerous customers, according to the complaint, including an 11-year-old girl whom employees searched based on a false-positive result. Another unnamed customer quoted in the complaint told Rite Aid, "Before any of your associates approach someone in this manner they should be absolutely sure because the effect that it can [have] on a person could be emotionally damaging.... [E]very black man is not [a] thief nor should they be made to feel like one.”

Even if Rite Aid's face recognition technology had been completely accurate (and it clearly was not), the way the company deployed it was wrong. Rite Aid scanned everyone who came into certain stores and matched them against an internal list. Any company that does this assumes the guilt of everyone who walks in the door. And, as we have pointed out time and again, that assumption of guilt doesn't fall on all customers equally: People of color, who are already historically over-surveilled, are the ones who most often find themselves under new surveillance.
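To make the scale problem concrete, here is a minimal, hypothetical sketch of how this kind of one-to-many matching works. This is not Rite Aid's actual system (its internals are not public); the embedding size, watchlist size, and threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical watchlist: tens of thousands of "person of interest"
# face embeddings, many built from low-quality sources (security-camera
# stills, employee phone photos, news images). All numbers are invented.
watchlist = rng.normal(size=(10_000, 128))
watchlist /= np.linalg.norm(watchlist, axis=1, keepdims=True)

THRESHOLD = 0.3  # illustrative; the operator tunes this trade-off

def scan_visitor(face_embedding: np.ndarray) -> np.ndarray:
    """Compare one visitor against every watchlist entry (one-to-many).

    Every person who walks in the door gets scanned and scored, so even
    a tiny per-comparison error rate produces routine false alarms.
    """
    v = face_embedding / np.linalg.norm(face_embedding)
    scores = watchlist @ v  # cosine similarity against all entries
    return np.flatnonzero(scores >= THRESHOLD)

# An ordinary customer who matches no one should produce zero flags,
# but noisy embeddings make occasional chance matches unavoidable.
visitor = rng.normal(size=128)
print(f"Watchlist entries flagged: {scan_visitor(visitor).size}")
```

The arithmetic is unforgiving: at a false-match rate of just one in 10,000 comparisons, a 10,000-entry watchlist yields roughly one false flag per visitor scanned, and thousands of daily customers turn wrongful stops into a routine event. And as noted above, that burden of false flags did not fall evenly across communities.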

As the FTC explains in its complaint (emphasis added):

"[A]lthough approximately 80 percent of Rite Aid stores are located in plurality-White (i.e., where White people are the single largest group by race or ethnicity) areas, about 60 percent of Rite Aid stores that used facial recognition technology were located in plurality non-White areas. As a result, store patrons in plurality-Black, plurality-Asian, and plurality-Latino areas were more likely to be subjected to and surveilled by Rite Aid’s facial recognition technology."

The FTC's ruling rightly pulls the many problems with facial recognition into the spotlight. It also proposes remedies for the many ways Rite Aid failed: failing to ensure its system was safe and functional, failing to train employees on how to interpret results, and failing to evaluate whether its technology was harming its customers.

We encourage lawmakers to go further. They must enact laws that require businesses to get opt-in consent before collecting or disclosing a person’s biometrics. This will ensure that people can make their own decisions about whether to participate in face recognition systems and know in advance which companies are using them. 

Hayley Tsukayama

Victory: Utah Supreme Court Upholds Right to Refuse to Tell Cops Your Passcode

3 months 1 week ago

Last week, the Utah Supreme Court ruled that prosecutors violated a defendant’s Fifth Amendment privilege against self-incrimination when they presented testimony about his refusal to give police the passcode to his cell phone. In State v. Valdez, the court found that verbally telling police a passcode is “testimonial” under the Fifth Amendment, and that the so-called foregone conclusion exception does not apply to “ordinary testimony” like this. This closely tracks arguments in the amicus brief EFF and the ACLU filed in the case.

The Utah court’s opinion is the latest in a thicket of state supreme court opinions dealing with whether law enforcement agents can compel suspects to disclose or enter their passwords. Last month, EFF supported a petition asking the U.S. Supreme Court to review People v. Sneed, an Illinois Supreme Court opinion that reached a contrary conclusion. As we explained in that brief, courts around the country are struggling to apply Fifth Amendment case law to the context of compelled disclosure and entry of passcodes.

The Fifth Amendment privilege protects suspects from being forced to provide “testimonial” answers to incriminating lines of questioning. So it would seem straightforward that asking “what is your passcode?” should be off limits. Indeed, the Utah Supreme Court had no trouble finding that verbally disclosing a passcode was protected as a “traditionally testimonial communication.” Notably, even this straightforward rule has drawn dissent from the New Jersey Supreme Court. However, many cases—like the Sneed case from Illinois—involve a less clear demand by law enforcement: “tell us your passcode or just enter it.”

Unfortunately, many courts, including Utah, have applied a different standard to entering rather than disclosing a passcode. Under this reasoning, verbally telling police a passcode is explicitly testimonial, whereas entering a passcode is only implicitly testimonial as an “act of production,” comparable to turning over incriminating documents in response to a subpoena. But as we’ve argued, entering a passcode should be treated as purely testimonial in the same way that nodding or shaking your head in response to a question is. More fundamentally, the U.S. Supreme Court has held that even testimonial “acts of production,” like assembling documents in response to a subpoena, are privileged and cannot be compelled without expansive grants of immunity.

A related issue has generated even more confusion: whether police can compel a suspect to enter a passcode because they claim that the testimony it implies is a “foregone conclusion.” The foregone conclusion “exception” stems from a single U.S. Supreme Court case, Fisher v. United States, involving specific tax records—a far cry from a world where we carry our entire life history around on a phone. Nevertheless, prosecutors routinely argue it applies any time the government can show suspects know the passcode to their phones. Even Supreme Court justices like Antonin Scalia and Clarence Thomas have viewed Fisher as a historical outlier, and it should not be the basis of such a dramatic erosion of Fifth Amendment rights.

Thankfully, the Utah Supreme Court held that the foregone conclusion doctrine had no application in a case involving verbal testimony, but it left open the possibility of a different rule in cases involving compelled entry of a passcode. Make no mistake, Valdez is a victory for Utahns’ right to refuse to participate in their own investigation and prosecution. But we will continue to fight to ensure this right is given its full measure across the country.

Related Cases: Andrews v. New Jersey
Andrew Crocker

Does Less Consumer Tracking Lead to Less Fraud?

3 months 1 week ago

Here’s another reason to block digital surveillance: it might reduce financial fraud. That’s the upshot of a small but promising study published as a National Bureau of Economic Research (NBER) working paper, “Consumer Surveillance and Financial Fraud.”

Authors Bo Bian, Michaela Pagel and Huan Tang investigated the relationship between the rollout of Apple’s App Tracking Transparency (ATT) and reports of consumer financial fraud. Many apps can track users across apps or websites owned by other companies. By default, Apple's ATT opted all iPhone users out of tracking, which meant that apps and websites no longer received user identifiers unless they obtained user permission. 

The highlight of the research is that Apple users were less likely to be victims of financial fraud after Apple implemented the App Tracking Transparency policy. The results showed that a 10% increase in the share of Apple users in a particular ZIP code leads to a roughly 3% reduction in financial fraud complaints.

The Methodology 

The authors designed a complicated methodology for this study, but here are the basics for those who don’t have time to tackle the actual paper. 

The authors primarily use the number of financial fraud complaints and the amount of money lost to fraud to track how much fraud is happening. These figures come from the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission (FTC). The researchers used machine learning and keyword searches to narrow the complaints down to those related to financial fraud caused by lax data privacy, as opposed to other types of financial fraud. They concluded that complaints in certain product categories—like credit reporting and debt collection—are most likely to implicate the lack of data privacy.
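As a rough illustration of the keyword-search half of that filtering step, here is a minimal sketch; the terms and sample narratives below are invented placeholders, not the paper's actual keyword list or classifier:

```python
# Illustrative keyword filter over consumer complaint narratives.
# Every term here is a placeholder, not drawn from the actual study.
PRIVACY_FRAUD_TERMS = (
    "identity theft",
    "unauthorized account",
    "stolen information",
    "data breach",
)

def looks_like_privacy_fraud(narrative: str) -> bool:
    """Flag complaints whose text suggests fraud enabled by leaked data."""
    text = narrative.lower()
    return any(term in text for term in PRIVACY_FRAUD_TERMS)

complaints = [
    "Someone opened an unauthorized account in my name.",
    "My monthly statement arrived late.",
]
print([looks_like_privacy_fraud(c) for c in complaints])  # [True, False]
```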

The study used data acquired from a company called Safegraph to determine the share of iPhone users at the ZIP code level. It then estimated the effect of Apple’s ATT on the number of complaints of financial fraud in each ZIP code. The researchers found a noticeable, measurable reduction in complaints for iPhone users after ATT was implemented. They also investigated variation in this reduction across different demographic groups, and found that the effect is stronger for minorities, women, and younger people—suggesting that these groups, which may have been more vulnerable to fraud before, saw a greater increase in protection when Apple turned on ATT.

To test the accuracy and reliability of their results, the researchers employed several methods typically used in statistical analysis, including placebo tests, robustness checks, and Poisson regression. In lay terms, these methods test the results against different assumptions, the potential effects of other factors, alternative specifications, and varying conditions.

These methods help establish causation (as opposed to mere correlation), in part by ruling out other possible causes. Although one can never be 100% sure that a result was caused by something in a regression analysis, these methods are popularly used to reasonably infer causation and the report meticulously applies them. 
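For readers who want to see the shape of such an analysis, here is a minimal, synthetic sketch of a Poisson regression of the kind described above. The panel dimensions, variable names, and simulated effect size are all illustrative assumptions, not the authors' code or data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_zips, n_months = 200, 24  # hypothetical panel dimensions

# One row per ZIP code per month; the iPhone share varies across ZIP
# codes, and ATT "switches on" halfway through the sample period.
df = pd.DataFrame({
    "zip_id": np.repeat(np.arange(n_zips), n_months),
    "month": np.tile(np.arange(n_months), n_zips),
    "iphone_share": np.repeat(rng.uniform(0.2, 0.8, n_zips), n_months),
})
df["post_att"] = (df["month"] >= 12).astype(int)

# Simulate complaint counts with a built-in effect: fraud complaints
# fall after ATT, and fall more where the iPhone share is higher.
log_rate = 2.0 - 0.3 * df["iphone_share"] * df["post_att"]
df["complaints"] = rng.poisson(np.exp(log_rate))

# Poisson regression of counts on the interaction term, which plays
# the role of the paper's headline estimate.
model = smf.poisson("complaints ~ iphone_share * post_att", data=df).fit()
print(model.params["iphone_share:post_att"])  # recovers roughly -0.3
```

A placebo test in this setup would re-run the same regression with the ATT date shifted to a point before the real rollout; finding an "effect" there would warn that something other than ATT is driving the result.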

What This Means 

While the scope of the data is small, this is the first significant research we’ve seen that connects increased privacy with decreased fraud. This should matter to all of us. It reinforces that when companies take steps to protect our privacy, they also help protect us from financial fraud. This is a point we made in our Privacy First whitepaper, which discusses the many harms that a robust privacy system can protect us from.  Lawmakers and regulators should take note.   

In implementing ATT, Apple has proven something EFF has long said: with over 75% of consumers keeping all tracking off as of May 2022 rather than opting in, it’s clear that most consumers want more privacy than they are currently getting through the surveillance business model. Now, with this research, it seems that when they get more privacy, they also get some protection against fraud.

Of course, we are not done pushing Apple or anyone else on stepping up for our privacy. As Professor Zeynep Tufekci noted in a recent NY Times column, “I was happy to see Apple switch the defaults for tracking in 2021, but I’m not happy that it was because of a decision by one powerful company—what oligopoly giveth, oligopoly can taketh away. We didn’t elect Apple’s chief executive, Tim Cook, to be the sovereign of our digital world. He could change his mind.”  

We appreciate Apple for implementing ATT. The initial research indicates that it may have a welcome additional effect for all of us who need both privacy and security against fraud. We’d like to see more research about this connection and, of course, more companies following Apple’s lead.

As a side note, it is important to mention that we are concerned about researchers using data from Safegraph, a company that EFF has criticized for unethical personal data collection and its PR efforts to "research wash" its practices by making that data available for free to academics. The use of this data in several academic research projects speaks to the reach of unethical data brokers as well as to the need to rein them in, both with technical measures like ATT and with robust consumer data privacy legislation.  

However, the use of this data does not undermine the credibility of the research or its conclusions. The iOS share per ZIP code could have been obtained from other legitimate sources, and doing so would have had no effect on the estimated impact of ATT.

Thanks to EFF Intern Muhammad Essa for research and key drafting help with this blog post.

Cindy Cohn

Digital Rights Updates with EFFector 35.16

3 months 1 week ago

Have no fear, it's the final EFFector of the year! Be the digital freedom expert for your family and friends during the holidays by catching up on the latest online rights issues with EFFector 35.16. This issue of our newsletter covers topics including: the surveillance one could be gifting along with smart speakers and other connected gadgets, how to use various Android safety tools to secure your kid's Android device, and a victory announcement—Montana's TikTok ban was ruled unconstitutional by a federal court.

EFFector 35.16 is out now—you can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter below:

Listen on YouTube: "Safe and Private for the Holidays"

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

EFF Joins Forces with 20+ Organizations in the Coalition #MigrarSinVigilancia

3 months 1 week ago

Today, EFF joins more than 25 civil society organizations to launch the Coalition #MigrarSinVigilancia ("To Migrate Without Surveillance"). The Latin American coalition’s aim is to oppose arbitrary and indiscriminate surveillance affecting migrants across the region, and to push for the protection of human rights by safeguarding migrants' privacy and personal data.

On this International Migrants Day (December 18), we join forces with a key group of digital rights and frontline humanitarian organizations to coordinate actions and share resources in pursuit of this significant goal.

Governments use technologies to monitor migrants, asylum seekers, and others moving across borders with growing frequency and intensity. This intensive surveillance is often framed within the concept of "smart borders" as a more humanitarian approach to addressing and streamlining border management, even though its implementation often negatively impacts the migrant population.

EFF has been documenting the magnitude and breadth of such surveillance apparatus, as well as how it grows and impacts communities at the border. We have fought in courts against the arbitrariness of border searches in the U.S. and called out the inherent dangers of amassing migrants' genetic data in law enforcement databases.  

The coalition we launch today stresses that the lack of transparency in surveillance practices and regional government collaboration violates human rights. This opacity is intertwined with the absence of effective safeguards for migrants to know and decide crucial aspects of how authorities collect and process their data.

The Coalition calls on all states in the Americas, as well as companies and organizations providing them with technologies and services for cross-border monitoring, to take several actions:

  1. Safeguard the human rights of migrants, including but not limited to the rights to migrate and seek asylum, the right to not be separated from their families, due process of law, and consent, by protecting their personal data.
  2. Recognize the mental, emotional, and legal impact that surveillance has on migrants and other people on the move.
  3. Ensure human rights safeguards for monitoring and supervising technologies for migration control.
  4. Conduct a human rights impact assessment of already implemented technologies for migration control.
  5. Refrain from using or prohibit technologies for migration control that present inherent or serious human rights harms.
  6. Strengthen efforts to achieve effective remedies for abuses, accountability, and transparency by authorities and the private sector.

We invite you to learn more about the Coalition #MigrarSinVigilancia and the work of the organizations involved, and to stand with us to safeguard data privacy rights of migrants and asylum seekers—rights that are crucial for their ability to safely build new futures.

Veridiana Alimonti

The Surveillance Showdown That Fizzled

3 months 1 week ago

Like the weather rapidly getting colder outside, the fight over renewing, reforming, or sunsetting the mass surveillance power of Section 702 has been put on ice until spring.

In the last week of legislative business before the winter break, Congress was scheduled to consider two very different proposals to reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA): H.R. 6570, the Protect Liberty and End Warrantless Surveillance Act, from the House Judiciary Committee (HJC); and H.R. 6611, the FISA Reform and Reauthorization Act of 2023, from the House Permanent Select Committee on Intelligence (HPSCI). However, as the conversation about how to consider these proposals grew heated, both bills were pulled from the legislative calendar without being rescheduled.

TAKE ACTION

Tell Congress: Defeat HPSCI’s Horrific Surveillance Bill

The legislative authority for Section 702 was set to expire December 31, 2023, though language was added to the National Defense Authorization Act (NDAA) to extend the legislative authority of Section 702 through April 2024. It is disappointing that, despite all of the reported abuses of the Section 702 program, Congress chose to pass a reauthorization bill instead of making the necessary effort to include critical reforms. As advocates for reform, including EFF, said in a letter to Congress in late November, bypassing the discussion around reform by slipping an extension of the law into the defense authorization bill during conference demonstrates a blatant disregard for the civil liberties and civil rights of the American people.

While it is frustrating that Congress ignored the urgent need for significant Section 702 reform before the December 31 deadline, reform advocates should not lose hope. The current stalemate also means that the pro-surveillance hardliners of the intelligence community were not able to jam through their expansion of the program based on the same old scare tactics they’ve used for years. Fortunately, it seems that many members of the House and Senate have heard our message. While renewing any surveillance authority remains a complicated and complex issue, this choice is clear: we continue to urge all Members to oppose the Intelligence Committee’s bill, H.R.6611, the FISA Reform and Reauthorization Act of 2023.

Additionally, in the moments leading up to a possible floor vote, many House members (and some Senators) have made public statements calling for reform. Notably, that list includes the current House Speaker, Mike Johnson, who told Fox News that Section 702 “... was also abused by the FBI, by our own government, over almost 300,000 times between 2020 and 2021, and so the civil liberties of Americans have been jeopardized by that. It must be reformed."

So, while we are disappointed that Congress chose to leave for the holidays without enacting any of these absolutely necessary reforms, we are already making plans to continue this fight in the New Year. We are also grateful for the calls and emails from our members and supporters; these have absolutely made an impact and will be more important than ever in the fight to come. 

TAKE ACTION

Tell Congress: Defeat HPSCI’s Horrific Surveillance Bill

India McKinney