EFF to Court: FOIA Requires ICE to Release Arrest and Deportation Database Records With Privacy Protections

2 weeks ago

The Freedom of Information Act requires U.S. Immigration and Customs Enforcement (ICE) to disclose deidentified data that would enable greater public oversight of the agency while protecting the privacy of immigrants and others, EFF argued in an amicus brief filed last month in federal court.

The case, ACLU v. ICE, centers on a request by the American Civil Liberties Union (ACLU) to obtain data from ICE databases that show how ICE arrests, classifies, detains, and deports individual immigrants. The databases link this information to particular individuals based on a unique identifier, known as an “A-number,” that ICE assigns to people. An A-number ties together the thread of records from each of ICE’s interactions with an individual, offering a look into how the agency targets and treats people over time. However, disclosing someone’s A-number to the public could invade their privacy by linking this immigration history to them.

To get a better picture of ICE’s activities over time without disproportionately invading individuals’ privacy, ACLU requested that the agency replace each A-number with a new, unique identifier in the released records. A federal district court in New York denied ACLU's request, ruling that FOIA did not require ICE to substitute deidentified values for A-numbers. ACLU appealed to the U.S. Court of Appeals for the Second Circuit. 

EFF’s brief argues that ACLU’s proposed solution “is a vital—and sometimes the only—way to protect legitimate privacy concerns while ensuring that FOIA remains a robust tool for transparency and accountability.” EFF’s brief explains that ACLU’s proposal is effectively a form of redaction because it removes the identifying information in each A-number while keeping the “relational information” that connects individual records in ICE’s database.

Courts have long balanced FOIA’s primary goal of transparency with privacy by releasing records in redacted and modified forms. This is especially important for public oversight of the databases that have proliferated and grown at all levels of government. EFF’s brief discusses many examples, such as the Department of Homeland Security’s HART database. This database stores fingerprints, face and iris scans, and other sensitive information on immigrants, and a recent Privacy Impact Assessment found several flaws in its privacy protocols. EFF has filed amicus briefs in other cases requesting government database records in privacy-protecting forms, such as aggregate data.

The brief also describes how many other courts have rightly approved redaction methods for public records even when the redactions modify the underlying data. The California Supreme Court, for example, suggested substituting unique identifiers for license plate numbers in a request, co-litigated by EFF, for records on police use of automated license plate readers. In another context, government agencies often blur video records to prevent identification of people captured in the video while preserving the recording’s context.

When courts apply it properly, FOIA is a powerful tool for the public to protect privacy and serve as a watchdog against government abuse of massive databases. That is why EFF’s brief urged the appellate court to adopt ACLU’s substitution procedure to “ensure that FOIA can help the public understand the scope of the government’s actions without intruding on the privacy of individuals whose data is found in government records systems.”

Aaron Mackey

EFF to Council of Europe: Cross Border Police Surveillance Treaty Must Have Ironclad Safeguards to Protect Individual Rights and Users’ Data

2 weeks ago

This is the third post in a series about recommendations EFF, EDRi, CIPPIC, Derechos Digitales, TEDIC, Karisma Foundation, and other civil society organizations have submitted to the Parliamentary Assembly of the Council of Europe (PACE), which is currently reviewing the Protocol, urging amendments to the text before its final approval in the fall. Read the full series here, here, here, and here.

Governments are on the cusp of adopting a set of additional international rules, which will reshape how cross-border police investigations are conducted. The protocol, referred to by the inauspicious moniker “Second Additional Protocol to the Council of Europe’s Budapest Convention on Cybercrime,” grants law enforcement intrusive new powers while adopting few safeguards for privacy and human rights. 

Many elements of the Protocol are a law enforcement wish list—hardly surprising given that its drafting was largely driven by prosecutorial and law enforcement interests with minimal input from external stakeholders such as civil society groups and independent privacy regulators. As a result, EFF, European Digital Rights, the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic, and other civil society organizations have called on the Parliamentary Assembly of the Council of Europe (PACE), which is currently reviewing the Protocol, to amend the text before its final approval in the fall. 

International law enforcement investigations are becoming increasingly routine, with police forces seeking access to digital evidence stored by service providers around the globe. But in the absence of detailed and readily enforceable international human rights standards, law enforcement authorities around the world are left to decide for themselves the conditions under which they can demand access to personal information. As a result, the lowest common denominator in terms of privacy and other protections will often prevail in cross-border investigations.

Unfortunately, the Council of Europe’s Second Additional Protocol fails to provide the type of detailed and robust safeguards necessary to ensure cross-border investigations embed respect for human rights. Quite to the contrary, the Protocol avoids imposing strong safeguards in an active attempt to entice states with weaker human rights safeguards to sign on. To this end, the Protocol recognizes many mandatory and intrusive police powers, coupled with relatively weak safeguards that are largely optional in nature. The result is a net dilution of privacy and human rights on a global scale.

Cross-border investigations raise difficult questions, as widely varying legal systems clash. How do data protection laws regulate police collection and use of personal data when the collection process spans multiple jurisdictions and legal systems? More specifically, what kinds of legal safeguards from existing human rights and data protection toolkits will govern these and other forms of evidence collection across borders? 

Although data protection laws generally apply to both public and private sectors, many countries have failed to set high standards. Some countries have exempted law enforcement collection and processing of personal data from their data protection laws, while other privacy laws, like the United States’ Privacy Act, do not apply to foreigners (non-U.S. persons who are not legal permanent residents).

Many different kinds of international evidence-gathering and law enforcement cooperation happen today, but the draft Protocol seeks to establish a new international standard that will govern several aspects of policing on a global scale moving forward. We’ve described some of its more intrusive powers in other posts here and here.

The Protocol also includes some human rights and privacy safeguards that apply when states rely on powers outlined by the Protocol. These safeguards are concentrated in Chapter III, Articles 13 and 14 of the Protocol, and their shortcomings will be explored in the remainder of this post.

Article 13 recognizes a general obligation to ensure adequate protections are in place for human rights and civil liberties. The inclusion of this safeguard is important, particularly the obligation to incorporate the principle of proportionality in determining the scope of human rights safeguards. But Article 13 imposes few specific restrictions, and signatories are largely left to determine what protections are “adequate” and “proportionate” on the basis of national law. So, in practice, there are few direct obligations for states to impose specific safeguards in specific investigative contexts. 

Article 14, by contrast, does impose a number of detailed, specific data protection obligations that would apply to any personal information obtained through the Protocol’s new law enforcement powers. However, Article 14’s standards are weak and even these weak safeguards can be circumvented by any two or more signatories by agreement.

Lowering the Bar for Data Protection

The standards set by Article 14 fail to meet modern data protection requirements and, in places, actively seek to undermine emerging international standards. For example, Article 14 obligates parties to ensure that personal data collected through the Protocol’s powers is used in a manner consistent with and relevant to the criminal investigative purposes that prompted its collection. However, contrary to most other data protection instruments, Article 14’s safeguards don’t require that all processing of personal data be “adequate, fair and proportionate” to its objective, while Article 13 requires only “adequate” safeguards and a general respect for the principle of proportionality. “Adequate,” “fair,” and “proportionate” are important, distinct conditions for accessing personal data recognized in many modern data protection laws across the world. Each term imposes different requirements when applied to the collection, use, and disclosure of personal information. The absence of all three terms in the Protocol is troubling, as it signals that fewer, weaker, and outdated conditions for accessing data will be allowed and tolerated.

Article 14’s safeguards are also problematic in that they do not require law enforcement to be subject to oversight that is completely independent. Oversight needs to be impartial and free from direct external influences, but Article 14’s explanatory text (which was never subject to public consultation) allows oversight bodies to be subjected to indirect influence. Under Article 14, for example, many oversight functions can be conducted by government officials housed in the same agencies directing the cross-border investigations being supervised. In addition, while oversight officials must not receive instruction from the state regarding the outcome of a particular case, Article 14 allows states to exert instruction and control over general oversight operations. Article 14 even expressly prohibits Parties from requiring the use of independent regulators to protect the privacy of personal data transferred to other Parties through the Protocol’s investigative powers. All in all, Article 14 fails to meet minimal standards of independent oversight.

Finally, Article 14 of the Protocol also outlines some safeguards for biometric data, but ultimately these are insufficient and undermine a growing international recognition that biometric data is sensitive and requires additional protection in all instances. Biometric data involves mathematical representations of people’s personal features, such as their fingerprints, voiceprints, or iris patterns, and fuels a range of intrusive technologies such as facial recognition. Because of its ability to persistently identify individuals through automated means, biometric information is generally considered sensitive by courts and legislatures at the Council of Europe and around the world.

Despite this growing recognition of the sensitive nature of biometric information, Article 14 prohibits states from applying additional safeguards unless biometric information can be shown to pose an additional risk to privacy. While the Protocol provides little guidance regarding what might constitute this added risk, the result is to provide a narrower scope of protection to biometric data than required by competing laws such as the GDPR, the EU Law Enforcement Directive, and the Council of Europe’s own Convention 108+, each of which recognizes the sensitivity of all biometric data in all contexts. This creates ambiguity in defining the scope of protection applied to bilateral transfers, as many anticipated signatories to the Protocol have also signed Convention 108+ and committed to its higher standards of biometric protection while many others have not. While the explanatory text appears to acknowledge that parties bound by Convention 108+ will need to apply that treaty’s heightened biometric protections, Article 14 also prohibits signatories from applying any additional “generic” data protection conditions to any data transfer between signatories. Moreover, many Parties to the Protocol will not be bound by Convention 108+ and will be prevented from ensuring that the appropriate level of protection is applied when sensitive biometric information is transferred to other jurisdictions by law enforcement.

For all of these reasons, we have asked that the Protocol be amended so that signatories may refuse to apply its most intrusive powers (Articles 6, 7 and 12) when dealing with any other signatory that has not also ratified Convention 108+.

Anyone Can Ignore Even These Safeguards

Even the weak standards applied by Article 14 are effectively optional under the Protocol. Signatories are explicitly permitted to bypass these safeguards through various mechanisms, none of which provide any assurance that adequate privacy protections will be in place.

For example, any two or more signatories can enter into an international data protection agreement that will supersede the safeguards outlined in Article 14. There is no obligation to ensure that superseding agreements provide an adequate level of protection, or even a level comparable to the safeguards actually set out in Article 14. And parties can continue to rely on the Protocol’s law enforcement powers while applying the weaker safeguards established in any such superseding agreement instead of the ones in Article 14. Indeed, Article 14’s explanatory text presents the so-called EU-US ‘Umbrella’ agreement—which provides safeguards and guarantees of lawfulness for data transfers—as a paradigmatic example of a qualifying agreement. But questions have been raised as to whether the Umbrella agreement complies with the EU Charter of Fundamental Rights.

Even if no binding international agreement is in place, Parties can bypass the safeguards in Article 14 by entering into ad-hoc agreements with each other. These agreements need not be formal, comprehensive, binding, or even public. If a joint investigation between law enforcement authorities in multiple jurisdictions is underway, individual frontline police officers can even decide to adopt their own agreements, raising the prospect that privacy safeguards will be sacrificed for investigative convenience. (A more detailed analysis about the Protocol’s joint investigation section will be published soon.)

To ensure that there are at least some baseline safeguards in place, we have therefore recommended that the Protocol be amended to ensure that the specific protections outlined in Article 14 establish a minimum threshold of privacy protection. These may be supplemented with more rigorous protections, but cannot be replaced by weaker standards. 

Limits on Personal Data Transfer Limits

Article 14 also undermines a key safeguard used by independent privacy regulators in cross-border investigations, where there is frequently no direct opportunity to enforce safeguards once personal data has been transferred by law enforcement to another country. Because of this, many data protection regimes require independent regulators to block data transfers to states that fail to provide certain minimum levels of privacy protection. Article 14 places strict limits on data protection authorities’ ability to stop law enforcement from transferring personal data to other jurisdictions, removing a critical tool from the human rights protection toolkit.

Under most legal systems that rely on data transfer restrictions as a privacy safeguard, independent regulators determine whether another state’s legal system provides sufficient safeguards to permit law enforcement transfers. However, Article 14 “deems” that its safeguards (or any safeguards adopted in any international data protection agreement between any two parties to the Protocol) are sufficient to meet any signatory’s national standards, removing this important adjudicative role from independent regulators. The Protocol does allow signatories to suspend data transfers if Article 14’s own safeguards are breached, but only with substantial evidence of a systematic or material breach, and only after engaging in consultation with the suspended country. By setting a restrictive evidentiary standard and obligating the executive branch of a state to enter negotiations prior to suspending transfers, Article 14 further undermines the ability of privacy regulators to ensure an adequate level of data protection. 

To prevent the Protocol from diminishing the important role played by data protection authorities in adjudicating and safeguarding privacy in cross-border law enforcement transfers, we have asked that Article 14’s attempts to limit data transfer restrictions be removed. 

Conclusion

Some have defended the Second Additional Protocol in its current configuration, saying it's needed to forestall efforts that might lead to a more intrusive framework for cross-border policing. Specifically, Russia proposed another international cybercrime treaty, which is gaining support at the United Nations. The UN treaty would address many of the same investigative powers addressed in the Protocol and the Budapest Convention. 

Civil society is raising alarm bells about the Russian-led cybercrime initiative. Human Rights Watch has pointed out, for example, that the UN effort is being led by countries that use cybercrime laws as a cover to crack down on rights. The Council of Europe should be advancing a human rights-respecting alternative to the UN initiative. But the Protocol, as it currently stands, is not it.

PACE has an opportunity to substantially improve human rights protections in the Protocol by recommending to the Committee of Ministers—CoE’s decision-making body—amendments that will fix the technical mistakes in the Protocol and strengthen its privacy and data protection safeguards. With detailed law enforcement powers should come detailed legal safeguards, not a one-sided compromise on privacy and data protection.

Katitza Rodriguez

The Catalog of Carceral Surveillance: Mobile Correctional Facility Robots

2 weeks ago

This post has been updated to provide additional context about patents and patent applications, which are indications of an entity’s interest in a particular product but not proof that the product is currently in development or available for use. You can read more about the role of patents in this series in our post, “The Catalog of Carceral Surveillance: Patents Aren't Products (Yet).”

There are too many people in U.S. prisons. Their guards are overworked, underpaid, and prone to human error, and they require work breaks and food, paychecks and sick days. Plus, they possess flaws that can lead to outbursts of violence, racism, and sexual harassment. Some have taken correctional staff shortages as an indication that we need to rework our criminal justice system. In patent filings, prison technology company Global Tel*Link has an alternative suggestion: Robots.

Federal officials and correctional departments nationwide have cited staff shortages as one of the biggest threats to inmate and employee safety, and as articulated in its patent, GTL has imagined a future where coordinating robots, outfitted with biometric sensors and configured to identify “events of interest,” will help fill the gap.

Notorious for overcharging inmates for phone calls, GTL, like its major competitor Securus, has been dreaming up new offerings since federal efforts to rein in prison phone costs began.

We haven’t seen any Robo-Guards roaming the prison hallways yet, and it’s important to remember that patent filings reflect ideas that may never become tangible products. Given that, we asked GTL whether it was planning to build the robot described in its patent or if it had dropped the idea. GTL declined to comment for this series. Until GTL commits to not building this robot, we’ll remain cautious of cost-saving plans that involve outsourcing “enforcement activities” to Dalek-like apparatuses.

It’s a trash can empowered to decide if you get your commissary items or an electroshock. What could go wrong?

These robots, according to the patent application, could perform many of the same responsibilities as human prison guards with the help of integrated biometric sensors, like fingerprint and iris scanners, microphones, or even face recognition. The robot could be expected to authenticate packages or individuals before escorting them to other parts of a facility. Or it could be programmed to patrol a particular path, collecting and monitoring audio and video for suspicious words or actions.

Outfitted with tools of supposedly “non-lethal force,” the robot could then be configured to deploy, “autonomously” or “at the remote direction of a human operator,” “an electro-shock weapon, a rubber projectile gun, gas, or physical contact by the robot with an inmate.” 

The flowchart that GTL robots would use to make decisions. “Enforcement action” here is a euphemism for potentially lethal disciplinary methods including rubber bullets and electrical shock.

Why we’re concerned about potentially automated enforcement robots

GTL hasn’t explicitly said that it would be integrating artificial intelligence into its robots, but its patent does imagine that the robots would “have intelligent...communications with inmates and guards.” 

According to the patent: 

In one embodiment, mobile correctional facility robot can include hardware and software that allow mobile correctional facility robot to have intelligent and realistic communications with inmates and guards in the correctional facility. This capability of mobile correctional facility robot can be used to take orders from inmates or guards (e.g., orders for goods from the correctional facility commissary), to answer general questions of inmates or guards, or to relay electronic messages between inmates and guards or between guards.

If this prison robot becomes reality and relies on artificial intelligence, we would have many, many concerns. 

Artificial intelligence is only as good as its training data, and in many cases the training data is not very good. This is, in part, because humans themselves are often not very good at making choices and haven’t yet been able to accurately train AI to identify movements, people, and other objects. AI also tends to reflect cultural biases: if the people creating the training data tend to view people of color as more intimidating, that bias will be infused into the AI as well. All of this is why we don’t want to see a system where a robot might be evaluating a scene for a possibly forceful response.

Depending on the data set such an AI is trained on, it might decide that a hug is a threatening gesture, or a fist bump, or a high five. If someone were to trip and fall, the AI might read that as a threatening gesture too. The “Central Controller,” cited multiple times in the GTL patent, can direct multiple robots to work together. GTL has said little about it, but the central controller could be human-operated or use some form of artificial intelligence to coordinate the robots as they monitor an area for bad words or perform an “enforcement action.” In a version of not-quite-AI, the patent also describes giving the robots the power to monitor for “events of interest.” The robots may identify a predetermined spoken word or behavior as a cause for reasonable suspicion, a dangerous way to try to identify a proxy for crime that will also end up capturing a lot of innocuous activity.

There are, of course, many activities that, without context, might seem strange, perhaps even threatening: 

  • sitting in a strange position, 
  • having a non-violent psychological episode, or 
  • holding a threatening broom while performing assigned cleaning duties.

I’m afraid I can’t let you buy peanuts from the commissary, Dave.

 

You will comply.

GTL isn’t the first company to consider making Robo-Patrols a reality. Guard robots created by another company, Knightscope, have been deployed in parks, garages, and other public areas, highlighting other potential problems and risks with these employees: drowning, stairs, blindness, or interference with their LIDAR-based navigation systems.

A Knightscope robot drowns itself after learning it won’t be receiving a paycheck this month. 

While the problems with incarceration in this country are well known, the solution, we think, is not more AI-enabled robots.

An earlier version of this article incorrectly identified the owner of the patent as Securus, rather than GTL. EFF apologizes for the error.

Cooper Quintin

The Catalog of Carceral Surveillance: Exploring the Future of Incarceration Technology

2 weeks 1 day ago

This post has been updated to provide additional context about patents and patent applications, which are indications of an entity’s interest in a particular product but not proof that the product is currently in development or available for use. You can read more about the role of patents in this series in our post, “The Catalog of Carceral Surveillance: Patents Aren't Products (Yet).”

Prison technology and telecom companies such as Securus and Global Tel*Link are already notorious for their ongoing efforts to extract every last penny from incarcerated people and, in the process, destroy any shreds of privacy they have left. These companies now operate in thousands of prisons and jails in every state in the U.S., and they are often the only way for thousands of inmates to call home.

Securus and GTL are more than just prison phone companies, though. In the last several years, both companies have moved to diversify their products, dreaming up new ways to extract money from incarcerated people, violate human rights, and surveil not only prisoners but their families and friends, too.

Over the coming weeks, EFF will be shedding light on some of the patents and technologies these companies have devised. Some are already actively in use, and others may one day be used in prisons across the country. This series is based largely on patents filed or obtained by Securus and GTL. Patents often precede the actual creation of technologies and do not, by themselves, indicate that the products will ever become reality. Some companies never follow through on the ideas in their patents.

By exposing some of the horrifying technologies that Securus and GTL have envisioned in their patents, we hope that most of these ideas never move from concept to reality, and that they remain visible only in obscure patent documents. Indeed, this already appears to be the case: Securus’ parent company has told EFF that it will not build the technology described in one of the patents featured in this series.

But if the companies do end up building the dystopian tech described in their patents, we hope that this series, which also details technologies already in use, leads to greater public scrutiny of the tech being contemplated and actively deployed against incarcerated people and their families.

View the Catalog of Carceral Surveillance below. 

*Do you have experience with these technologies? We'd love to hear from you. Get in touch via info@eff.org.*

Cooper Quintin

The Catalog of Carceral Surveillance: Monitoring Online Purchases of Inmates’ Family and Friends

2 weeks 1 day ago

This post has been updated to provide additional context about patents and patent applications, which are indications of an entity’s interest in a particular product but not proof that the product is currently in development or available for use. You can read more about the role of patents in this series in our post, “The Catalog of Carceral Surveillance: Patents Aren't Products (Yet).”

Prison wardens and detention center administrators have, for years, faced what they believe to be a serious problem. While they can surveil every aspect of the lives of the people imprisoned in their facilities, they typically have no ability to violate the privacy and civil liberties of the friends and family of incarcerated people. Fortunately for prison administrators, Securus, notable for overcharging inmates for the privilege of communication with their loved ones, has done some thinking on the problem. 

Earlier this year, Securus received approval for a patent describing a method of “linking controlled-environment facility residents and associated non-resident telephone numbers to ... e-commerce accounts associated with the captured telephone number” and “information about purchases made by a non-resident associated with the accessed e-commerce account.”   

In other words, the patent imagined a way to capture the phone numbers of everyone a prisoner talks to, including friends and family, and to use that information to scrutinize their e-commerce purchases. (Note: After EFF published this post, Securus told EFF that it will not build the system described in the patent. The company’s full statement is below.)

The patent application provides the following example of how prisons might use this invasive and dangerous technology.

The flowchart submitted with the patent describing how the e-commerce surveillance system would work. 

“[I]nmate call records may show that an inmate made calls to their girlfriend before escaping. Investigators question the girlfriend, but she provides no help. However, investigators employ embodiments of the present systems and methods, using the DTN [Dialed Telephone Number] used by the inmate to call the girlfriend, to find that the girlfriend had purchased skiing equipment through an e-commerce app associate with the DTN and made a reservation through another (or the same) e-commerce app (such as a homestay app) for a house in a remote area in the Colorado mountains. Investigators find the escaped convict and the girlfriend at the house using the data obtained through the invention.”

If implemented, the system described in the patent would enable a massive civil liberties violation and a dangerous expansion of the powers of prison administrators to surveil people not under their carceral control. One may wonder, though, how Securus would implement this patent in practice. Not many people would willingly give a prison administrator access to their Amazon or other online shopping account.

Luckily for wardens, the patent has suggested a workaround: an end-user license agreement.

In its patent application, Securus suggests that prison officials could obtain a waiver from anyone wishing to communicate with an incarcerated person, which would allow prison officials to then root around in the proverbial sock drawer of that person's e-commerce purchases. Understanding that very few people would knowingly agree to this waiver, Securus helpfully suggests: “Such a waiver may be part of an end user agreement associated with use of controlled-environment facility communication services, including, such as by way of example, a controlled-environment facility communications app.” The patent continues: “the waiver may allow the resident's controlled-environment facility, a controlled-environment facility communication vendor, law enforcement, and/or the like, to garner passwords from the non-resident's mobile device, computer, etc. to use for such access.”

It is not clear that this demand would be legally permissible, at least without a warrant. As an initial matter, most e-commerce sites prohibit sharing passwords.  For example, Amazon’s Conditions of Use say "You are responsible for maintaining the confidentiality of your account and password and for restricting access to your account.”  The envisioned waiver agreement does not change the restrictions of the e-commerce site, and could put the non-resident in an impossible position.  Moreover, the system envisioned would give investigators access to the content of communications between the non-resident and the e-commerce site, which requires a warrant. While a service provider can voluntarily disclose with the customer's consent, as envisioned in the patent, the service provider is not being asked to disclose voluntarily, and may not even be initially aware of the access.

In a statement sent to EFF after this post was published, Aventiv, the parent company of Securus, said it would not be developing the technology described in the patent. The company’s statement in full:

We at Aventiv are committed to protecting the civil liberties of all those who use our products.  As a technology provider, we continuously seek to improve and to create new solutions to keep our communities safe.  The patent you reference is 10904297, which was filed in June 2019, prior to our company publicly announcing a multi-year transformation effort. The patent is not currently in development as it was an idea versus a product we will pursue.  Our organization is focused on better serving justice-involved people by making our products more accessible and affordable, investing in free educational and reentry programming, and taking more opportunities—just like this one—to listen to consumers. To ensure there is no additional misunderstanding, we will be abandoning this patent and reviewing all open patents to certify that they align with our transformation efforts.

We agree that Securus should withdraw this odious idea rather than force a Faustian bargain on people who love someone in prison: give us the power to monitor your online purchases, or wait to talk to your loved one only after we let them go.

Cooper Quintin

Video Briefing Wednesday: EFF and Partners Will Deliver to Apple Petitions with 50,000 Signatures Demanding End to Phone Scanning Program

2 weeks 5 days ago
Apple Customers Tell Tech Giant: Don’t Scan Our Phones

San Francisco—On Wednesday, September 8, at 9 am PT, internationally renowned security technologist Bruce Schneier and EFF Policy Analyst Joe Mullin will speak on a panel with digital rights activists delivering petitions with more than 50,000 signatures calling on Apple to cancel its iPhone surveillance software program. The briefing will be held via Zoom.

Apple’s announcement last month that it plans to install two scanning systems on all of its phones was a disappointment that stands to shatter the tech giant’s credibility on protecting users’ privacy. The iPhone scanning harms privacy for all iCloud photo users, continuously scanning user photos to compare them to a secret government-created database of child abuse images. The parental notification scanner uses on-device machine learning to scan messages, then informs a third party, which breaks the promise of end-to-end encryption.

Acknowledging the outcry by customers and activists against the program, Apple said it’s gathering more feedback and making improvements before launching the scanning features. This does not go far enough. The petitions call on Apple to abandon its surveillance plan, which goes against the company’s long-standing commitment to privacy and security, as well as its history of rejecting backdoors to access content on our phones. EFF, Fight for the Future, and OpenMedia gathered signatures for the petitions that will be emailed to Apple on September 8. EFF is one of 90 organizations that signed on to a letter urging Apple CEO Tim Cook to stop the company’s plans to weaken privacy and security on Apple’s iPhones and other products.

Schneier and Mullin will discuss how Apple’s program opens the door to other surveillance. It will give ammunition to authoritarian governments wishing to expand surveillance and censorship.

WHAT:
Don’t Scan our Phones Petitions to Apple

WHEN:
Wednesday, September 8, 9 am PT

WHO:
Bruce Schneier, Security Technologist
Caitlin Seeley George, Director of Campaigns and Operations, Fight for the Future
Joe Mullin, Policy Analyst, Electronic Frontier Foundation
Matt Hatfield, Director of Campaigns, OpenMedia

RSVP for Live Zoom Link:
https://us02web.zoom.us/meeting/register/tZYvduytrD0vHNM122yw2kAAqnfyk9EQZpdg

For more on Apple’s phone scanning:
https://www.eff.org/deeplinks/2021/08/apples-plan-think-different-about-encryption-opens-backdoor-your-private-life
https://www.eff.org/deeplinks/2021/08/if-you-build-it-they-will-come-apple-has-opened-backdoor-increased-surveillance

Contact:
Joe Mullin, Policy Analyst, Electronic Frontier Foundation, joe@eff.org
Caitlin Seeley George, Director of Campaigns and Operations, Fight for the Future, cseeleygeorge@fightforthefuture.org
Matt Hatfield, Director of Campaigns, OpenMedia, matt@openmedia.org
Karen Gullo

Delays Aren't Good Enough—Apple Must Abandon Its Surveillance Plans

2 weeks 5 days ago

Apple announced today that it would “take additional time over the coming months to collect input and make improvements” to a program that will weaken privacy and security on iPhones and other products. EFF is pleased Apple is now listening to the concerns of customers, researchers, civil liberties organizations, human rights activists, LGBTQ people, youth representatives, and other groups, about the dangers posed by its phone scanning tools. But the company must go further than just listening, and drop its plans to put a backdoor into its encryption entirely.

JOIN THE NATIONWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES

The features Apple announced a month ago, intended to help protect children, would create an infrastructure that is all too easy to redirect to greater surveillance and censorship. These features would create an enormous danger to iPhone users’ privacy and security, offering authoritarian governments a new mass surveillance system to spy on citizens. They also put already vulnerable kids at risk, especially LGBTQ youth, and create serious potential for danger to children in abusive households.

The responses to Apple’s plans have been damning: over 90 organizations across the globe have urged the company not to implement them, for fear that they would lead to the censoring of protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children. This week, EFF’s petition to Apple demanding they abandon their plans reached 25,000 signatures. This is in addition to other petitions by groups such as Fight for the Future and OpenMedia, totaling well over 50,000 signatures. The enormous coalition that has spoken out will continue to demand that user phones—both their messages and their photos—be protected, and that the company maintain its promise to provide real privacy to its users. 


Cindy Cohn

Without Changes, Council of Europe’s Draft Police Surveillance Treaty is a Pernicious Influence on Latam Legal Privacy Frameworks

2 weeks 5 days ago

This is the second post in a series about recommendations EFF, European Digital Rights, the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic, and other civil society organizations have submitted to the Parliamentary Assembly of the Council of Europe (PACE), which is currently reviewing the Protocol, urging amendments to the text before its final approval in the fall.

The Council of Europe (CoE) is on track to approve the Second Additional Protocol to the Budapest Cybercrime Convention, which will set new invasive international rules for law enforcement access to user data and cooperation between States conducting criminal investigations. In our recent joint civil society submission to the CoE’s Parliamentary Assembly, we recommended 20 solid amendments to preserve the Protocol’s objective—facilitating efficient and timely cross-border investigations between countries with varying legal systems—while embedding a much-needed baseline to safeguard human rights. In this post, the second in a series about our recommendations, we examine how the Protocol’s current text threatens privacy rights in Latin America, a region that faces deeper challenges than many European countries in fulfilling human rights safeguards and upholding the rule of law.

Article 7 of the Protocol is among the most troubling provisions, raising privacy concerns about cross-border police access to subscriber data. As we have written, Article 7 establishes procedures for law enforcement in one country to request access to subscriber data directly from service providers located in another country under the requesting country’s legal standards. This can create unjustifiable asymmetries in national law: a foreign authority may be given a more permissive, less privacy-protective legal basis for accessing subscriber data than the receiving country’s own law grants to its local law enforcement agencies.

Article 7 focuses on authorizing police access to subscriber data. Why does subscriber data matter? Your IP address can tell authorities what websites you visit and who you communicate with. It can reveal otherwise anonymous online identities, your social networking contacts and, at times, even your approximate physical location. Police can request your name and other subscriber data to link your identity to your online activity, and that information can be used to create a detailed police profile of your daily habits.

When and How Cross-Border Police Direct Cooperation Rules Will Perniciously Affect Latin American Countries

We see at least two ways Article 7 could perniciously affect Latam frameworks for lawful access to communications data in criminal investigations. First, the provision can drive down standards in the region for accessing subscriber information (and unveiling a user’s identity). Second, it can potentially export globally a broader definition of what constitutes “subscriber information,” expanding the categories of communications data encompassed by a third-class protection standard. All in all, Article 7 contains serious flaws that should be fixed before it can serve as a robust, rights-protective model to pursue and endorse.

With CoE's final adoption of the draft Protocol, countries in Latin America that are already parties to the original 2001 Budapest Convention will be able to ratify or accede to the Second Protocol. To date, those countries are Argentina, Chile, Costa Rica, Colombia, Dominican Republic, Panama, Paraguay, and Peru. Brazil and Mexico were invited to become parties and currently act as observers. The Budapest Convention, the first international treaty addressing internet and computer crime by harmonizing national laws and increasing cooperation among nations, has been influential in the region, acting as a model for regulating cybercrime and the production of electronic evidence, even for countries that are not parties to the Convention. As many law enforcement authorities want access to potential electronic evidence across borders, Latin American countries will likely seek accession to the Protocol because of its cooperation rules. But if the final text passes without our recommended amendments, the Protocol will encourage Parties to reinforce the weaker privacy standards already in place in different Latam countries instead of fostering a growing trend in other nations in the region, where domestic laws or court judgments have provided stronger human rights protections.

That’s because of another concerning mandate in Article 7: in countries with laws that prevent service providers from voluntarily responding to subscriber data requests without appropriate safeguards—such as a reasonable-grounds requirement and/or a court order—Article 7 requires that these legal “impediments” be removed for cross-border requests. Countries with higher standards are allowed to reserve the right not to abide by Article 7, but only at the time of signature, ratification, or approval, and not at a later stage. This means that in the future, Parties will be stuck with the inherent flaws in Article 7, and will be unable to designate Article 8—another, slightly more privacy-protective provision in the Protocol for getting data across borders—as the sole means of accessing some or all types of subscriber data, even if their legal systems, because of new laws or court decisions, eventually recognize additional safeguards for subscriber information.

Moreover, although the Protocol stipulates important data protection safeguards, its current text contains provisions that will allow State parties to bypass them (as we will further explain in the third post of this series).

Levelling Down Subscriber Information Protections

Countries in the region have adopted varying degrees of privacy safeguards in criminal investigations. Mexico's legal framework has good standards, at least on the books, requiring judicial authorization for disclosing stored communications data, including subscriber information, and calling for authorities to specify targets and time periods as well as justify the need for the information sought. In Brazil, authorities with express legal power to access internet users’ subscriber data (dados cadastrais, in Portuguese) aren’t required to obtain a warrant to do so. Authorities’ direct requests to service providers must, however, indicate the explicit legal basis for the request and must specify the individuals whose information is being sought (generic and non-specific collective requests are prohibited).

But Brazilian police agencies dispute that direct requests are authorized only in certain legally specified cases and push for a broader interpretation of their powers. The National Association of Mobile Service Providers (ACEL) went to Brazil's Supreme Court to assert that users have constitutional privacy protections when the government requests communications data, including subscriber information. But with the case still pending in court, a proposal to reform the country's Criminal Procedure Code is looking to side with law enforcement by generally authorizing police and prosecutors to directly request subscriber data from service providers.

This push to allow law enforcement agents to access subscriber data without a prior court order reflects bad practices adopted in some Latin American countries like Panama, Paraguay, and Colombia. In Colombia, a simple administrative resolution sets out that telecommunications service providers must allow authorities to remotely connect with their systems to obtain user information. Other countries, like Argentina, do not have legal rules or case law specifically addressing law enforcement access to subscriber information.

The Protocol’s Article 7 rules for service providers' direct cooperation with law enforcement align with the region’s weaker privacy standards. They also hinder companies’ best-practice commitments to interpret local laws in the way that provides the most privacy protection for users. In collaboration with EFF, leading digital rights groups in Latin America and Spain have been pushing companies to make greater commitments on that front. Who Defends Your Data assessments, inspired by EFF's Who Has Your Back project, have encouraged companies to improve their privacy practices in recent years, demonstrating that local privacy laws should be the floor, and not the ceiling, for companies' efforts to support users’ fundamental rights.

For example, Chilean ISPs have adopted best practices to require a judicial order before handing over users’ information (see GTD's and Claro's law enforcement guidelines) and to comply only with individualized personal data requests (in addition to Claro, see Entel's guidelines). Chilean law does not explicitly create an artificial distinction among different types of communications data; instead, the country’s Criminal Procedure Code allows a more protective standard by requiring a prior warrant in all proceedings that affect, deprive, or restrict an accused person’s or a third party’s constitutional privacy rights. Since 2017, Derechos Digitales’ Who Defends Your Data reports have been calling on Chilean companies to commit to the most protective interpretation of legal standards concerning communications data disclosures, including subscriber data.

In early 2020, Chile's Prosecutor’s Office sought to obtain all mobile phone numbers that had connected to antennas in Santiago’s subway stations, where fires marked the beginning of the country's 2019 social uprising. By obtaining the mobile phone numbers, it would be possible to identify their owners. Most of the ISPs did not comply with the prosecutor’s direct request without a judicial examination. This case is a clear demonstration of how subscriber information, which unveils a user’s identity linked to specific activities, can provide sensitive details of individuals’ daily lives.

In our submission, we recommend removing Article 7, since it erodes privacy standards even where appropriate protections already exist. This amendment would permit Article 8, mentioned above, to become the primary legal basis by which subscriber data is accessed in cross-border contexts. Article 8 authorizes the requesting authority to submit a production order to the receiving national authority so it can compel local service providers to produce stored subscriber data and “traffic data.” Even though Article 8 could also benefit from additional safeguards, such as a prior judicial authorization standard, it provides stronger protections than Article 7. Article 8 requires the involvement of the receiving Party’s national authorities, which can, applying standards contained in their own national laws, compel the local service provider located in their territory to produce subscriber data.

Broadening the Scope of Third-Class Protection for Subscriber Information

We wrote about the “second-class” protection still granted to metadata in the region. Latam domestic privacy laws often treat metadata as less worthy of protection compared to the contents of a communication. The Budapest Convention has always promoted the distinction between “traffic data” (equivalent to "metadata") and “subscriber information,” and defines them separately. The Protocol uses this distinction to incorporate a lower level of protection for subscriber information in the context of cross-border requests. But as our 13 Principles on the application of human rights to communications surveillance state, these formalistic categories of data (“content,” “subscriber information,” or “metadata”) are no longer appropriate for measuring how intrusive communications surveillance is into individuals’ private lives and associations. While it has long been agreed that communications content deserves significant protection in law because of its capability to reveal sensitive information, it is now clear that other information arising from communications, including subscriber data and metadata, may reveal deeply sensitive aspects about an individual, and thus deserves similarly robust protections.

Unfortunately, the Convention’s broad definition of subscriber information, which includes IP addresses, exacerbates the Protocol’s callous treatment of this category of information, giving it third-class treatment.

That definition goes beyond, for example, the Brazilian legal definition of subscriber data (dados cadastrais). In Brazil, IP addresses are considered part of connection and application logs, which may be disclosed only with prior judicial authorization—without the exception for direct requests, referred to above, that may apply to subscriber data. As the Protocol’s Explanatory Report underlines, IP address-related information and other access numbers may be treated as traffic data in some countries, which is why the Second Additional Protocol (Article 7, paragraph 9.b) allows Parties to reserve the right not to apply Article 7 to certain types of access numbers.

However, the reservation in Article 7, paragraph 9.b is only possible when disclosing those access numbers through direct cross-border cooperation “would be inconsistent with the fundamental principles of [the] domestic legal system.” But in many Latam legal systems, requirements for judicial control and/or reasonable grounds for access to communications data aren't clearly spelled out. They often rely on legislation that does not clearly distinguish types of information, case law explicitly addressing only telephone communications, or protective interpretations fostered by companies’ best practices. This situation could not only hamper the use of the reservation clause when countries eventually sign the Protocol, but may also function as a tool for spreading a general understanding of the scope of “subscriber information,” conveniently served with third-class protection standards.

Conclusion

In their landmark ruling affirming data protection as a fundamental right under the country’s Constitution, Brazilian Supreme Court justices pointed out how changes in our technological landscape demand more cautious treatment of subscriber information. Justice Rosa Weber recalled public telephone directories that contained people’s names, telephone numbers, and addresses, asserting that “what could be done from the publicization of such personal data [a few decades ago] is not comparable to what can be done at the current technological level, where powerful data processing, cross-referencing and filtering technologies allow the formation of extremely detailed individual profiles.” Also mentioning public telephone directories, Justice Cármen Lúcia went as far as to say “this world is over!”—referring to how personal information can now be gathered and analyzed to reveal details of our personal lives.

Article 7 of the Second Protocol is way out of step with the realities of how today’s technology can be used to threaten privacy, relying on an outdated and incorrect assumption, put forward in the Protocol’s Explanatory Report, that subscriber information “does not allow precise conclusions concerning the private lives and daily lives of individuals concerned.”

We hope that CoE’s Parliamentary Assembly removes Article 7 in its entirety from the text of the Protocol, allowing Article 8 to form the primary basis by which user information is disclosed in cross-border contexts. This would allow cross-border cooperation in accessing people’s private information to properly align with advancements in privacy protections being made in national law. That will help to avoid the drift towards third-class protection for user information that can unveil people’s identities and link them to specific online activities. Alternatively, if the Parliamentary Assembly retains Article 7, it must be amended to prevent foreign efforts to sidestep domestic safeguards when seeking access to user data.

The Assembly has the opportunity to ensure respect for human rights in cross-border police investigations. Improving the Protocol’s safeguards will carry weight with stakeholders at the national level and influence their decisions to champion, instead of discard, proper privacy safeguards. CoE’s international rules should serve to tip the scale in favor of protecting fundamental rights instead of embracing surveillance tactics sorely lacking in human rights protections.

Read more on this topic:

EFF to Council of Europe: Flawed Cross Border Police Surveillance Treaty Needs Fixing—Here Are Our Recommendations to Strengthen Privacy and Data Protections Across the World

Joint Civil Society Comment to the Parliamentary Assembly of the Council of Europe (PACE) on the Second Additional Protocol to the Cybercrime Convention (CETS 185) 

Council of Europe’s Actions Belie its Pledges to Involve Civil Society in Development of Cross Border Police Powers Treaty

Global Law Enforcement Convention Weakens Privacy & Human Rights

Joint Civil Society letter for the 6th round of consultation on the Cybercrime Protocol on the first complete draft of the Protocol



Veridiana Alimonti

Introducing “apkeep,” EFF Threat Lab’s new APK Downloader

2 weeks 6 days ago

To track state-sponsored malware and combat the stalkerware of abusive partners, you need tools. Safe, reliable, and fast tools. That’s why EFF’s Threat Lab is proud to announce our very own tool to download Android APK files, apkeep. It enables users to download a single Android APK or a batch of APKs directly from the command line—either from the Google Play Store (with Google credentials) or from a third party that mirrors the Play Store apps (no credentials needed).

Written in async Rust, this tool prioritizes simplicity of use, memory safety, reliability, and speed. It has also been compiled for a number of architectures and platforms, including Android’s armv7 and aarch64, so you can download apps directly on an Android device using Termux. It is available right now for you to use.

In the future, we hope to expand apkeep’s functionality by adding support for the Amazon Appstore, allowing downloads of older app versions, and adding additional architectures.

We are proud to give back to the pool of tools that the application security community has created and that we use every day. We hope our own contribution will provide a useful addition to the toolbox.

Further Details

Examples

The simplest example is to download a single APK to the current directory:

apkeep -a com.instagram.android .

This downloads from the default source, APKPure, which does not require credentials. To download directly from the Google Play Store:

apkeep -a com.instagram.android -d GooglePlay -u 'someone@gmail.com' -p somepass .

Refer to USAGE to download multiple APKs in a single run.

Specify a CSV file or individual app ID

You can either specify a CSV file which lists the apps to download, or an individual app ID. If you specify a CSV file and the app ID is not specified by the first column, you’ll have to use the --field option as well. If you have a simple file with one app ID per line, you can just treat it as a CSV with a single field.
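
For example, assuming the CSV file is passed with a flag such as -c (this flag name and the file name are illustrative; check USAGE for the exact options), downloading the apps listed in the second column of a hypothetical apps.csv might look like this:

apkeep -c apps.csv --field 2 .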

Bill Budington

New Texas Abortion Law Likely to Unleash a Torrent of Lawsuits Against Online Education, Advocacy and Other Speech

2 weeks 6 days ago

In addition to the drastic restrictions it places on a woman’s reproductive and medical care rights, the new Texas abortion law, SB8, will have devastating effects on online speech. 

The law creates a cadre of bounty hunters who can use the courts to punish and silence anyone whose online advocacy, education, and other speech about abortion draws their ire. It will undoubtedly lead to a torrent of private lawsuits against online speakers who publish information about abortion rights and access in Texas, with little regard for the merits of those lawsuits or the First Amendment protections accorded to the speech. Individuals and organizations providing basic educational resources, sharing information, identifying locations of clinics, arranging rides and escorts, fundraising to support reproductive rights, or simply encouraging women to consider all their options—now have to consider the risk that they might be sued for merely speaking. The result will be a chilling effect on speech and a litigation cudgel that will be used to silence those who seek to give women truthful information about their reproductive options. 

SB8, also known as the Texas Heartbeat Act, encourages private persons to file lawsuits against anyone who “knowingly engages in conduct that aids or abets the performance or inducement of an abortion.” It doesn’t matter whether that person “knew or should have known that the abortion would be performed or induced in violation of the law,” that is, the law’s new and broadly expansive definition of illegal abortion. And you can be liable even if you simply intend to help, regardless, apparently, of whether an illegal abortion actually resulted from your assistance.  

And although you may defend a lawsuit if you believed the doctor performing the abortion complied with the law, it is really hard to do so. You must prove that you conducted a “reasonable investigation,” and as a result “reasonably believed” that the doctor was following the law. That’s a lot to do before you simply post something to the internet, and of course you will probably have to hire a lawyer to help you do it.  

SB8 is a “bounty law”: it doesn’t just allow these lawsuits, it provides a significant financial incentive to file them. It guarantees that a person who files and wins such a lawsuit will receive at least $10,000 for each abortion that the speech “aided or abetted,” plus their costs and attorney’s fees. At the same time, SB8 may often shield these bounty hunters from having to pay the defendant’s legal costs should they lose. This removes a key financial disincentive they might have had against bringing meritless lawsuits. 

Moreover, lawsuits may be filed up to six years after the purported “aiding and abetting” occurred. And the law allows for retroactive liability: you can be liable even if your “aiding and abetting” conduct was legal when you did it, if a later court decision changes the rules. Together this creates a ticking time bomb for anyone who dares to say anything that educates the public about, or even discusses, abortion online.

Given this legal structure, and the law’s vast application, there is no doubt that we will quickly see the emergence of anti-choice trolls: lawyers and plaintiffs dedicated to using the courts to extort money from a wide variety of speakers supporting reproductive rights.

And unfortunately, it’s not clear when speech encouraging someone to commit a crime, or instructing them how to do so, rises to the level of “aiding and abetting” unprotected by the First Amendment. Under the leading case on the issue, this is a fact-intensive analysis, which means that defending a case on First Amendment grounds may be arduous and expensive. 

The result of all of this is the classic chilling effect: many would-be speakers will choose not to speak at all for fear of having to defend even the meritless lawsuits that SB8 encourages. And many speakers will choose to take down their speech if merely threatened with a lawsuit, rather than risk the law’s penalties if they lose or take on the burdens of a fact-intensive case even if they were likely to win it. 

The law does include an empty clause providing that it may not be “construed to impose liability on any speech or conduct protected by the First Amendment of the United States Constitution, as made applicable to the states through the United States Supreme Court’s interpretation of the Fourteenth Amendment of the United States Constitution.” While that sounds nice, it offers no real protection—you can already raise the First Amendment in any case, and you don’t need the Texas legislature to give you permission. Rather, that clause is included to try to insulate the law from a facial First Amendment challenge—a challenge to the mere existence of the law rather than its use against a specific person. In other words, the drafters are hoping to ensure that, even if the law is unconstitutional—which it is—each individual plaintiff will have to raise the First Amendment issues on their own, and bear the exorbitant costs—both financial and otherwise—of having to defend the lawsuit in the first place.

One existing free speech bulwark—47 U.S.C. § 230 (“Section 230”)—will provide some protection here, at least for the online intermediaries upon which many speakers depend. Section 230 immunizes online intermediaries from state law liability arising from the speech of their users, so it provides a way for online platforms and other services to get early dismissals of lawsuits against them based on their hosting of user speech. So although a user will still have to fully defend a lawsuit arising, for example, from posting clinic hours online, the platform they used to share that information will not. That is important, because without that protection, many platforms would preemptively take down abortion-related speech for fear of having to defend these lawsuits themselves. As a result, even a strong-willed abortion advocate willing to risk the burdens of litigation in order to defend their right to speak will find their speech limited if weak-kneed platforms refuse to publish it. This is exactly the way Section 230 is designed to work: to reduce the likelihood that platforms will censor in order to protect themselves from legal liability, and to enable speakers to make their own decisions about what to say and what risks to bear with their speech. 

But a powerful and dangerous chilling effect remains for users. Texas’s anti-abortion law is an attack on many fundamental rights, including the First Amendment rights to advocate for abortion rights, to provide basic educational information, and to counsel those considering reproductive decisions. We will keep a close eye on the lawsuits the law spurs and the chilling effects that accompany them. If you experience such censorship, please contact info@eff.org.

David Greene

Victory! Federal Trade Commission Bans Stalkerware Company from Conducting Business

3 weeks ago

In a major victory in our campaign to stop stalkerware, the Federal Trade Commission (FTC) today banned the Android app company Support King and its CEO Scott Zuckerman, developers of SpyFone, from the surveillance business. The stalkerware app secretly “harvested and shared data on people’s physical movements, phone use and online activities through a hidden device hack,” according to the FTC. The app sold real-time access to surveillance, allowing stalkers and domestic abusers to track potential targets of their violence.

EFF applauds this decision by the FTC and the message it sends to those who facilitate, by technical means, the behavior of stalkers and domestic abusers. For too long, this nascent industry has been allowed to thrive as an underbelly to the much larger and diverse app ecosystem. With the FTC now turning its focus to this industry, victims of stalkerware can begin to find solace in the fact that regulators are beginning to take their concerns seriously.

The FTC case against Support King is the first to outright ban a stalkerware company and comes two years after EFF and its Director of Cybersecurity Eva Galperin launched the Coalition Against Stalkerware to unite and mobilize security software companies and advocates for domestic abuse victims in actions to combat and shut down malicious stalkerware apps. 

Stalkerware, a type of commercially-available surveillance software, is installed on phones without device users’ knowledge or consent to secretly spy on them. The apps track victims’ locations and allow abusers to read their text messages, monitor phone calls, see photos, videos, and web browsing, and much more. It’s being used all over the world to intimidate, harass, and harm victims, and is a favorite tool for stalkers and abusive spouses or ex-partners.

By using security vulnerabilities that may not yet be known to the public (known as zero-day exploits), stalkerware developers subvert the normal security mechanisms built into the mobile operating system and are able to deeply embed their malicious code into the device.

In a proposed settlement, the FTC bans Support King and Zuckerman from “offering, promoting, selling, or advertising any surveillance app, service, or business” and requires them “to delete any information illegally collected from their stalkerware apps.” The ban sets an important precedent for developers who would consider building apps that spy on and invade the privacy of their victims. The proposal will be subject to public comment for 30 days after publication in the Federal Register, after which the FTC will decide whether to make it final.

In 2019, EFF was one of the ten organizations that founded the Coalition Against Stalkerware, a group of security companies, non-profit organizations, and academic researchers that support survivors of domestic abuse by working together to address technology-enabled abuse and raise awareness about the threat posed by stalkerware. Among its early achievements are creating an industry-wide definition of stalkerware, encouraging research into the proliferation of stalkerware, and convincing anti-virus companies to detect and report the presence of stalkerware as malicious or unwanted programs.

Bill Budington

Court Ruling Against Locast Gets the Law Wrong; Lets Giant Broadcast Networks Control Where and How People Watch Free TV

3 weeks ago

In a blow to millions of people who rely on local television broadcasts, a federal court ruled yesterday that the nonprofit TV-streaming service Locast is not protected by an exception to copyright created by Congress to ensure that every American has access to their local stations. Locast is evaluating the ruling and considering its next steps.

The ruling, by a judge in the U.S. District Court for the Southern District of New York, does the opposite of what Congress intended: it threatens people’s access to local news and vital information during a global pandemic and a season of unprecedented natural disasters. What’s more, it treats copyright law not as an engine of innovation benefiting the public but as a moat protecting the privileged position of the four giant broadcast networks ABC, CBS, NBC, and Fox.

Locast, operated by Sports Fans Coalition NY, Inc. (SFCNY), enables TV viewers to receive local over-the-air programming—which broadcasters must by law make available for free—using set-top boxes, smartphones, or other devices of their choice. Over three million people use Locast to access local TV, including many who can’t afford cable and can’t pick up their local stations with an antenna. The broadcast networks sued SFCNY, and its founder and chairman David Goodfriend, arguing for the right to control where and how people can watch their free broadcasts.

EFF joined with attorneys at Orrick, Herrington & Sutcliffe to defend SFCNY. We told the court that Locast is protected by an exception to copyright law, put in place by Congress, that enables nonprofits to retransmit broadcast TV, so communities can access local stations that offer news, foreign-language programming, and local sports. Under that exception, there’s no infringement if nonprofits retransmit TV broadcasts without any commercial purpose, and without charge except to cover their costs. Locast viewers can voluntarily donate to SFCNY for this purpose.

Congress made the exemption so that Americans can access local broadcast stations—and expanding such access is exactly what Locast does. But the court accepted a bogus argument by the giant networks, and ruled that user contributions to Locast were “charges” and can’t be used to expand access so more Americans can receive their local channels via streaming. The ruling reads the law in an absurdly narrow way that defeats Congress’s intention to allow nonprofits to step in and provide communities access to broadcast TV, a vital source of local news and cultural programming for millions of people. This matters now more than ever, with communities across the country at risk because of COVID-19, devastating fires, and deadly hurricanes.

Make no mistake, this case demonstrates once again how giant entertainment companies use copyright to control when, where, and how people can receive their local TV broadcasts, and drive people to buy expensive pay-TV services to get their local news and sports. We are disappointed that the court is enabling this callous profiteering that tramples on Congress’s intent to ensure local communities have access to news that’s important to people regardless of their ability to pay. The court made a mistake, and Locast is considering its options.

Karen Gullo

25,000 EFF Supporters Have Told Apple Not To Scan Their Phones

3 weeks ago

Over the weekend, our petition to Apple asking the company not to install surveillance software in every iPhone hit an important milestone: 25,000 signatures. We plan to deliver this petition to Apple soon, and the more individuals who sign, the more impact it will have. We are deeply grateful to everyone who has voiced their concerns about this dangerous plan. 

SIGN THE PETITION

TELL APPLE: DON'T SCAN OUR PHONES

Apple has been caught off guard by the overwhelming resistance to its August 5th announcement that it will begin scanning photos on its users’ devices. In addition to numerous petitions like ours, over 90 organizations across the globe have urged the company to abandon its plans. But the backlash should be no surprise: what Apple intends to do will create an enormous danger to our privacy and security. It will give ammunition to authoritarian governments wishing to expand their surveillance, and because the company has compromised security and privacy at the behest of governments in the past, it’s not a stretch to think it may do so again. Democratic countries that strive to uphold the rule of law have also pressured companies like Apple to gain access to encrypted data, and are very likely already considering how this system will allow them to do so more easily in the future.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system that enables screening, takedown, and reporting in its end-to-end messaging.  

Don’t let Apple betray its users. Tell them today: Don't scan our phones

 


Jason Kelley

Vaccine Passport Missteps We Should Not Repeat

3 weeks 1 day ago

Pressure for vaccine mandates from public health officials and various governments is becoming increasingly urgent. As they roll out, we must protect users of vaccine passports and those who do not want to use—or cannot use—a digitally scannable means to prove vaccination. We cannot let the tools used to fight for public health be subverted into systems that perpetuate inequity or serve as cover for unrelated, unnecessary data collection. 

Over the past year, EFF has been tracking vaccine passport proposals and how they have been implemented. We have objections to many of them—especially when they are rolled out by opportunistic tech companies that are already creating digital inequity and mismanaging user data. We hope we can stop these credentials from transforming into another layer of user tracking.

Paper proof of vaccination raises fewer concerns, as does a digital photo of a paper card displayed on a phone screen. Of much greater concern are scannable vaccination credentials, which might be used to track people’s physical movements through doors and across time. Thus, we oppose any use of scannable vaccination credentials. At a minimum, such systems must have a paper alternative, open source code, and design and policy safeguards to minimize the risk of tracking.

Last year, “immunity passports” were proposed and sometimes implemented before the science on COVID-19 immunity and vaccination was even well-developed. Many governments and private companies apparently were driven less by informed public health and science than by the need to promote economic movement. Some organizations and governments even took the opportunity to create a new, digital verification system for the vaccinated. The needed transparency and protections have been lacking, and so have clear boundaries to keep these systems from escalating into an unnecessary surveillance apparatus. Even though we recognize that many vaccine credentialing systems have been implemented in good faith, there are several examples below of dangerous missteps that we hope will not be repeated.

New York State’s Excelsior Pass

Launched in April, this optional mobile application has had gradual adoption. Three key issues appeared with this deployment. 

First, IBM was not transparent about how this application was built. Instead, the company used vague buzzwords like “blockchain technology” that don’t paint a detailed picture of how it is keeping user data secure.

Second, the Surveillance Technology Oversight Project (S.T.O.P.), a member of the Electronic Frontier Alliance, uncovered a contract that New York State had with IBM, outlining a “phase 2” of the passport. It would carry not only a significantly higher price tag ($2.5 million to $27 million), but also an expansion of what Excelsior can hold, such as driver’s licenses and other health records.

Third, a bill to protect Covid data was introduced a month after the Excelsior Pass launched. It passed the NY State Assembly but was never taken up by the NY State Senate. Those protections should have been in place before the state rolled out the Excelsior Pass.

A “Clear” Path to Centralizing Vaccination Credentials with Other Personal Data

CLEAR displays a company slogan at San Francisco’s airport.

CLEAR already holds a place in major airports across the United States as the only private company in TSA’s Registered Traveler program. So the company was primed to launch its Health Pass, which is intended to facilitate Covid screening by linking health data to biometric-based digital identification. CLEAR’s original business model was born out of a previous rush to security, in a post-9/11 world. Now it is there for the next rushed security task: vaccination verification for travel. In the words of CLEAR’s Head of Public Affairs, Maria Comella, to Axios:

“CLEAR’s trusted biometric identity platform was born out of 9/11 to help millions of travelers feel safe when flying. Now, CLEAR’s touchless technology is able to connect identity to health insights to help people feel confident walking back into the office.”

A restaurant reservation app, OpenTable, just announced plans to integrate CLEAR’s vaccination credentials into its own system. There is no logical limit to how centralized digital identifications like those created by CLEAR might spread through our lives by facilitating proof of vaccination, bringing with them new vectors for tracking our movements and activities.

Of course, CLEAR is not the only company openly luring large government clients to merge scannable proof-of-vaccination systems into larger digital identification and data storage systems. For example, the U.K.’s National Health Service contracted with Entrust, another company that has openly contemplated turning vaccination credentials into national identification systems (which EFF opposes). With no federal laws adequately protecting the privacy of our data, we are being asked to trust the word of profit-driven companies that continue to grow through harvesting and monetizing our data in all forms. 

Likewise, U.S. airlines are using vaccine passports subject to policies that reserve the corporate prerogative to sell data about customers to third parties. So any scan of passengers’ health information can be added to the profiles of the thousands of people who travel each year.

Illinois’ Approach

In Illinois earlier this month, the state’s “Vax Verify” system launched to offer digital credentials to vaccinated citizens. A glaring flaw is the use of Experian, the controversial data broker, to verify the identity of those accessing the portal. The portal even asks for Social Security numbers (optional) to streamline the process with Experian.

Many Americans have been targets of Covid-based scams, so one of the main pieces of advice is to freeze your credit during this turbulent time. This advice is offered on Experian’s own website, for example. However, to access Illinois’ Vax Verify, users must unfreeze their credit with Experian to complete registration. This prioritizes a digital vaccine credential over the user’s own credit protection. 

The system also defaults to sharing immunization status with third parties. The FAQ page explains that users may retroactively revoke so-called “consent” to this sharing.

A New Inequity

We have had concerns about "vaccine passports" and "immunity passports" being used to place company profit over true community health solutions and amplify inequity. 

Sadly, we have seen many take the wrong path. And it could get worse. With more than one hundred COVID-19 vaccine candidates undergoing clinical trials across the world, makers of these new digital systems are advocating for a “chain of trust” that marks only certain labs and health institutions as valid. This new marker will deliberately leave behind many people across the world whose systems may not be able to adhere to the requirements these new digital vaccine proof systems create. For example, many of these new systems entail elements of public key infrastructure governance for public key cryptography, which creates a list of “trusted” public keys associated with “trusted” health labs. But the definition of technical “trustworthiness” was not agreed upon or enforced before Covid, raising concerns that imposing such systems on the world will lock out hundreds of millions of people from being able to obtain visas or even travel—all because their country’s labs may not clear these unnecessary technical hurdles. An example is the EU’s Digital COVID Certificate system, which imposes a significant list of technical requirements to achieve interoperability, covering data availability, data storage formats, and specific communication and data serialization protocols.

Overview of EU’s Digital COVID Certification System. Source: https://ec.europa.eu/health/sites/default/files/ehealth/docs/digital-green-certificates_v5_en.pdf
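
To make the gatekeeping concrete, here is a minimal sketch, in Python with the cryptography library, of how a verifier built on such a “chain of trust” behaves. The registry, payload format, and function names are hypothetical simplifications rather than any particular scheme’s actual implementation; the point is only that a genuine credential from an unlisted lab is rejected outright.

# Hypothetical sketch: a verifier that only accepts credentials signed by keys
# on a "trusted" lab registry. Names and data formats are illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

lab_key = Ed25519PrivateKey.generate()               # stands in for a "trusted" lab's signing key
TRUSTED_LAB_KEYS = {"lab-001": lab_key.public_key()} # the governance question: who gets listed?

def verify_credential(lab_id, payload, signature):
    key = TRUSTED_LAB_KEYS.get(lab_id)
    if key is None:
        return False  # a genuine vaccination is rejected because the issuing lab isn't listed
    try:
        key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

payload = b"name=...;vaccine=...;date=..."
signature = lab_key.sign(payload)
print(verify_credential("lab-001", payload, signature))  # True: the lab is on the list
print(verify_credential("lab-999", payload, signature))  # False: same vaccination, unlisted lab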

This primary reliance on digital passports effectively pushes out paper options for international travel, and potentially domestic travel as well. It devalues paper as proper proof of vaccination because a verifier cannot cryptographically check a document it cannot scan. The only viable paper option is printing out the QR code of the digitally verified credential, which still locks people into these new systems of verification. 

These new trust-based systems, if implemented in a way that automatically disqualifies people who received genuine vaccinations, will cause dire effects for years to come. It sets up a world where certain people can move about easily, and those who have already had a hard time with visas will experience another wall to climb. Vaccines should be a tool to reopen doors. Digital vaccine passports, as we've seen them deployed so far, are far more likely to slam them shut.

This post was updated on 9/2/21 to reflect the more recent discovery of the potential cost for the NY Excelsior Pass. Source: https://www.nytimes.com/2021/08/19/nyregion/new-york-excelsior-pass-cost.html

Alexis Hancock

Starve the Beast: Monopoly Power and Political Corruption

3 weeks 1 day ago

Docket of the Living Dead

In 2017, Federal Communications Commission Chairman Ajit Pai - a former Verizon lawyer appointed by Donald Trump - announced his intention to dismantle the Commission’s hard-won 2015 Network Neutrality regulation. The 2015 order owed its existence to people like you, millions of us who submitted comments to the FCC demanding commonsense protection from predatory ISPs.

After Pai’s announcement, those same millions - and millions of their friends - flooded the FCC’s comment portal, actually overwhelming the FCC’s servers and shutting them down (the FCC falsely claimed it had been hacked). The comments from experts and everyday Americans overwhelmingly affirmed the consensus from the 2015 net neutrality fight: Americans love net neutrality and they expect their regulators to enact it.

But a funny thing happened on the way to the FCC vote: thousands, then millions of nearly identical comments flooded into the Commission, all opposed to net neutrality. Some of these comments came from obviously made-up identities, some from stolen identities (including identities stolen from sitting US Senators!), and many, many from dead people. One million of them purported to be sent by Pornhub employees. All in all, 82% of the comments the FCC received were fake, and the overwhelming majority of fake comments opposed net neutrality. 

Sending all these fake comments was expensive. The telecoms industry paid millions to corrupt the political process. That bill wasn’t footed by just one company, either - an industry association paid for the fraud. 

How did that happen?

One Big, Happy Family

Well, telecoms is a highly concentrated industry where companies refuse to compete with one another: instead, they divide up the country into a series of exclusive territories, leaving subscribers with two or fewer ISPs to choose from.

Not having to compete means that your ISP can charge more, deliver less, and pocket the difference. As a sector, the US ISP market is wildly profitable. That’s only to be expected: when companies have monopolies, value is transferred from the company’s customers and workers to its executives and shareholders. That’s why executives love monopolies and why shareholders love to invest in them.

Profits can be converted into policies: the more extra money you have, the more lobbying you can do. Very profitable companies find it much easier to get laws and regulations passed that benefit them than less profitable ones do, and even less profitable companies get their way from lawmakers and regulators more often than the public does.

But excessive profits aren’t the only reason an industry can get its way in the political arena. When an industry is composed of hundreds of small- and medium-sized firms, they aren’t just less profitable (because they compete with one another to lower prices, raise wages and improve their products and services), they also have a harder time cooperating.

When the people who control your industry number in the hundreds or thousands, they have to rent a convention center if they want to get together to hammer out a common lobbying policy, and they’ll struggle to do so - a thousand rival execs can’t even agree on what lunch to buy, much less what laws to buy. 

When control over the industry dwindles to a handful of people, they can all fit around a single table. They often do.

Competition Is For Losers

And that’s how tens of millions of fake anti-net neutrality comments ended up in front of the FCC. A highly concentrated ISP sector decided to cooperate, rather than compete, with each other. 

This let them rip off the country and make a lot of money. Some of that money was set aside for lobbying, and since there are only a handful of companies that dominate the sector, it was easy for them to decide what to lobby for.

To top it all off, the guy they had to convince was one of their own, a former executive at one of the monopolistic companies that funded the fraud campaign. He was, unsurprisingly, very sympathetic to their cause.

Monopolies equip companies with vast stockpiles of ammo to use in the policy wars. Monopolies reduce the number of companies that have to agree on a target for that ammo. 

How It Started/How It’s Going

Back in the 2000s, the tech sector was on the ropes. Google had two lobbyists in DC. Despite the prominence of a few companies (Microsoft, Yahoo, Netscape), most of the web was in the hands of hundreds of small and medium-sized companies, many of them struggling with the post-9/11 economic downturn.

Meanwhile, the entertainment industry was highly concentrated and highly disciplined. Waves of genuinely idiotic tech laws and regulations crashed over the tech sector, and only some fast, savvy courtroom work by nonprofits and inspired grassroots activism kept these outlandish proposals at bay.

The tech sector of the early 2000s had a much higher aggregate valuation than the entertainment sector, and it was more dynamic and diverse, with new companies appearing out of nowhere and rising to prominence in just a few years, displacing seemingly unassailable giants whose dominance proved fleeting.

But the entertainment industry was concentrated. Music was dominated by six major labels (today it’s three, thanks to mergers and acquisitions); TV, film and publishing were likewise dominated by a handful of companies (and, likewise, the number of companies has contracted since thanks to a series of mergers). Some of these major labels and studios and broadcasters had the same corporate owners, a trend that has only accelerated since the turn of the century.

These monopolized industries possessed the two traits necessary to secure policies favorable to their interests: excessive monopoly profits and streamlined monopoly collaboration. They had a lot of ammo and they all agreed on a set of common targets to blast away at.

Today, Big Tech is just as concentrated as Big Content, and it has an army of lobbyists who impose its will on legislators and regulators. The more concentrated an industry is, the more profitable it is; the more profitable it is, the more lobbyists it has; the more lobbyists it has, the more it gets its way.

Clash of the Titans

Monopoly begets monopoly. Before the rise of Big Tech, the tech sector was caught in a vicious squeeze between the monopolistic ISP industry and the monopolistic entertainment industry. Today, Big Tech, Big Content and Big Telco each claim the right to dominate our digital lives, and ask us to pick a giant to root for.

We’re not on the side of giants. We’re on the side of users, of the public interest. Big companies can have their customers’ or users’ backs, and when they do, we’ve got their back, too. But we demand more than the right to choose which giant we rely on for a few dropped crumbs.

That’s why we’re interested in competition policy and antitrust. We don’t fetishize competition for its own sake. We want competition because a competitive sector has a harder time making its corporate priorities into law.  The law should reflect the public interest and the will of the people, not the mobilized wealth of corporate barons who claim no responsibility to anyone, save their shareholders.

Have You Tried Turning It Off and On Again?

Even critics of the tech antitrust surge agree that the tech sector is unhealthily concentrated.  But they are apt to point to the outwardly abusive conduct of the sector: using copyright claims  to block interoperability, weaponizing privacy to shut out rivals, selling out net neutrality, embracing censorship, and so on.

We’re in vigorous agreement with this analysis. All of this stuff is terrible for competition. But all this stuff is also enabled by the lack of competition. These are expensive initiatives, funded by monopoly profits, and they’re controversial initiatives that rely on a monopolist’s consensus.

It’s true that sometimes a monopolist defends the public interest while sticking up for its own interests. The Google/Oracle fight over API copyrights saw two billionaires burning millions of dollars to promote their own self-interest. Oracle wanted to change copyright law in a way that would have let it take billions away from Google. Google wanted to keep its billions. For Google to keep its billions, it had to stand up for what’s right: namely, that APIs can’t be copyrighted because they are functional, and because blocking interoperability is counter to the public interest.

If we demonopolized Google - if we forced it to operate in a competitive advertising environment and lowered its profits - then it might not be able to fight off the next Oracle-style assault on the public interest. That’s not an argument for increasing Google’s power - it’s an argument for decreasing Oracle’s power.

Because more often than not, Google and Oracle are on the same side, along with the rest of the tech giants. 

And now that the FCC is getting new leadership, it’s a safe bet that we’ll be fighting about net neutrality again, this time to restore it, likely against the same shady tactics we saw in 2017. Google might be our ally in fighting back - net neutrality is in the tech sector’s interest, after all - but then again, maybe they’ll cut another deal with a monopolistic telco.

The way to stop Big Telco from shredding the public interest isn’t to make Google as large as possible and hope it doesn’t switch sides (again): it’s to shrink Big Telco until it fits in a bathtub.

Profits are power. Concentration is power. Concentration is profitable. Profits let merging companies run roughshod over the FTC and become more concentrated. Lather, rinse, repeat.

The system of monopoly is a ravenous beast, a cycle that turns money into power into money into power. We have to break the cycle.

We have to starve the beast.

Cory Doctorow

The Federal Circuit Has Another Chance to Get it Right on Software Copyright

3 weeks 2 days ago

When it comes to software, it seems that no matter how many times a company loses on a clearly wrong copyright claim, it will soldier on—especially if it can find a path to the U.S. Court of Appeals for the Federal Circuit. The Federal Circuit is supposed to be almost entirely focused on patent cases, but a party can make sure its copyright claims are heard there too by simply including patent claims early in the litigation, and then dropping them later. In SAS v. WPL, that tactic means that a legal theory on software copyrightability that has lost in three courts across two countries will get yet another hearing. Hopefully, it will be the last, and the Federal Circuit will see this relentless opportunism for what it is.

That outcome, however correct, is far from certain. The Federal Circuit got this issue very wrong just a few years ago, in Oracle v. Google. But with the facts stacked against the plaintiff, and a simpler question to decide, the Federal Circuit might get it right this time.

The parties in the case, software companies SAS Institute Inc. (SAS) and World Programming Ltd. (WPL), have been feuding for years in multiple courts in the U.S. and abroad. At the heart of the case is SAS’s effort to effectively own the SAS Language, a high-level programming language used to write programs for conducting statistical analysis. The language was developed in the 1970s at a public university and dedicated to the public domain, as was software designed to convert and execute SAS-language programs. Works in the public domain can be used by anyone without permission, and that is where the original SAS language and the software executing it live.

A few years later, however, some of its developers rewrote the software and founded a for-profit company to market and sell the new version. It was alone in doing so until, yet more years later, WPL developed its own, rival software that can also convert and execute SAS-Language programs. Confronted with new competition, SAS ran to court, first in the U.K., then in North Carolina, claiming copyright infringement. It lost both times.

Perhaps hoping that the third time will be the charm, SAS sued WPL in Texas for both patent and copyright infringement. Again, it lost—but it decided to appeal only the copyright claims. As with Oracle v Google, however, the fact that the case once included patent claims—valid or not—was enough to land it before the Federal Circuit.

It is undisputed that WPL didn’t copy SAS’s actual copyrighted code. Instead, SAS claims WPL copied nonliteral, functional elements of its system: input formats (which say how a programmer should input data to a program to make the program work properly) and output designs (which the computer uses to let the programmer view the results correctly). These interfaces specify how the computer is supposed to operate—in response to inputs in a certain format, produce outputs that are arranged in a certain design. But those interfaces don’t instruct the computer how it should perform those functions, for which WPL wrote its own code. SAS’s problem is that copyright law does not, and should not, grant a statutory monopoly in these functional elements of a computer program.

SAS is desperately hoping that the Federal Circuit will say otherwise, based on the Federal Circuit’s previous ruling, in Oracle v. Google, that choosing among various options can suffice to justify copyright protection. In other words, if a developer had other programming options, the fact that it chose a particular path can allegedly be “creative” enough to merit exclusive rights for 70+ years. As we explained in our amicus brief, that reliance is misplaced.

First, the facts of this case are different: WPL, unlike Google, didn’t copy any actual code. Again, this is undisputed. Second, Oracle v. Google was based on a fundamentally incorrect assumption that the Ninth Circuit (the jurisdiction from which Oracle arose and, therefore, whose law the Federal Circuit was bound to apply) would accept the “creative choices” theory. How do we know that assumption was wrong? Because the Ninth Circuit later said so, in a different case.

But SAS should lose for another reason. In essence, it is trying to claim copyright in processes and methods of operation: elements that, if they are protectable at all, are protected only by patent. If SAS couldn’t succeed on its patent claims, it shouldn’t be allowed to rely on copyright as a backstop to cover the same subject matter. In other words, SAS cannot both (1) evade the limits on patent protection such as novelty, obviousness, eligible patent subject matter, the patent claim construction process, etc.; and, at the same time, (2) evade the limits on copyright protection by recasting functional elements as “creative” products.

In addition to these points, our brief reminds the court that the copyright system is intended to serve the public interest, not simply the financial interests of rightsholders such as SAS. The best way for the Federal Circuit to serve that public interest here is to defend the limits on copyright protection for the functional parts of computer programs, and to clarify its previous erroneous software copyrightability ruling in Oracle v. Google. We hope the court agrees. 

Related Cases: Oracle v. Google
Corynne McSherry

Victory! Lawsuit Proceeds Against Clearview’s Face Surveillance

3 weeks 2 days ago

Face surveillance is a growing menace to racial justice, privacy, and free speech. So EFF supports laws that ban government use of this dangerous technology, and laws requiring corporations to get written opt-in consent from a person before collecting their faceprint.

One of the worst offenders is Clearview AI, which extracts faceprints from billions of people without their consent and uses these faceprints to help police identify suspects. For example, police in Miami worked with Clearview to identify participants in a recent protest. Such surveillance partnerships between police and corporations are increasingly common.

Clearview’s faceprinting violates the Illinois Biometric Information Privacy Act (BIPA), which requires opt-in consent to obtain someone’s faceprint. As a result, Clearview now faces many BIPA lawsuits. One was brought by the ACLU and ACLU of Illinois in state court. Many others were filed against the company in federal courts across the country and then consolidated into one federal courtroom in Chicago. In both Illinois and federal court, Clearview argues that the First Amendment bars these BIPA claims. We disagree and filed an amicus brief saying so in each case.

Last week, the judge in the Illinois state case rejected Clearview’s First Amendment defense, denied the company’s motion to dismiss, and allowed the ACLU’s lawsuit to move forward. This is a significant victory for our privacy over Clearview’s profits.

The Court’s Instructive Reasoning

The court began its analysis by holding that faceprinting “involves expression and its predicates, which are entitled to some First Amendment protection.” We agree. EFF has long advocated for First Amendment protection of the right to record on-duty police and the right to code.

The court then held that Clearview’s faceprinting is not entitled to “strict scrutiny” of the speech restraint (one of the highest levels of First Amendment protection) but instead is entitled to “intermediate scrutiny.” We agree, because (as our amicus briefs explain) Clearview’s faceprints do not address a matter of public concern, and Clearview has solely commercial purposes.

Applying intermediate scrutiny, the court upheld the application of BIPA’s opt-in consent requirement to Clearview’s faceprinting. The court emphasized Illinois’ important interests in protecting the “privacy and security” of the public from biometric surveillance, including the “difficulty in providing meaningful recourse once a person’s [biometrics] have been compromised.” The court further explained that the opt-in consent requirement is “no greater than necessary” to advance this interest because it “returns control over citizens’ biometrics to the individual whose identities could be compromised.”

As to Clearview’s argument that BIPA hurts its business model, the court stated: “That is a function of having forged ahead and blindly created billions of faceprints without regard to the legality of that process in all states.”

Read here the August 27, 2021, opinion of Judge Pamela McLean Meyerson of the Cook County (Illinois) Circuit Court.

Adam Schwartz

EFF to Council of Europe: Flawed Cross Border Police Surveillance Treaty Needs Fixing—Here Are Our Recommendations to Strengthen Privacy and Data Protections Across the World

3 weeks 2 days ago

EFF has joined European Digital Rights (EDRi), the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC), and other civil society organizations in recommending 20 solid, comprehensive steps to strengthen human rights protections in the new cross border surveillance draft treaty that is under review by the Parliamentary Assembly of the Council of Europe (PACE). The recommendations aim to ensure that the draft treaty, which grants broad, intrusive police powers to access user information in criminal cross border investigations, contains a robust baseline to safeguard privacy and data protection.

From requiring law enforcement to obtain independent judicial authorization as a condition for cross-border requests for user data, to prohibiting police investigative teams from bypassing privacy safeguards in secret data transfer deals, our recommendations submitted to PACE will add much-needed human rights protections to the draft Second Additional Protocol to the Budapest Convention on Cybercrime. The recommendations seek to preserve the Protocol’s objective—to facilitate efficient and timely cross-border investigations between countries with varying legal systems—while embedding safeguards protecting individual rights. 

Without these amendments, the Protocol’s credibility is in question. The Budapest Cybercrime Convention has been remarkably successful in terms of signatories—large and small states from around the globe have ratified it. However, Russia’s long-standing goal of replacing the treaty with its own proposed UN draft convention may be adding pressure on the Council of Europe (CoE) to rush the Protocol’s approval instead of extending its terms of reference to properly allow for a meaningful multi-stakeholder consultation. But if the CoE intends to offer a more rights-protective alternative to the UN cybercrime initiative, it must lead by example by fixing the primary technical mistakes we have highlighted in our submission and strengthening privacy and data protection safeguards in the draft Protocol. 

This post is the first in a series of articles describing our recommendations to PACE. The series will also explain how the Protocol will impact legislation in other countries. The draft Protocol was approved by the Council of Europe’s Cybercrime Committee (T-CY) on May 28th following an opaque, several-year process largely commandeered by law enforcement.  

Civil society groups, data protection officials, and defense attorneys were sidelined during the process, and the draft Protocol reflects this deeply flawed and lopsided process. PACE can recommend further amendments to the draft during the treaty’s adoption and final approval process. EFF and partners urge PACE to use our recommendations to adopt new Protocol amendments to protect privacy and human rights across the globe. 

Mischaracterizing the Intrusive Nature of Subscriber Data Access Powers 

One of the draft’s biggest flaws is its treatment, in Article 7, of subscriber data, the most sought-after information by law enforcement investigators. The Protocol’s explanatory text erroneously claims that subscriber information “does not allow precise conclusions concerning the private lives and daily habits of individuals concerned,” so it’s less sensitive than other categories of data. 

But, as is increasingly recognized around the world, subscriber information such as a person’s address and telephone number, under certain conditions, is frequently used by police to uncover people’s identities and link them to specific online activities that reveal details of their private lives. Disclosing the identity of people posting anonymously exposes intimate details of individuals’ private lives. The Protocol's dismissive characterization of subscriber data directly conflicts with judicial precedent, particularly when considering the Protocol’s broad definition of subscriber information, which includes IP addresses and other online identifiers. 

In our recommendations, we therefore urge PACE to align the draft explanatory text’s description of subscriber data with judicial opinions across the world that recognize it as highly sensitive information. Unfettered access to subscribers’ data encroaches on the right to privacy and anonymity, and people’s right to free expression online, putting journalists, whistleblowers, politicians, political dissidents, and others at risk. 

Do Not Mandate Direct Cooperation Between Service Providers and Foreign Law Enforcement

Article 7 calls upon States to adopt legislation that will allow law enforcement in one country to request the production of subscriber data directly from companies located in another country under the requesting country’s legal standard. Due to the variety of legal frameworks among the Protocol’s signatories, some countries’ laws authorize law enforcement to access subscriber data without appropriate safeguards, such as prior court authorization and/or a reasonable-grounds requirement. The article applies to any public or private service provider, defined very broadly to encompass internet service providers, email and messaging providers, social media sites, cell carriers, and hosting and caching services, regardless of whether the service is free of charge or for remuneration, and regardless of whether it is offered to the public or to a closed group (e.g., a community network).

For countries with strong legal safeguards, Article 7 will oblige them to remove any law that will impede local service providers holding subscriber data from voluntarily responding to requests for that data from foreign agencies or governments. So, a country that requires independent judicial authorization for local internet companies to produce information about their subscribers, for example, will need to amend its law so companies can directly turn over subscriber data to foreign entities. 

We have criticized Article 7 for failing to provide, or excluding, critical safeguards that are included in many national laws. For example, Article 7 does not include any explicit restrictions on targeting activities which implicate fundamental rights, such as freedom of expression or association, and categorically prevents any state from requiring foreign police to demonstrate that the subscriber data they seek will advance a criminal investigation before justifying access to it. 

This is why we've urged PACE to remove Article 7 entirely from the text of the Protocol. States would still be able to access subscriber data in cross-border contexts, but would instead rely on another provision of the Protocol (Article 8), which also has some issues but includes more safeguards for human rights. 

If Article 7 is retained, the Protocol should be amended to make it easier for states to limit its scope of application. As the text currently stands, countries must decide whether to adopt Article 7 or not when implementing the draft Protocol. But the scope of legal protection many states provide for subscriber data is evolving as many courts and legislatures are increasingly recognizing that access to this personal data can be intrusive and may require additional safeguards. As drafted, if a signatory to the Protocol adds more safeguards to its subscriber data access regime—out of public policy concerns or in response to a court decision—extending these safeguards to foreign police will place it in violation of its obligations under the Protocol. 

Because the draft Protocol gives law enforcement powers with direct impact on human rights and will be available to a diverse set of signatories with varying criminal justice systems and human rights records, we recommend that it provide the following additional safeguards for cross-border data requests:

  • Allow a Party to require independent judicial authorization for foreign requests for subscriber data issued to service providers in its territory. Better yet, we would like to see a general obligation compelling independent supervision of every cross-border subscriber data request.
  • Allow authorities in the country where service providers are located to be notified about subscriber data requests and given enough information to assess their impact on fundamental rights and freedoms; and
  • Adopt legal measures to ensure that gag requests—confidentiality and secrecy requests—are not inappropriately invoked when law enforcement make cross-border subscriber data access demands.

We are grateful to PACE for the opportunity to present our concerns as it formulates its own opinion and recommendations before the treaty reaches the CoE’s final body of approval, the Committee of Ministers. We hope PACE will take our privacy and human rights concerns seriously. In recent weeks, EFF and the world have learned that governments across the globe have targeted journalists, human rights activists, dissidents, lawyers, and private citizens for surveillance because of their work or political viewpoints. Regimes are weaponizing technology and data to target those who speak out. We strongly urge PACE to adopt our recommendations for adding strong human rights safeguards to the Protocol to ensure that it doesn’t become a tool for abuse. 


Karen Gullo

Apple’s Plan to Scan Photos in Messages Turns Young People Into Privacy Pawns

3 weeks 5 days ago

This month, Apple announced several new features under the banner of expanding its protections for young people, at least two of which seriously walk back the company’s longstanding commitment to protecting user privacy. One of the plans—scanning photos sent to and from child accounts in Messages—breaks Apple’s promise to offer end-to-end encryption in messaging. And when such promises are broken, it inevitably opens the door to other harms; that’s what makes breaking encryption so insidious. 

Apple’s goals are laudable: protecting children from strangers who use communication tools to recruit and exploit them, and limiting the spread of child sexual abuse material. And it’s clear that there are no easy answers when it comes to child endangerment. But scanning and flagging Messages images will, unfortunately, create serious potential for danger to children and partners in abusive households. It both opens a security hole in Messages, and ignores the reality of where abuse most often happens, how dangerous communications occur, and what young people actually want to feel safe online. 

JOIN THE WORLDWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES

How Messages Scanning Works

In theory, the feature works like this: when photos are sent via Messages to or from a user whose account is designated as a child under 13, those photos will be scanned by a machine learning algorithm. If the algorithm determines that the photo contains “sexually explicit” material, it will offer the user a choice: don’t receive or send the photo, and nothing happens; or choose to receive or send the photo, and the parent account on the Family Sharing plan will be notified. The system also scans photos of users between 13 and 17 years old, but it only warns the user that they are sending or receiving an explicit photo, without notifying the parents.  
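
To make the announced flow easier to follow, here is a minimal sketch of the decision logic in Python. The account fields, classifier, and helper functions are placeholders invented for illustration; this is not Apple’s code, only a restatement of the behavior described above.

# Hypothetical sketch of the announced Messages flow; all names are illustrative.
from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    parent_opted_in: bool  # parent on the Family Sharing plan enabled the feature

def looks_explicit(photo) -> bool:
    return False  # placeholder for Apple's on-device machine-learning classifier

def deliver(photo):
    print("photo shown")

def notify_parent(account, photo):
    print("parent notified")

def handle_photo(photo, account: ChildAccount, user_accepts: bool):
    if not account.parent_opted_in or not looks_explicit(photo):
        deliver(photo)                     # feature off or photo not flagged: nothing happens
    elif not user_accepts:
        pass                               # child declines: photo isn't shown, no one is notified
    else:
        deliver(photo)
        if account.age < 13:
            notify_parent(account, photo)  # only under-13 accounts trigger parental notification
        # 13- to 17-year-olds see the warning, but parents are never notified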

Children Need In-App Abuse Reporting Tools Instead

The Messages photo scanning feature has three limitations meant to protect users. The feature requires an opt-in on the part of the parent on the Family Sharing plan; it allows the child account to decide not to send or receive the image; and it’s only applicable to Messages users that are designated as children. But it’s important to remember that Apple could change these protections down the road—and it’s not hard for a Family Sharing plan organizer to create a child account and force (or convince) anyone, child or not, to use it, easily enabling spying on non-children. 

Creating a better reporting system would put users in control—and children are users.

Kids do experience bad behavior online—and they want to report it. A recent study by the Center for Democracy and Technology finds that user reporting through online mechanisms is effective in detecting “problematic content on E2EE [end-to-end encrypted] services, including abusive and harassing messages, spam, mis- and disinformation, and CSAM.” And when given the choice between using online tools to report and turning to a caregiver offline, young people overwhelmingly prefer the online tools. Creating a better reporting system like this would put users in control—and children are users.

But Apple’s plan doesn’t help with that. Instead, it treats children as victims of technology, rather than as users. Apple is offering the worst of both worlds: the company inserts its scanning tool into the private relationships between parents and their children, and between children and their friends looking for “explicit” material, while ignoring a powerful method for handling the issue. A more robust reporting feature would require real work, and a good intake system. But a well-designed system could meet the needs of younger users, without violating privacy expectations.

Research Shows Parents Are a Bigger Danger for Children than Strangers

Apple’s notification scheme also does little to address the real danger in many, many cases. Of course, the vast majority of parents have a child’s best interests at heart. But the home and family are statistically the most likely sites of sexual assault, and a variety of research indicates that sexual abuse prevention and online safety education programs can’t assume parents are protective. Parents are, unfortunately, more likely than strangers to be the producers of child sexual abuse material (CSAM).

In addition, giving parents more information about a child’s online activity, without first allowing the child to report it themselves, can lead to mistreatment, especially in situations involving LGBTQ+ children or those in abusive households. Outing youth who are exploring their sexual orientation or gender in ways their parents may not approve of has disastrous consequences. Half of homeless LGBTQ+ youth in one study said they feared that expressing their LGBTQ+ identity to family members would lead to them being evicted, and a large percentage of homeless LGBTQ+ youth were forced to leave their homes due to their sexual orientation or gender. Leaving it up to the child to determine whether and to whom they want to report an online encounter gives them the option to decide how they want to handle the situation, and to decide whether the danger is coming from outside, or inside, the house.

It isn’t hard to think of other scenarios where this notification feature could endanger young people. How will Apple differentiate a ten-year-old sharing a photo documenting bruises that a parent gave them in places normally hidden by clothes—which is a way that abusers hide their abuse—from a nude photo that could cause them to be sextorted?

Children Aren’t the Only Group Endangered by Apple’s Plan

Unfortunately, it’s not only children who will be put in danger by this notification scheme. A person in an abusive household, regardless of age, could be coerced into using a “child” account, opening Messages users up to the kind of tech-enabled abuse more often found in stalkerware. While Apple’s locked-down approach to apps has made it less likely for someone to install such spying tools on another’s iPhone, this new feature undoes some of that security. Once it’s set up, an abusive family member could ensure that their partner or another household member can’t send any photos that Apple considers sexually explicit to others without the abuser being notified.

Finally, if other algorithms meant to find sexually explicit images are any indication, Apple will likely sweep up all sorts of non-explicit content with this feature. Notifying a parent that a child is sending explicit material when they are not could also lead to real danger. And while we are glad that Apple’s notification scheme stops at twelve, even teenagers who will see only a warning when they send or receive what Apple considers a sexually explicit photo could be harmed. What impact does it have when a young woman receives a warning that a swimsuit photo being shared with a friend is sexually explicit? Or photos of breastfeeding? Or nude art? Or protest photos?

Young People Are Users, Not Pawns

Apple’s plan is part of a growing, worrisome trend. Technology vendors are inserting themselves more and more regularly into areas of life where surveillance is most accepted and where power imbalances are the norm: in our workplaces, our schools, and in our homes. It’s possible for these technologies to help resolve those power imbalances, but instead, they frequently offer spying, monitoring, and stalking capabilities to those in power. 

This has significant implications for the future of privacy. The more our technology surveils young people, the harder it becomes to advocate for privacy anywhere else. And if we show young people that privacy isn’t something they deserve, it becomes all too easy for them to accept surveillance as the norm, even though it is so often biased, dangerous, and destructive of our rights. Child safety is important. But it’s equally important not to use child safety as an excuse to dangerously limit privacy for every user.

By breaking the privacy promise that your messages are secure, introducing a backdoor that governments will ask to expand, and ignoring the harm its notification scheme will cause, Apple is risking not only its privacy-protective image in the tech world, but also the safety of its young users.

JOIN THE WORLDWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES


Jason Kelley

Facebook’s Secret War on Switching Costs

3 weeks 5 days ago

When the FTC filed its amended antitrust complaint against Facebook in mid-August, we read it with interest. FTC Chair Lina Khan rose to fame with a seminal analysis of the monopolistic tactics of Amazon, another Big Tech giant, when she was just a law student, and we anticipated that the amended complaint would make a compelling case that Facebook had violated antitrust law.

Much of the coverage of the complaint focused on the new material defining “personal social networking” as a “relevant market” and making the case that Facebook dominated that market thanks to conduct banned under the antitrust laws. Because the court threw out the FTC’s previous complaint for failing to lay out Facebook’s monopoly status in sufficient detail, the new material is important to keep the case going. But as consequential as that market-defining work is, we want to highlight another aspect of the complaint - one that deals directly with the questions of what kinds of systems promote competition and what kinds of systems reduce it.

When antitrust enforcers and scholars theorize about Big Tech, they inevitably home in on “network effects.” A system is said to benefit from “network effects” when its value increases as more people use it - people join Facebook to hang out with the people who’ve already joined Facebook. Once new people join Facebook, they, in turn, become a reason for other people to join Facebook.

Network effects are real, and you can’t understand the history of networked computers without an appreciation for them. Famously, Bob Metcalfe, the inventor of Ethernet networking, coined “Metcalfe’s Law”: “the value of a telecommunications network is proportional to the square of the number of connected users of the system (n²).” That is, each new user can connect with every existing user, so the number of possible connections - and, on this logic, the network’s value - grows roughly with the square of the number of users.
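
As a rough, back-of-the-envelope illustration of that quadratic growth (ours, not anything from the complaint or from Metcalfe), counting the possible user-to-user links in an n-user network looks like this:

    # Rough illustration of Metcalfe's law: the number of possible pairwise
    # connections grows with the square of the number of users.
    def pairwise_connections(n: int) -> int:
        """Distinct user-to-user links in an n-user network: n * (n - 1) / 2."""
        return n * (n - 1) // 2

    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} users -> {pairwise_connections(n):>12,} possible connections")
    # Ten times the users yields roughly a hundred times the connections.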

But while network effects are a good predictor of whether a service will get big, they can’t explain why it stays big. 

Cheap printers might entice many people to buy a printer for home, and incentivize many retailers to carry ink and paper, and encourage businesses and schools to require home printouts, but why would printer owners shell out big bucks for ink when there are lots of companies making cheap cartridges?

Apple’s App Store might be a great way to find reliable apps (incentivizing people to buy iPhones, and incentivizing programmers to make apps for those iPhone owners), but why continue to shop there once you’ve found the apps you want, rather than dealing directly with the app’s makers, who might give you a discount because they no longer have to cut Apple in for a 30% commission?

And Facebook is full of people whose company you enjoy, but if you don’t like its ads, its surveillance, its deceptive practices, or its moderation policies, why not leave Facebook and find a better platform (or run your own), while continuing to send and receive messages from the communities, friends and customers who haven’t left Facebook (yet)? 

Short answer? Because you can’t. 

Big Printer periodically downgrades your printer with “security updates” that prevent it from using third party cartridges. Apple uses legal and technical countermeasures to stop you from running apps unless you buy them through its store. And Facebook uses all-out warfare and deceptive smear campaigns to stop anyone from connecting their tools to its platform.

Software locks, API restrictions, legal threats, forced downgrades and more - these are why Big Tech stays big. 

Collectively, these are ways to create high “switching costs,” and high switching costs are the way to protect the dividends from network effects - to get big and stay big. 

“Switching costs” is the term economists use for all the things you have to give up to switch between products or services. Leaving Facebook might cost you access to people who share your rare disease, or the final messages sent by a dying friend, or your business’s customers, or your creative audience, or your extended family. By blocking interoperability, Facebook ensures that participating in those relationships and holding onto those memories means subjecting yourself to its policies.

Back to the FTC’s amended complaint. In several places, the FTC investigators cite internal Facebook communications in which engineers and executives plotted to increase switching costs in order to make it harder for dissatisfied users to switch to a better, rival service. These examples, which we reproduce below, are significant in several ways:

  1. They show that the FTC treats the practice of engineering in switching costs as anticompetitive and subject to antitrust scrutiny;
  2. They show that Facebook understands that it owes its success to both strong network effects and high switching costs, and that losing the latter could undo the former; and
  3. They suggest that interoperability, which lowers switching costs and keeps them low, should be seen as an important tool in the antitrust enforcement toolbox, whether through legislation or as part of litigation settlements.

Here are some examples of Facebookers discussing switching costs, from the FTC’s amended complaint.

Paragraph 87: Facebook Mergers and Acquisitions department emails Mark Zuckerberg to make the case for buying a company with a successful mobile social media strategy: "imo, photos (along with comprehensive/smart contacts and unified messaging) is perhaps one of the most important ways we can make switching costs very high for users - if we are where all users’ photos reside because the upoading [sic] (mobile and web), editing, organizing, and sharing features are best in class, will be very tough for a user to switch if they can’t take those photos and associated data/comments with them." [emphasis added]

Here, Zuckerberg’s executives are proposing that if Facebook could entice people to lock up their family photos inside Facebook’s silo, Facebook could make confiscating those pictures a punishment for disloyal users who switched platforms.

Paragraphs 144/145: A Facebook engineer discusses the plan to reduce interoperability selectively, based on whether an app developer might help people use rivals to Facebook’s own projects. “[S]o we are literally going to group apps into buckets based on how scared we are of them and give them different APIs? How do we ever hope to document this? Put a link at the top of the page that says ‘Going to be building a messenger app? Click here to filter out the APIs we won’t let you use!’ And what if an app adds a feature that moves them from 2 to 1? Shit just breaks? And a messaging app can’t use Facebook login? So the message is, “if you’re going to compete with us at all, make sure you don’t integrate with us at all.’? I am just dumbfounded..[T]hat feels unethical somehow, but I’m having difficulty explaining how.  It just makes me feel like a bad person.”

Paragraph 187: A Facebook executive describes how switching costs are preventing Google’s “Google+” service from gaining users: "[P]eople who are big fans of G+ are having a hard time convincing their friends to participate because 1/there isn’t [sic] yet a meaningful differentiator from Facebook and 2/ switching costs would be high due to friend density on Facebook.” [emphasis added]

Finally, in paragraph 212, the FTC summarizes the ways that switching costs constitute an illegitimate means for Facebook to maintain its dominance: “In addition to facing these network effects, a potential entrant in personal social networking services would also have to overcome the high switching costs faced by users. Over time, users of Facebook’s and other personal social networks build more connections and develop a history of posts and shared experiences, which they cannot easily transfer to another personal social networking provider.  Further, these switching costs can increase over time—a “ratchet effect”—as each user’s collection of content and connections, and investment of effort in building each, continually builds with use of the service.” [emphasis added]

And, the FTC says, Facebook knows it:

“Facebook has long recognized that users’ switching costs increase as users invest more time in, and post more content to, a personal social networking service. For example, in January 2012, a Facebook executive wrote to Mr. Zuckerberg: ‘one of the most important ways we can make switching costs very high for users - if we are where all users’ photos reside . . . will be very tough for a user to switch if they can’t take those photos and associated data/comments with them.’ Facebook’s increase in photo and video content per user thus provides another indication that the switching costs that protect Facebook’s monopoly power remain significant.” [emphasis added]

Network effects are how you get users. Switching costs are how you hold them hostage. The FTC Facebook complaint makes it clear that antitrust regulators have wised up to this phenomenon, and not a moment too soon.

Cory Doctorow