California Courts Must Protect Data Privacy

3 months ago

If our legal rights to data privacy aren’t enforceable, they are just empty promises. One of the best ways to enforce them is to let people sue the companies that violate their data privacy. Unfortunately, the U.S. Supreme Court has been chipping away at private enforcement by rewriting a legal doctrine called “standing,” which determines who has been harmed enough to deserve their day in court.

California’s standing rules are different, and far more protective. But a recent state appeals court decision may change those rules, closing the courthouse doors to victims of corporate violations of data privacy laws. This week, EFF filed an amicus letter with the California Supreme Court, urging it to review that decision and keep those doors open. Our co-amicus is the Electronic Privacy Information Center (EPIC), and we had assistance from Hunter Pyle Law and Feinberg, Jackson, Worthman & Wasow.

The case, called Limon v. Circle K Stores, alleges the company violated the federal Fair Credit Reporting Act (FCRA) by presenting a prospective employee with a confusing request for “consent” to run a background check on them. The plaintiff initially sued the company in federal court, but the case was dismissed under federal standing doctrine. The plaintiff then sued the company in California state court, but the case was dismissed again. Worse, when a California appellate court affirmed this dismissal, it imported restrictive federal standing requirements into California’s law. The plaintiff is asking the California Supreme Court to take another look and fix this dangerous mistake.

EFF has filed many amicus briefs in federal court in favor of broad standing to bring data privacy lawsuits. So has EPIC. A recurring question in federal court is whether the plaintiff’s injury is sufficiently “concrete” to satisfy the U.S. Constitution’s limit of federal litigation to “cases and controversies.” It should be enough to suffer a deprivation of one’s legal right to data privacy, without having to prove more, such as an economic or physical injury. After all, American law has historically recognized causes of action for the loss of control over what other people know about us, including claims for intrusion upon seclusion and publication of private facts.

But in TransUnion v. Ramirez (2021), a five-Justice majority of the U.S. Supreme Court rejected standing for some 6,000 people to sue the credit reporting agency for its violation of FCRA. According to the majority, the company did not cause a concrete injury when it negligently and falsely labeled these innocent people as potential terrorists and made that dangerous information available to employers and other businesses. This opinion was wrongly decided and should be overruled.

In the meantime, state courts must step up as guardians of data privacy. As explained by the TransUnion dissent, state courts are now “the sole forum” for certain kinds of FCRA and other claims.

As the California Supreme Court recently held: “Unlike the federal Constitution, our state Constitution has no case or controversy requirement imposing an independent jurisdictional limitation on our standing doctrine.” Thus, it is enough for a plaintiff to show they have “an actual and substantial interest” in the case’s outcome, to ensure the parties “press their case with vigor.” A person should be able to pass this test when a business violates their legal right to data privacy.

We hope the California Supreme Court will grant review of Limon, reverse the erroneous appellate court ruling, and ensure that Californians can still turn to state court to protect their data privacy.

You can read our amicus letter here.

Adam Schwartz

Here's How Apple Could Open Its App Store Without Really Opening Its App Store

3 months ago
And what we can do about it.

With this year’s passage of the EU’s Digital Markets Act (DMA), very large online platforms - those with annual EU revenues of €7.5 billion or more or a market capitalization of at least €75 billion, and at least 45 million monthly EU users - will have to open up their devices to rival app stores.

While this has implications for game consoles, the main attraction is the mobile market, specifically Apple’s iOS-based mobile devices: iPhones, iPads, iPods and Apple Watches. These devices are locked to Apple’s official App Store, and EU law prohibits the public from modifying them to accept alternative app stores from other vendors, under Article 6 of 2001’s EU Copyright Directive (EUCD).

With the public unable to legally reconfigure their devices to use rival app stores, we are dependent on Apple’s permission if we want to get our iOS apps elsewhere - and, according to a Bloomberg report, granting that permission is just what Apple is about to do.

Though Apple hasn’t formally announced a plan to open its devices to rival app stores (and indeed, has not yet affirmed that it will comply with the DMA at all), Bloomberg’s Mark Gurman cites multiple Apple employees who provide early details of the plan.

Apple Protects Its Customers, Just Not At the Expense of Its Investors

As ever, the devil is in those details. Apple’s App Store does a generally excellent job of protecting users from malicious code, privacy invasions and deceptive practices, but not always. Like any company, Apple will sometimes make mistakes, but the risk to Apple customers is by no means limited to lapses and errors.

Apple’s commitment to its customers’ privacy and integrity is admirable, but it’s not absolute. Apple continuously strikes a balance between its customers’ interests and Apple’s shareholders’ interests. When the Chinese government ordered Apple to remove working VPNs from the App Store, Apple complied.  When the Chinese government ordered Apple to install backdoors in its cloud backup service, Apple complied.  When the Chinese government ordered Apple to break AirDrop so it couldn’t be used to organize anti-government demonstrations, Apple complied.

On the other hand, when the FBI ordered Apple to add backdoors to its devices, Apple refused - and rightly so.

This doesn’t mean that Apple values the safety, privacy and free expression rights of its Chinese customers less than it values the safety, privacy and free expression rights of its American customers.

Rather, it’s that the Chinese government can harm Apple’s shareholders worse than the American government can: if Apple were denied access to low-waged Chinese manufacturing and 350 million middle-class Chinese consumers, it would have to pay much more to manufacture its products, and it would sell far fewer of them: none in China, and fewer elsewhere, thanks to the higher prices it would have to charge.

The App Tax

Even when powerful governments aren’t involved, Apple sometimes puts its shareholders ahead of its customers: the company’s policy of charging very high payment processing fees (30% for large businesses, 15% for some small businesses) means that some products and services simply can’t be offered at all without losing money. For example, the wholesale discount on audiobooks is 20%, so a bookseller that makes its wares available through an iOS app will lose money on every sale. 
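
To make that arithmetic concrete, here is a minimal Python sketch. The $20 list price is a hypothetical assumption for illustration; the 20% wholesale discount and 30% commission are the figures cited above.

    # Illustrative numbers only: a hypothetical $20 audiobook sold through an iOS app.
    list_price = 20.00            # assumed retail price (hypothetical)
    wholesale_discount = 0.20     # the bookseller keeps 20% of list; 80% goes to the publisher
    apple_commission = 0.30       # Apple's cut of in-app sales by large businesses

    owed_to_publisher = list_price * (1 - wholesale_discount)   # $16.00
    owed_to_apple = list_price * apple_commission                # $6.00
    net_per_sale = list_price - owed_to_publisher - owed_to_apple

    print(f"Bookseller's net per sale: {net_per_sale:+.2f} dollars")  # -2.00, a loss on every sale

Whatever the assumed price, the pattern holds: a 30% commission swallows the bookseller's entire 20% margin and then some.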

In the case of audiobooks, Apple uses its high fees to clear the field of competitors for its own product, Apple Books, which sells books that are permanently locked (through Digital Rights Management) to Apple’s platform. 

Incredibly, that’s the best outcome of Apple’s “app tax.” In other cases, companies simply pass on Apple’s commission to Apple’s customers by raising prices (since everyone who sells via an iOS app must pay the app tax, they all raise prices). Even that is better than the worst outcome, where products and services never come to market because they can’t be profitable after paying the app tax, and the products won’t sell if the app tax is added onto their sale price.

These bad outcomes are endemic to all app store businesses. Once a company has the power to decide what you can and cannot buy, and how much you must pay for it, it can use that power to shift value from its customers to its shareholders. The more those customers have invested in the platform, the worse the platform’s proprietor can treat them without fearing that they will quit. High switching costs are the enemy of good corporate behavior.

Walled Gardens and Prison Walls

It’s fine for companies to build walled fortresses around their products to keep bad guys out, but those fortress walls can quickly become prison walls that lock their customers in. For example, Apple made privacy changes to iOS that effectively block most third party tracking, but it secretly continued to spy on iOS users even when they had opted out of tracking. This is where the DMA comes in: by forcing companies to open their app stores to rivals, the DMA will use competition to discipline large companies, ensuring that companies like Apple build fortresses, but not prisons. For example, a third-party app store could block Apple’s tracking on iOS devices and let Apple device owners truly opt out of all tracking.

But again, the devil is in the details. The app store plan reported in Bloomberg has many gotchas that would allow Apple to claim to have complied with the DMA without satisfying either the letter of the law, or - more importantly - its intent: to allow customers true alternatives to Apple and its compromised judgments about what is best for them.

How To Sabotage the DMA

Here are some red flags to watch for as Apple’s plans mature:

Forcing software authors to sign up for Apple’s Developer Program: Apple’s Developer Program offers many tools and services to software authors who choose to pay to join it. But it should be a choice. Some developers may see the benefits of paying for Apple’s blessing on their products, while others may choose to save their money - or they may choose not to sign onto Apple’s eye-glazingly long and bowel-looseningly terrifying developer terms and conditions.

As part of opening up iOS devices to other app stores, Apple must not impede the development of rival toolchains and software development kits (SDKs) for iOS developers. Software authors should be able to choose which tools they use.

Forcing rival app stores to use the same editorial criteria as Apple’s App Store: The DMA doesn’t propose to turn app stores into a free-for-all; Apple is permitted to impose some security standards on third-party iOS app stores. However, Apple should not be permitted to bootstrap genuine security concerns into general editorial oversight of its competitors’ stores. Apps that don’t meet Apple’s editorial standards - like games that simulate work in an offshore sweatshop or apps that report civilian deaths from US drone strikes - should not be blocked on other app stores if they meet those stores’ editorial standards.

Requiring that third-party app stores pay Apple for security vetting: It’s one thing for Apple to oversee the criteria by which third-party app stores assess the security of the apps they carry; it’s another thing for Apple to require its competitors to pay it to vet the apps they sell. According to Bloomberg, this is what Apple plans. Despite Apple’s submissions to the world’s governments, it is not the only entity capable of assessing the security of an app, nor can it claim to do so perfectly.

Requiring third-party app stores to process payments through Apple: Third party app stores and the apps they carry should be able to use any secure payment processor. If Apple wants their business, it should make a competitive product, not order all comers to pay a 30% app tax.

Arbitrarily revoking third-party app stores: A common complaint among businesses that rely on Apple’s App Store is that Apple capriciously and arbitrarily rejects their apps. Apple removes some apps from the App Store for violating rules, while turning a blind eye to other apps that violate the same rules. This is frustrating when it applies to individual apps, but it could be much worse if it is applied to whole app stores.

When Apple removes an app store for allegedly failing to meet its security obligations, it could take a long time to figure out whether the action was warranted, and during that delay, the suspended app store’s customers could lose access to the media they’ve purchased, the services they use, and the data they’ve entrusted to their apps.

Bad-faith app store deletions represent a serious danger to the entire DMA. It wouldn’t take more than a couple of bad experiences with a third-party app store disappearing without warning to dissuade the public from ever trusting another third-party store.

How to Defend the DMA

Luckily, the DMA doesn’t give companies the final say on these matters. The European  Commission, the EU’s administrative body, has the power to oversee and enforce compliance with the regulation. 

All of these items - the standards for security vetting, the editorial standards, the criteria for removal of an app store, the remedies for the users of those app stores when they are removed - can’t solely be left to the judgment of the firms involved. These companies have their customers’ backs sometimes, but when it comes down to a fight between their customers’ interests and their shareholders’ interests, the shareholders always win.

Why Should Europeans Get to Have All the Fun?

All of this is still up in the air: Apple almost certainly hasn’t finalized its plans, and the EU is still gearing up to implement and administer the DMA.

One thing we dearly hope is that Apple will not withhold the rights it gives back to its European customers from its customers all over the world. Every person deserves the right to technological self-determination, no matter where they are.

Cory Doctorow

User Generated Content and the Fediverse: A Legal Primer

3 months ago

A growing number of people are experimenting with federated alternatives to social media like Mastodon, either by joining an “instance” hosted by someone else or creating their own instance by running the free, open-source software on a server they control. (See more about this movement and joining the fediverse here).

The fediverse isn’t a single, gigantic social media platform like Facebook or YouTube. It’s an expanding ecosystem of interconnected sites and services that let people interact with each other no matter which one of these sites and services they have an account with. That means people can tailor and better control their own experience of social media and be less reliant on a monoculture developed by a handful of tech giants.

For people hosting instances, however, it can also mean some legal risk. Fortunately, there are some relatively easy ways to mitigate that risk – if you plan ahead. To help people do that, this guide offers an introduction to some common legal issues, along with a few practical considerations.

Two important notes: (1) This guide is focused on legal risks that flow from hosting other people’s content, under U.S. law. In general, the safe harbors and immunities discussed below will not protect you if you are directly infringing copyright or defaming someone. (2) Many of us at EFF are lawyers, but we are not YOUR lawyers. This guide is intended to offer a high-level overview of U.S. law and should not be taken as legal advice specific to your particular situation.

Copyright

Copyright law gives the rightsholder substantial control over the use of expressive works, subject to several important limits such as fair use. Violations may result in ruinous damage awards if some of your users share infringing material via your instance and if you are found to be responsible for that infringement under doctrines of “secondary liability” for copyright infringement.

However, the Digital Millennium Copyright Act, 17 USC § 512, creates a "safe harbor" immunity from copyright liability for service providers – including instance admins – who "respond expeditiously" to notices claiming that they are hosting or linking to infringing material. Taking advantage of the safe harbor protects you from having to litigate the complex question of secondary liability and from the risk you would ultimately be found liable.

The safe harbor doesn’t apply automatically. First, the safe harbor is subject to two disqualifiers: (1) actual or “red flag” knowledge of specific infringement; and (2) profiting from infringing activity if you have the right and ability to control it. The standards for these categories are contested; if you are concerned about them, you may wish to consult a lawyer.

Second, a provider must take some affirmative steps to qualify:

  1. Designate a DMCA agent with the Copyright Office.

This may be the best $6 you ever spend. A DMCA agent serves as an official contact for receiving copyright complaints, following the process discussed below. Note that your registration must be renewed every three years, and if you fail to register an agent you may lose the safe harbor protections. You must also make the agent’s contact information available on your website, such as on a publicly viewable page that describes your instance and policies.

  2. Have a clear DMCA policy, including a repeat infringer policy, and follow it.

To qualify for the safe harbors, all service providers must “adopt and reasonably implement, and inform subscribers and account holders of . . . a policy that provides for the termination in appropriate circumstances of . . . repeat infringers.” There’s no standard definition for “repeat infringer” but some services have adopted a “three strikes” policy, meaning they will terminate an account after three unchallenged claims of infringement. Given that copyright is often abused to take down lawful speech, you may want to consider a more flexible approach that gives users ample opportunity to appeal prior to termination. Courts that have examined what constitutes “reasonable implementation” of a termination process have stressed that service providers need not shoulder the burden of policing infringement.

Hosting services, which are the most likely category for a Mastodon instance, must also follow the “notice and takedown” process, which requires services to remove allegedly infringing material when they are notified of it. To be valid under the DMCA, the notice must include the following information:

  • The name, address, and physical or electronic signature of the complaining party
  • Identification of the infringing materials and their internet location (e.g. a url)
  • Sufficient information to identify the copyrighted works
  • A statement by the copyright holder of a good faith belief that there is no legal basis for the use complained of
  • A statement of the accuracy of the notice and, under penalty of perjury, that the complaining party is authorized to act on the behalf of the copyright holder

Providers are not required to respond to a DMCA notice that does not contain substantially all of these elements. Copyright holders are required to consider whether the targeted use may be a lawful fair use before sending notices.
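
As a purely illustrative aid (not legal advice, and not any official DMCA tooling), here is a minimal Python sketch of how an instance admin might check whether an incoming notice appears to include each of the elements listed above. All field names are hypothetical.

    # Hypothetical completeness check for an incoming DMCA takedown notice.
    # It only checks that each required element appears to be present; it says
    # nothing about whether the notice is valid or the underlying claim has merit.
    REQUIRED_ELEMENTS = [
        "complainant_name_address_signature",  # name, address, physical or electronic signature
        "infringing_material_location",        # identification of the material and its URL
        "copyrighted_work_identification",     # which copyrighted works are at issue
        "good_faith_statement",                # belief there is no legal basis for the use
        "accuracy_and_authority_statement",    # accuracy + authority to act, under penalty of perjury
    ]

    def missing_elements(notice: dict) -> list:
        """Return the required elements this notice does not appear to include."""
        return [field for field in REQUIRED_ELEMENTS if not notice.get(field)]

    # Example: a notice that omits the accuracy/authority statement.
    incomplete_notice = {
        "complainant_name_address_signature": "Jane Doe, 123 Example St., /s/ Jane Doe",
        "infringing_material_location": "https://example.social/@someuser/12345",
        "copyrighted_work_identification": "Photograph titled 'Sunrise'",
        "good_faith_statement": "I have a good faith belief the use is not authorized.",
    }
    print(missing_elements(incomplete_notice))  # ['accuracy_and_authority_statement']

If elements are missing, the notice is not a valid DMCA notice and, as noted above, you are not required to respond to it.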

If they think they’ve been unfairly targeted, users can respond with a counter-notice. You should forward it to the rightsholder. At that point, the copyright claimant has 10-14 business days to file a lawsuit. If they don’t, you can put back the material and still remain immune from liability.

A proper counter-notice must contain the following information:

  • The user's name, address, phone number, and physical or electronic signature [512(g)(3)(A)]
  • Identification of the material and its location before removal [512(g)(3)(B)]
  • A statement under penalty of perjury that the material was removed by mistake or misidentification [512(g)(3)(C)]
  • Consent to local federal court jurisdiction, or if overseas, to an appropriate judicial body. [512(g)(3)(D)]

To help the process along, it’s good practice to forward the original takedown notice to the user, so they can understand who’s complaining and why.

Finally, service providers must “accommodate and not interfere with standard technical measures…used by copyright owners to identify or protect copyrighted works.” In order to qualify as a “standard technical measure,” the measure must have been developed “pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process,” and not impose “substantial costs” on service providers. As of 2022, nothing appears to qualify.

State Laws and Federal Civil Claims, or Why Section 230 Isn't Just a "Big Tech" Protection

Thanks to Section 230 of the Communications Decency Act, online intermediaries that host or republish speech are protected against a range of state law claims, such as defamation, that might otherwise be used to hold them legally responsible for what their users say and do. Section 230 applies to basically any online service that hosts third-party content, such as web hosting companies, domain name registrars, email providers, social media platforms – and Mastodon instances.

Section 230 says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). This protects the provider from liability for what users say in a wide variety of legal contexts. It also provides immunity from liability that might arise from a service’s removal of users’ speech or other moderation decisions. Unlike the DMCA, Section 230 does not require service providers to take any affirmative steps to qualify for protection.

In 2018, however, Congress passed FOSTA/SESTA, which created new civil and criminal liability for anyone who “owns, manages, or operates an interactive computer service” and creates content (or hosts third-party content) with the intent to “promote or facilitate the prostitution of another person.” The law also expands criminal and civil liability to treat any online speaker or platform that allegedly assists, supports, or facilitates sex trafficking as though it were participating “in a venture” with individuals directly engaged in sex trafficking.

EFF represents several plaintiffs who are challenging the constitutionality of FOSTA/SESTA. As of this writing, the law is still on the books.

Privacy and Anonymity

Your users may register using pseudonyms, and people who want to respond to them may ask you to reveal any personally identifying information you have about them. They may seek to use that information as part of a legal action, or to retaliate in some other way. Law enforcement may also seek information from you as part of a criminal investigation or prosecution.

If you receive a subpoena or other legal document requiring you to produce that information, consider consulting a lawyer to see if you are required to comply. The best practice is to notify the user as soon as possible so that they can challenge the subpoena in court. Many such challenges have been successful, given the strong First Amendment protections for anonymous speech. You may delay notification for an emergency, a gag order, or when providing notice would be futile, but best practice is to publicly commit to providing notice after the emergency is over or the gag has expired.

Consider publishing a regularly updated Transparency Report, which should include useful data about how many times governments sought user data and how often you complied. Even if you get few or no government requests, publishing a report showing zero requests can provide useful data to users.

In addition, consider whether you are collecting and/or retaining more information than necessary, such as logging IP addresses, datestamps, or reading activity. If you don’t have it, they won’t come for it. It is also a best practice to publish a law enforcement guide explaining how you respond to data demands from the government, including what information you cannot provide.

Child Sexual Abuse Material (CSAM)

Service providers are required to report any CSAM on their servers to the CyberTipline operated by the National Center for Missing and Exploited Children (NCMEC), a private, nonprofit organization established by the U.S. Congress, and can be criminally prosecuted for knowingly facilitating its distribution. NCMEC shares those reports with law enforcement.  However, you are not required to affirmatively monitor your instance for CSAM.

Other legal issues

In our litigious society, there are many "causes of action" — reasons for initiating a lawsuit — which a creative and determined plaintiff can dream up. Aggressive plaintiffs will sometimes use dubious claims to try to squelch protected speech. If you get a threat or request for information that seems inappropriate, feel free to reach out to EFF (info@eff.org) and we will help if we can.

Corynne McSherry

VICTORY! There Is No Link Tax in the End-of-Year Bills

3 months ago

The Journalism Competition and Preservation Act (JCPA) was a bad bill to begin with and somehow just got worse and worse with each iteration. Its best chance was being slipped into unrelated legislation that no one in Congress could afford to block.

Earlier this month, it seemed like that might happen.  Proponents added this controversial, unconstitutional, poorly-conceived piece of legislation to the National Defense Authorization Act (NDAA), a routine but “must-pass” military budget bill. Thanks to all of you who spoke against this trick and forced the JCPA to be considered on its own merits.

While many like to frame the opposition to the JCPA as that of Big Tech, we know better. The union that represents many reporters was against it. Civil society groups and frequent critics of Big Tech were against it. And most importantly, you were against it. You drove thousands of messages to Congress exposing this bill as dangerous to the free flow of information online.

We couldn’t rest after that fight because while the NDAA had closed its doors to the JCPA, there was still a chance that it would be added to the end-of-year omnibus, a massive spending bill that lays out the budget for the government for the next fiscal year and routinely gets all sorts of other bills added to it. But we kept pushing—you kept pushing—and it appears we have finally won.

Thank you, we could not have done it without you. Hopefully, this lets Congress know that a link tax is not how you help journalists.

Katharine Trendacosta

We Need to Talk About Infrastructure

3 months ago

Essential internet infrastructure should be content-neutral.  These services should not make editorial decisions that remove content beyond the scope of the law.  This is in part because history shows that any new censorship methods will eventually be abused and that those abuses often end up hurting the least powerful.

That’s the easy part. The hard part is defining what exactly "essential internet infrastructure" is, and for which users. We also need to recognize that this designation can and does change over time. Right now, the "infrastructure" designation is in danger of getting tossed around too easily, resulting in un-nuanced conversations at best and an unjustified cloak of protection, sometimes for anti-competitive business models, at worst.

The term “infrastructure” can encompass a technically nuanced landscape of things – services, standards, protocols, and physical structures – each of which has varying degrees of impact if they’re removed from the proverbial stack. Here’s how EFF thinks about the spectrum of infrastructure with respect to content moderation in late 2022, and how our thinking has changed over time.

Essentially Infra

Some things are absolutely, essentially, infrastructure. These things often have no meaningful alternative, no inconvenient but otherwise available option. Physical infrastructure is the easiest type to see here, with things like submarine cables and internet exchange points (IXPs). These things make up the tangible backbone of the internet. Parts of the logical layer of the internet also sit on this far side of the spectrum of what is or is not critical infrastructure, including protocols like HTTP and TCP/IP. These components of physical and logical infrastructure share the same essential character and the same obligation to content neutrality. Without them, the internet in its current form simply could not exist. At least not at this moment.

Pretty much Infra

Then there's a layer of things that are not necessarily critical internet infrastructure but are essential for most of us to run businesses and work online. Because of how the internet functions today, things in this layer have unique chokepoint capabilities. This includes payment processors, certificate authorities, and even app stores. Without access to these things, many online businesses cannot function. Neither can nonprofits and activist groups and many, many others. The unique power that things in this layer hold over who can participate equitably online is too much to deny. Sure, some alternatives technically exist: things like Monero, side-loaded APKs, or root access to a web server for generating your own cert with Certbot. But these are not realistic options to recommend for anyone without significant technical skill or resources. There's no denying that when these “pretty much infra” services choose to police content, those choices can be disproportionately impactful in ways that end users and websites can’t remedy.

Not really Infra, but for some reason we often get stuck saying it is

Then there’s this whole other layer of things that take place behind the scenes of apps, but still contribute some important service to them. These things don’t have the literal power to keep a platform’s lights on (or turn the lights off), but they provide an undeniable and sometimes important “quality of life.”

CDNs, security services, and analytics plugins are all great examples. If they withdraw service the impact can vary, but on the internet of 2022, someone dropped by one service almost always has easy-to-obtain (even if not as sleek or sophisticated) alternative solutions.

CDNs are an important example to consider: they provide data redundancy and speed of access. Sometimes they’re more vital to an organization, like if a company needs to send a one-gigabyte software update to a billion people ASAP. A web app’s responsiveness is also somewhat dependent on the reliability of a CDN. Streaming is a good example of something whose performance can be more dependent on that kind of reliability. Nonetheless, a CDN doesn’t have the lights on/off quality that other things do and only very rarely is its quality-of-life impact severe enough that it qualifies for the “pretty much infra” category we just covered. Unfortunately, mischaracterizing the infrastructural quality of CDNs is a common mistake, one we’ve even made ourselves.

EFF’s past infrastructure characterizations

At EFF, we are deeply committed to ensuring that users can trust us to be both careful and correct in all of our advocacy. Our framing of Cloudflare’s decision to cut off service to Kiwi Farms as an “infrastructure” issue, in a post discussing content interventions more generally, didn’t meet that bar in 2022.

The silver lining is that it prompted us at EFF to reconsider how we approach infrastructure and content moderation decisions and to think about how today’s internet is different than it was just a few years ago. In 2022, could we applaud Cloudflare’s decision to not do business with such ghouls while also strongly supporting the principle that infrastructure needs to be content-neutral? It turns out the answer is yes, and that answer begins with a careful and transparent reconsideration of what we mean when we say “infrastructure.”

Our blog post raised concerns about “infrastructure” content interventions, and pointed to Cloudflare’s decision, among others. Yet what happened as a result of that decision is clear: shortly after Kiwi Farms went offline, it came back online with the help of a FOSS bot-detection tool. It came at the cost of a slightly slower load time and the occasional gatekeeping CAPTCHA, but that result clearly put this situation in a “not really infra” category in 2022, even if at some earlier time the loss of Cloudflare’s anti-DDOS service might have been closer to an infrastructure-level loss.

When a business like Cloudflare isn’t really crucial to keeping a site online, it should not claim “infrastructure” status (or use public utility examples to describe itself). EFF shouldn’t do that either. 

Because true censorship – kicking a voice offline with little or no recourse – is what we’re really worried about when we say that infrastructure should be content-neutral. And since we’re worried about steps that will truly kick people off of the internet, we need to recognize that which services qualify for that status changes over time, and may even change depending on the resources of the person or entity censored.

Infrastructure matters because it is crucial in protecting expression and speech online. EFF will always stand up to “protect the stack” even if what’s in the stack can and will change over time. 

Electronic Frontier Foundation

EFF Receives $250k Grant from Craig Newmark Philanthropies 

3 months ago

EFF has received a $250,000 grant from Craig Newmark Philanthropies to support its programs to teach and protect journalists, advocate against abusive “stalkerware” technology, and maintain its cybersecurity Threat Lab.

This generous support will help EFF educate journalists about, and protect them from, digital and legal threats. It also will help us do public education, build coalitions, and conduct research into the disciplinary technologies that corporations, schools, and individuals use to violate people’s autonomy and privacy. 

This grant, for work to be conducted through April 2023, is the latest in a series of grants made to EFF by Craig Newmark Philanthropies, created by the founder of Craigslist to support causes including building networks to help protect the country in the cybersecurity world, defending against disinformation warfare, and fighting online harassment, as well as supporting ethical and trustworthy journalism, particularly in underserved communities.

EFF, founded in 1990, is the leading nonprofit organization defending civil liberties in the digital world, championing user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. Newmark is a member of EFF’s advisory board. 

“Craig Newmark Philanthropies has been a tremendous, unwavering supporter of EFF’s work to protect journalists and people everywhere, and we are extremely grateful,” said EFF Executive Director Cindy Cohn. “This grant advances EFF's mission to ensure that technology supports freedom, justice, and innovation for all people of the world.” 

Josh Richman

No Nudity Allowed: Censoring Naked Yoga

3 months ago

Most nude content is legal, and engaging with such material online provides individuals with a safe and open framework to explore their identities, build communities, and discover new interests. However, social networks and payment processors are intervening to become the arbiters of how people create and engage with nudity and sexuality—both offline and in the digital space. As a result of this flawed system, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power have regularly been shut down.

One recent illustration is the censorship of True Naked Yoga—a platform providing online naked yoga videos. In August 2022, payment processor Stripe banned True Naked Yoga, deeming it a “restricted business” that contravened Stripe’s service agreement. Stripe gave True Naked Yoga just four days’ notice before the account was closed.

Stripe had reviewed the site in December 2021 and did not flag any problems, nor did it give True Naked Yoga any warning that the terms of service had changed. The abrupt ban forced True Naked Yoga to shut down for more than one month until it could find a new payment processor.  

Processors are entitled to enforce their terms of service, but this is a shameful way to do it, and a terrible precedent. Payment services provide vital financial pathways for companies and nonprofits. Websites—whether they accept online donations, sell goods online, or simply have a bank account—rely on their financial institutions to ensure they can continue to operate. We’ve seen many examples of pressure being exerted on a website’s wallet to try to shut down lawful speech. 

So even when material violates terms of service, it is crucial that payment processors give users ample notice so they can arrange for an alternative. They should also give users the right to appeal. 

Speaking to EFF, True Naked Yoga noted that: 

“These prudish tactics often do nothing to make the internet a safer place, but instead negatively affect sex workers, people of color, artists, and even naturist communities. Though these terms of service are in place to supposedly create a safer environment online, they consequently create a situation in which pornography has a monopoly over nudity on the internet. 

Allowing space for nonsexual nudity in society and online helps to combat stigma and other sociocultural harm, while also encouraging more nuanced conversations that have the power to shape our world in more positive ways.”

True Naked Yoga has also been banned from the marketing platform MailChimp for violating its Acceptable Use Policy, which forbids “services with a sexual emphasis or sexually explicit content—including images depicting nudity.” Its accounts on Facebook, Twitter, TikTok, and YouTube have been removed as well, and Instagram deleted the yoga platform’s original account four times before a permanent deletion—despite the content being uploaded in accordance with community guidelines.

Nudity transcends pornographic content and sexually engaging material. Yet, the actions of social media platforms and payment intermediaries—like Stripe—are arbitrarily influencing what kind of speech and nudity can exist online. It’s time for payment processors to stop censoring legal content and grow up.

Paige Collings

Looking Forward and Back at the California State Legislature

3 months 1 week ago

California’s legislators took their oaths of office for a fresh two-year session last week. As we prepare for a new session, it’s worth looking back at the past session—in which, with your help, EFF was able to advocate successfully for digital rights victories. California is often seen as a leader in recognizing the importance of privacy, innovation, and free expression. This year, EFF was proud to support legislation that again put the state at the forefront of privacy protections for those seeking reproductive and gender-affirming care.

EFF supported three bills—A.B. 2091, A.B. 1242, and S.B. 107—that were signed into law and take steps to set California as a data sanctuary state for anyone seeking reproductive or gender-affirming care. Authored by Assemblymember Rebecca Bauer-Kahan, Assemblymember Mia Bonta, and California State Senator Scott Wiener, these bills will protect people by forbidding health care providers and many businesses in California from complying with out-of-state warrants seeking information about reproductive or gender-affirming care.

Health privacy has always been important to EFF. While we are not focused on reproductive justice or gender-affirming care advocacy, we joined those advocacy communities in support of these bills because no one should fear receiving a medical procedure because of privacy risks. In the wake of the Dobbs decision, the increasing criminalization of health care makes protecting health privacy newly important.

In addition to these three bills, EFF supported A.B. 2089, authored by Asm. Bauer-Kahan, which was signed into law by Gov. Newsom. This bill extends the protections of the California Confidentiality of Medical Information Act (CMIA) to information generated by mental health apps—previously a glaring hole in medical privacy protections.

Unfortunately, not every legislative battle we tackled this year was as successful. EFF began the session by sponsoring two bills. The first, the Biometric Information Privacy Act (BIPA), would have mirrored many of the protections of a landmark Illinois statute that requires companies to obtain opt-in consent before collecting your biometric information. It was authored by California State Senator Bob Wieckowski. The second, the Student Test Takers’ Privacy Act, was aimed at preventing student proctoring companies from collecting more information than is necessary. It was authored by California State Senator Richard Pan. Both, in line with EFF’s legislative principles, were introduced with a strong private right of action—an enforcement mechanism that gives people the right to sue companies that violate their privacy.

Neither survived the session intact. As in years past, California’s lawmakers were unwilling to stand up for the individual right to sue over privacy violations, and both bills took a beating as they went through the committee process. The California BIPA bill was quietly killed in the Senate Appropriations Committee—which, unlike policy committees, does not allow for public testimony—using a specious argument about costs to California’s courts. The Judicial Council of California itself has said that the legislature should not mark bills as costly due to the inclusion of a private right of action. However, the Senate Appropriations Committee continues to use this argument. We thank Sen. Wieckowski and his staff for authoring this bill and working to raise the issue in the legislature.

Similarly, after it was made clear that the private right of action would keep the student privacy bill from moving forward, Sen. Pan agreed to drop the provision. While EFF understood that decision, we—along with our co-sponsors, Privacy Rights Clearinghouse—could no longer sponsor that bill after that point. S.B. 1172 has been signed into law and takes some important steps to curb data collection by proctoring companies. However, the bill lacks teeth. Enforcement of the bill will depend heavily on the workloads and priorities of the state AG and city and county attorneys. This unfortunately silences the voices of those most affected by overbroad data collection: students.

There were also bills that EFF opposed due to privacy concerns. One was A.B. 984, sponsored by digital license plate company Reviver. It originally would have allowed for GPS trackers to be placed in the digital license plates of personal vehicles. After advocacy work with many other groups, including ACLU California Action, the National Network to End Domestic Violence, and others, the bill was amended to remove this troubling flaw. We thank Assemblymember Lori Wilson for hearing our concerns and hope others will not reintroduce this bill in the next session.                                                                                                                      

EFF also raised concerns about A.B. 2273 and A.B. 587, which address how social media and other technology companies moderate their platforms. Both were signed into law and address the ways that social media platforms present and report information. While these bills may be a well-intentioned reaction to reports from whistleblowers such as Frances Haugen about the effects social media has on children, we believe both violate the First Amendment. We urge other states not to adopt these bills as models for their own legislation.

In the coming year, we look forward to working with advocates and legislators in California. We especially hope to build on the bills passed this past year to continue to ensure that the data of those seeking medical care are not handed over to those who wish to prosecute or expose them.

Hayley Tsukayama

Federal Agencies Keep Rejecting FOIA Requests for Their Procedures for Handling FOIA Requests

3 months 1 week ago

The majority of federal agencies — including law enforcement agencies like Customs and Border Protection — are refusing to release some of the most basic guidance materials used by their Freedom of Information Act (FOIA) offices: procedures for how they do their jobs.

Government Attic, a website that regularly files FOIA requests and posts the provided records, estimates that at least 60 percent of federal agencies, when faced with requests for their FOIA standard operating procedures (SOPs), claimed that the documents are in draft form and exempt from disclosure or that they don’t have any such records at all.

FOIA is one of the key mechanisms for government transparency. EFF regularly uses FOIA and state public records laws in its work, including to learn about policy making and implementation, expose local police surveillance, and protect the public’s right to know what the government is doing. 

FOIA requests are rarely processed within the 20-workday time frame required under federal law. Many agencies have large backlogs to address; the Central Intelligence Agency, for one, reports having more than 1,000 requests in its processing queue. As part of the annual Chief FOIA Officer reports submitted by government agencies to the Department of Justice, agencies are supposed to offer some transparency around how their FOIA offices process requests and the work they did to try to improve their workflows. FOIA office SOPs are regularly mentioned in these reports.

In its most recent Annual FOIA Report, for example, the Department of Homeland Security (DHS) confirmed that CBP and a number of its other components have such guiding documents.

The SOPs can be a playbook for how agencies respond to FOIA requests. As the description suggests, an SOP describes how the FOIA office handles a typical request. In general, it is not related to a particular law enforcement action, it doesn’t discuss particular people, and it shouldn’t contain any sensitive information about confidential informants or spy techniques.

These SOPs are important for the public to access, because they are a guide to how agencies handle their requests. Being able to see them helps requesters to better understand how each different FOIA office works, allowing them to better formulate their requests and understand the environment in which they’re being processed. Multiple agencies have released some materials describing their procedures, including the Department of Veterans Affairs and the Department of the Treasury.

FOIA dictates a “presumption of disclosure” and requires agencies to apply any of the law’s nine exemptions narrowly, meaning that agencies are supposed to redact specific records, rather than withhold them in their entirety. Agencies should be applying that expectation to SOPs, not barring the public from the very materials that guide it.

In materials posted on Government Attic, the National Transportation Safety Board’s SOPs, for example, describe details of how FOIA officers should triage incoming requests, respond to requests related to foreign investigations, and go through other parts of the FOIA process. Though some sections are redacted, each redaction, at least, points to the applicable exemption, usually b(4), the trade secret exemption, as a way of blocking access to the specifics of FOIAXpress, the portal agencies use to organize requests.

However, when asked for copies of these SOPs by Government Attic, Customs and Border Protection, CIA, and dozens of other agencies have withheld any responsive materials in their entirety — and used some unexpected reasons to do it. 

Rather than release its procedures, CBP cited the b(6) exemption under FOIA, which generally relates to the personal privacy of an individual, often in the form of names, social security numbers, details about where a person might live, etc. In most cases when these exemptions are used, agencies are able to redact the personal information pretty easily, leaving the rest of the document available for review. CBP also claimed that the SOPs were exempt from disclosure under another FOIA exemption that prevents the disclosure of law enforcement techniques. It's unclear why CBP’s FOIA manual would include investigative techniques—unless it's claiming that processing FOIAs in and of itself is a law enforcement technique. It’s also hard to believe that any such techniques are so pervasive as to require withholding the entire manual, rather than redacting specific pages.

The disparity when it comes to releasing the SOPs highlights the variance that exists across FOIA offices when it comes to processing all types of requests. The FOIA process can feel confusingly opaque to requesters. Unfortunately, for many agencies, keeping it that way seems to be their SOP.

Beryl Lipton

Dangerous "Kids Online Safety Act" Does Not Belong in Must-Pass Legislation

3 months 1 week ago

Every year, Congress must follow through on an enormous and complicated task: agreeing on how to fund the government for the following year. The wrangling over spending often comes down to the wire, and this year, some Senators are considering shoehorning a controversial and unconstitutional bill, the Kids Online Safety Act (KOSA), into the must-pass legislation. Make no mistake: KOSA is bad enough on its own, but putting KOSA into the “omnibus package” is a terrible idea. 

Amendments Aren’t Enough

The bill’s sponsors have made last-minute changes to the bill in an attempt to assuage concerns, but these edits don’t resolve its fundamental problems. We’ve spoken about the harms KOSA will cause at length, and they remain in the current version. 

To recap: KOSA’s main provision contains the vague requirement that online services act “in the best interests of a user that the platform knows or should know is a minor,” by taking “reasonable measures” to prevent and mitigate various enumerated harms. These harms include mental health disorders, including (to name a few) the promotion or exacerbation of suicide, eating disorders, and substance use disorders; physical violence, online bullying, and harassment of the minor; and sexual exploitation and abuse.  

There is no doubt that this content exists on the internet and that it can be harmful. But as we’ve written, there is no way a platform can make case-by-case decisions about which content exacerbates, for example, an eating disorder, compared to content which provides necessary health information and advice about the topic. As a result, services will be forced to overcensor to ensure young people—and possibly, all users, if they aren’t sure which users are minors—don’t encounter any content on these topics at all. 

KOSA’s latest text still contains this glaring and unconstitutional flaw at its core. That’s why we continue to urge Congress to not pass it. And Senators should not consider the largest change—the addition of a new “limitations” section—a solution to any of the bill’s problems. The new language reads:


Nothing in subsection (a) shall be construed to require a covered platform to prevent or preclude any minor from deliberately and independently searching for, or specifically requesting, content.

The new “limitation” section is intended to wave away KOSA’s core problem by giving online services an out. But instead, it creates a legal trap. The bill still creates liability for any service that delivers the content to the user, because at root, if the service is aware the user is a minor—or should know, in the language of the bill—the service is still on the hook for any content presented that is not in the user's “best interest.” This new language just begs the question: How can a site provide general information that a minor “deliberately and independently” searches for or finds on their services without that site then acquiring some knowledge that minors are looking at the information? Simply put: They cannot. This language just puts a fig leaf over the problem.

One interpretation of the new “limitation” could be that KOSA creates liability only when platforms deliver the content to minor users who aren’t seeking it, rather than those who are seeking it. This may be a subtle way to go after “algorithmically-presented” content, such as videos recommended by YouTube. But that’s not what the law actually says. Who knows what “independently” means here? Independent of what, or of whom? If a minor finds a site through a search engine, or if a covered platform provides a list of URLs for minors, was that an “independent” search? A law that claims it doesn’t censor content because people can still search for it is fundamentally flawed if the bill also says that services cannot deliver the content.

In its latest form, this bill still lets Congress—and through the bill's enforcement mechanism, state Attorneys General—decide what’s appropriate for children to view online. The result will be an internet that is less vibrant, less diverse, and contains less accurate information than the one we have now. 

Removal of this content is not an abstract harm. At this moment, hospital websites are removing truthful information about gender-affirming healthcare due to both political pressure and legislation. Libraries are removing books about LGBTQ topics. Given what we know about how certain Attorneys General already have animus against online services and children seeking gender-affirming care, this seems like a terrible time to hand them this additional power. Forcing online services to heavily moderate content that could be construed as contributing to physical violence or a mental disorder, and leaving that enforcement up to either the Federal Trade Commission or state Attorneys General, will disparately impact the most vulnerable, and will harm children who lack the familial, social, financial, or other means to access health information from other places. 

First Amendment Concerns

A fundamental principle in First Amendment law is that the government can’t indirectly regulate speech that it can’t directly regulate. Congress can’t pass a law prohibiting children from accessing information about eating disorders, and similarly, Congress can’t impose liability on services that host that speech as a means of limiting its visibility to children. In its latest form, KOSA still creates liability for hosting legal content that the bill defines as one of the enumerated harms (suicide, eating disorders, etc.), and permits enforcement by the FTC and state Attorneys General for violations by services that don’t take whatever “reasonable measures” are necessary (a vague standard) to prevent and mitigate children’s exposure to that information. This is unconstitutional.

Not only will KOSA endanger the ability of young people to find true and helpful information online, but it will also interfere with the broader public’s First Amendment right to receive information. The steps that services will take to limit this information for minors are likely to limit access for all users, because many services will not be in a position to know whether their users are children or adults.

This is not a simple problem to solve: Many services that deliver content do not know the age of their users, especially those that are not social media platforms (and even some of those, like Reddit, do not ask for personal information such as age upon signup, which benefits user privacy). Assuming that they do know, or that they should know, fundamentally misunderstands how these services operate. KOSA would require that services either learn the age of their users and be able to defend that knowledge in court, or remove any potentially offending content. Again, no platform can reasonably be expected to make intelligent and sweeping decisions about which content promotes disordered behavior and which provides necessary health information or advice to those suffering from such behaviors. Instead, most platforms will err on the side of caution by removing all of this content entirely.

Additionally, there is a significant benefit to anonymity and privacy online. This is especially true for members of vulnerable minorities, and for anyone looking up sensitive information, such as about reproductive rights or LGBTQ topics. Young people in particular may require anonymity in some of their online activity, which is one reason why the “parental supervision” elements of KOSA are troubling.

Better Options Exist

The latest amendments to KOSA should not give Congress cover to include it in the omnibus spending bill. It is a massive overreach to include—without full discussion and within must-pass legislation—a law that requires web browsers, email applications, and VPN software, as well as platforms like Reddit and Facebook, to censor an enormous amount of truthful content online. 

While we understand the desire for Congress to do something impactful to protect children online before the year’s end, KOSA is the wrong choice. Instead of jamming sweeping restrictions for online services into a must-pass spending package, Congress should take the time to focus on creating strict privacy safeguards for everyone—not just minors—by passing legislation that creates a strong, comprehensive privacy floor with robust enforcement tools. 

Jason Kelley

Only A Few More Weeks Left to Support EFF Through The CFC!

3 months 1 week ago

The Combined Federal Campaign (CFC) is the world's largest and most successful annual charity campaign for U.S. federal employees and retirees. The pledge period for this year's campaign ends on January 14, 2023, and you can donate to support EFF's mission of fighting for digital freedoms for every internet user.

Donating to EFF through the CFC is easy! Just follow these steps:

  1. Go to GiveCFC.org or scan the QR code below.
  2. Click the DONATE button to give via payroll deduction, credit/debit, or an e-check.
  3. Be sure to type in our CFC ID #10437.

If you have a renewing pledge from a previous year, you can also follow these steps to increase your support!

U.S. federal employees raised over $34,000 for EFF through last year's CFC campaign. That support has helped EFF pull off some serious victories this year, including pushing San Francisco to ban the use of remote-controlled killer robots, working with Congress to pass the Safe Connections Act (which is now law), and keeping the JCPA "link tax" bill out of must-pass U.S. military legislation.

Support from federal employees has a tremendous impact on the work that EFF can do. We couldn't keep up these fights and secure these victories without your help. Support EFF today by using our CFC ID #10437 when you make a pledge!

Christian Romero

EFF Agrees With the NLRB: Workers Need Protection Against Bossware

3 months 1 week ago

The general counsel of the National Labor Relations Board (NLRB) issued an important memo that calls for regulators to protect workers against what she described as “unlawful electronic surveillance and automated management practices.” The NLRB is the independent federal agency charged with defending the collective bargaining rights of workers. In keeping with that mission, the memo lays out the general counsel's plan to enforce existing law and to urge the board itself and relevant federal agencies to create a new framework for applying labor law principles to workplace technology. 

How does this work? Section 7 of the National Labor Relations Act protects the right of workers to organize and to discuss joining unions with their coworkers without retaliation, and the NLRB enforces that right. The board’s General Counsel rightly suggests that surveillance of workers by their bosses can lead to unlawful retaliation, as well as a chilling effect on workplace speech protected by the NLRA.

“It concerns me that employers could use these technologies to interfere with the exercise of Section 7 rights … by significantly impairing or negating employees’ ability to engage in protected activity—and to keep that activity confidential from their employer,” General Counsel Jennifer Abruzzo said in her letter. She added she will urge the board to act to "protect employees from intrusive or abusive electronic monitoring and automated management practices" that interfere with organizing rights.  The general counsel's memo serves as a marker for future cases considered by the NLRB. Traditionally, the opinion of the NLRB's general counsel has a significant effect on how the board rules on cases it considers. This means that, should workers wish to file a claim with the NLRB along these lines, the board would take this opinion into account.

While worker privacy has been considered within general consumer privacy bills, workplace privacy rights function differently than rights in many other contexts. A worker often cannot walk away from a camera pointed at their workstation. And while a consumer may feel they aren’t really “consenting” to data collection when they use a product or service, they generally have the option to switch to a competing product. Workers don’t; saying “no” could cost them their livelihood. As a result, workers stand to lose rights during the workday that consumers can simply refuse to give up.

Abruzzo, in writing her memo, said that "[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights." 

 In the workplace, electronic surveillance and the breakneck pace of work set by automated systems may severely limit or completely prevent employees from engaging in protected conversations about unionization or terms and conditions of employment that are a necessary precursor to group action. If the surveillance extends to break times and nonwork areas, or if excessive workloads prevent workers from taking their breaks together or at all, they may be unable to engage in solicitation or distribution of union literature during nonworking time. And surveillance reaching even beyond the workplace—or the use of technology that makes employees reasonably fear such far-reaching surveillance—may prevent employees from exercising their Section 7 rights anywhere.

Finding ways to protect those rights across the country's diverse workplaces is a bigger question, and we look forward to seeing Abruzzo's forthcoming framework and the way this opinion is reflected in future rulings.

EFF believes Abruzzo was right to raise these concerns, and we urge the National Labor Relations Board to seriously consider the harms that workplace surveillance technology poses to workers and to organizing. EFF also has broader concerns about the effect such surveillance has on worker privacy and autonomy.

Earlier this year, we joined California’s leading labor groups to support A.B. 1651, authored by Assemblymember Ash Kalra, which would have taken important first steps toward providing workers with information about monitoring in the workplace. The NLRB is just one regulator paying close attention to worker surveillance. The Federal Trade Commission has begun considering rules on commercial surveillance, and in its recent public comment period the FTC sought to consider workplace surveillance as part of that rulemaking. EFF submitted comments, as did many of our allies, supporting the commission's plan for a rule that is inclusive of the workplace. Strong worker protections in an FTC rule would complement the NLRB memo's stated goals: enforcing existing protections in this space and fostering inter-agency cooperation to protect workers from punitive and harmful workplace technologies. 

We echo Abruzzo in encouraging regulators to look at worker surveillance both in the workplace and for those working remotely. As we have previously discussed, workers are also often asked to install “bossware” on their work—or sometimes personal—devices. Such software may be aimed at helping employers. But, in practice, it can put workers’ privacy and security at risk by logging every click and keystroke, covertly gathering information for lawsuits, and using other spying features that go far beyond what is necessary and proportionate to manage a workforce. 

Workers are not consumers when they’re on the job, so we should not expect existing consumer privacy frameworks to adequately address worker surveillance. But unions, labor researchers, rank-and-file workers, and the NLRB offer essential input into how to protect workers, both through inclusive consumer protections and through workplace-specific protections. Our privacy shouldn’t stop when we clock in. Workers should not have to submit to scrutiny in their workplaces or in their own homes to keep their jobs.

José EFA

Digital Rights Updates with EFFector 34.6

3 months 1 week ago

Want the latest news on your digital rights? Well, you're in luck! Version 34, issue 6 of our EFFector newsletter is out now. Catch up on the latest EFF news by reading our newsletter or listening to the audio version below. This issue covers a collection of EFF's latest victories (seriously, there are a lot of them!) as well as our thoughts on the "fediverse" and the mess that is the Filter Mandate Bill.

LISTEN ON YouTube

EFFECTOR 34.6 - Victory!

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Letter to the UN Ad Hoc Committee

3 months 1 week ago

H.E. Ms. Faouzia Boumaiza Mebarki
Chairperson

Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communication Technologies for Criminal Purposes

Your Excellency: 
We, the undersigned organizations and academics, work to protect and advance human rights, online and offline. Our collective goal is to ensure that human rights and fundamental freedoms are always prioritized when countering cybercrime, securing electronic evidence, facilitating international cooperation, or providing technical assistance. While we are not convinced that a global cybercrime convention is necessary, we would like to reiterate the need for a human-rights-by-design approach in the drafting of the proposed UN Cybercrime Convention. 

We have grave concerns that the draft text released by the committee on November 7, 2022, formally entitled “the consolidated negotiating document (CND) on the general provisions and the provisions on criminalization and on procedural measures and law enforcement of a comprehensive international convention on countering the use of information and communications technologies for criminal purposes,” risks running afoul of international human rights law. 

The CND is overbroad in its scope and not restricted to core cybercrimes. The CND also includes provisions that are not sufficiently clear and precise, and would criminalize activity in a manner that is not fully aligned and consistent with States’ human rights obligations set forth in the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and other international human rights standards and instruments.[1] Further, the CND’s criminal procedural and law enforcement chapter lacks robust human rights safeguards, while its substantive provisions expand the scope of criminal intent and conduct, threatening to criminalize legitimate activities of journalists, whistleblowers, security researchers, and others.

Failing to prioritize human rights throughout all the Chapters can have dire consequences. The protection of fundamental rights has consistently been raised by Member States throughout the sessions of the Ad Hoc Committee to elaborate the Proposed Convention. Many States and non-governmental stakeholders have called for the Proposed Convention to be fully aligned and consistent with international human rights law. Any permitted measures restricting rights need to be prescribed by law, justified on legal grounds strictly related to the rights concerned, and necessary and proportionate to pursue a legitimate objective. Provisions should also respect the rule of law by including sufficient specificity and independent oversight to ensure their implementation aligns with their intended scope. It is therefore extremely troubling that many provisions in the CND are drafted in ways that do not uphold human rights law, in substance or in process, and that open the door to implementation threatening further violations of human rights and the rule of law.

Specifically, we are concerned that CLUSTERS 2 to 10 include a long list of offences that are not core cybercrimes, offences that interfere with protected speech and fail to comply with permissible restrictions under international freedom of expression standards, or offences drafted with vague or overbroad language.

The Criminalization Chapter should be restricted to core cybercrimes–criminal offences in which information and communications technology (ICT) systems are the direct objects, as well as instruments, of the crimes; these crimes could not exist at all without the ICT systems. A useful reference for the types of crimes that are inherently ICT crimes can be found in Articles 2-6 of the Budapest Convention. Should other non-core cybercrimes be included, we recommend that those “cyber-enabled” crimes be narrowly defined and strictly consistent with international human rights standards. 

Crimes in which ICT systems are simply a tool sometimes used in the commission of an offence should be excluded from the proposed Convention. These would include crimes already prohibited under existing domestic legislation that merely incidentally involve or benefit from ICT systems without targeting or harming those systems, as in some of the crimes under CLUSTERS 2 and 10.

We are particularly concerned about the inclusion of content crimes such as “extremism-related offences” (Article 27) and “terrorism-related offences” (Article 29). These provisions disregard existing human rights standards set out by various UN bodies on policies and national strategies to counter and prevent terrorism and violent extremism. In particular, freedom of expression mandate holders have reiterated that broad and undefined concepts such as “terrorism” and “extremism” should not be used as a basis to restrict freedom of expression. In addition, there are no uniform definitions of these concepts in international law, and many States rely on this ambiguity to justify human rights abuses such as politically-motivated arrests and prosecutions of civil society members, independent media, and opposition parties, among others. 

More generally, the inclusion of several content-related offences is profoundly concerning (as in some of the crimes under CLUSTERS 4, 7, 8, and 9). As we have reiterated throughout the negotiating process, this instrument should not include speech-related offences. Including these crimes poses a heightened risk that the proposed Convention will contravene existing international protections for freedom of expression and be used to restrict protected expression under international human rights standards.

Moreover, the core cybercrime offences under CLUSTER 1 would impose some restrictions that might interfere with the essential working methods of journalists, whistleblowers, and security researchers, and they need to be revised. Articles 6 and 10, for example, should also require a standard of both fraudulent intent and harm, a requirement that many delegations suggested was essential to consider during the discussion of this issue in the second substantive session.

The provisions on the Convention’s procedural powers also raise concerns. Investigative powers required by the Convention should only be available with respect to crimes covered by the Convention. The Convention concerns cybercrime and should not become a general purpose vehicle to investigate any and all crimes.

While the general obligation to respect the principles of proportionality, necessity, and legality and the protection of privacy and personal data in implementing procedural powers is welcome, additional specificity is necessary to ensure human rights are respected in the implementation of the Convention. To that effect, Article 42 should specify that prior independent (preferably judicial) authorization and independent ex-post monitoring are required, recognize the need for effective remedies, require rigorous transparency reporting and user notification by state parties, and include guarantees to ensure that any investigative powers do not compromise the integrity and security of digital communications and services.

The Convention’s procedural mechanisms should also ensure that international law and human rights standards with respect to evidence are respected. Evidence obtained in violation of domestic law or of human rights should be excluded from criminal proceedings, as should any further products of that evidence.

The Convention’s preservation powers (Articles 43 and 44) should ensure that preservation requirements and renewals are premised on a reasonable belief or suspicion that a criminal offence has been or is being committed and that the data sought to be preserved will yield evidence of that offence. The preservation period should not exceed sixty (60) days, subject to renewal, and the Convention should clarify that national laws requiring preservation in excess of the specified period will not qualify for implementation. Article 43 should further specify that service providers are required to expeditiously delete any preserved data once the preservation period ends.

Article 46(4) raises serious concerns vis-a-vis the potential obligations imposed upon third parties, such as service providers, to either disclose vulnerabilities of certain software or to provide relevant authorities with access to encrypted communications. 

Article 47 on real-time collection of traffic data should be revised and written more precisely to ensure that the Article does not authorize any blanket or indiscriminate data retention measures. The generalized interception, storage, or retention of the content of communications or its metadata has been deemed to fail the necessary and proportionate test.

Articles 47 and 48 should be amended to clarify that they do not include state hacking of end devices. State hacking powers remain controversial and can cause collateral harm to the integrity and security of networks, data, and devices. There is no consensus as to when these powers can be appropriately invoked, and there is a risk that some State Parties will inappropriately implement Articles 47 and 48 to include this type of intrusive surveillance. 

The Convention’s confidentiality provisions (Articles 43(3), 47(3), and 48(3)) should only apply to the extent necessary to prevent any threats to investigations that might ensue in the absence of confidentiality.

We respectfully recommend that the CND be revised to ensure that:

  • The scope of the Convention should be limited to issues within the realm of the criminal justice system, and its substantive and procedural provisions should be restricted to core cybercrimes.
  • The proposed crimes under Articles 6 and 10 should be revised to include, at minimum, a standard of both fraudulent intent and harm, to protect journalists, whistleblowers, and security researchers [CLUSTER 1]. 
  • The criminalization chapters should be restricted to offences against the confidentiality, integrity, and availability of computer data and systems. 
  • Crimes where ICTs are simply a tool that is sometimes used in the commission of an offence should be excluded from the proposed Convention. [CLUSTERS 2-10]
  • Should other non-core cybercrimes be included, we recommend that those cyber-enabled crimes be narrowly defined and consistent with international human rights standards, and, in any case, no speech offences should be included.
  • Any criminal offences that restrict activity in a manner that is inconsistent with human rights law should be excluded. The risk that an overbroad list of online content, speech, and other forms of expression may be considered a cybercrime under the proposed Convention is a major concern that should be addressed, particularly through the removal of any content offences [See CLUSTERS 4, 7, 8, and 9].
  • Investigative powers in Criminal Procedural Measures and Law Enforcement Chapter III should be carefully scoped so that they remain closely linked to investigations of specific criminal conduct and proceedings and should only be available for investigations of crimes specifically covered by the Convention (Article 41(2)).
  • Secrecy provisions should only be available where disclosure of the information in question would pose a demonstrable threat to an underlying investigation (Articles 43(3), 47(3), and 48(3)).
  • When it comes to criminal procedural measures, any proposed obligations that enable investigation and prosecution should come with detailed and robust human rights safeguards and rule of law standards, including a requirement for independent oversight and control and the right to an effective remedy.
  • General provisions authorizing interception and real time collection of data should be amended to clarify that they do not authorize intrusion into networks and end devices. These provisions lack sufficient safeguards to address the threat to the security and integrity of networks, data, and devices posed by state hacking, and State Parties should not be able to rely on ambiguities in the text to justify hacking activities (Articles 47 and 48).
  • The text should not authorize any indiscriminate or indefinite retention of metadata.

Negotiating an international cybercrime Convention with Member States is not an easy task. But it is paramount that this Convention, which has the potential to profoundly impact millions of people around the world, makes it crystal clear that fighting global cybercrime should reinforce and not endanger or undermine human rights. 

[1] These instruments are the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social, and Cultural Rights (ICESCR), the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), the Convention on the Elimination of All Forms of Racial Discrimination (CERD), and the Convention on the Rights of the Child (CRC), among other international and regional human rights instruments and standards.
[2]  https://privacyinternational.org/sites/default/files/2022-01/2021%20GILS%20version%203.0_0.pdf

Submitted by NGOs registered under operative 8 or 9.

  1. Red en Defensa de los Derechos Digitales  - Mexico
  2. Access Now - International 
  3. Association for Progressive Communications (APC) - International
  4. Center for Democracy and Technology (CDT) - International
  5. Data Privacy Brasil - Brazil 
  6. Derechos Digitales - Latin America
  7. Eticas Data Society Foundation - International
  8. Fundacion Via Libre - Argentina
  9. Human Rights Watch - International
  10. Hiperderecho - Perú 
  11. IPANDETEC - Central America

The letter has also been supported by a broad list of civil society organizations and academics, available here:

  1. ARTICLE19 - International
  2. Aquilenet - France
  3. Asociación por los Derechos Civiles (ADC) - Argentina
  4. Asociación TEDIC - Paraguay
  5. Asociación para una Ciudadanía Participativa, ACI PARTICIPA - Honduras
  6. Association for Preservation Technology International (ApTI) - Romania
  7. Centre for Free Expression - Canada
  8. Centre for Information Technology and Development (CITAD) - Nigeria
  9. Chaos Computer Club (CCC) - Germany
  10. Cooperativa Sulá Batsú - Costa Rica
  11. Comun.al, Laboratorio de resiliencia digital - México
  12. Digital Rights Ireland - Ireland
  13. Državljan D / Citizen D - Slovenia
  14. Epicenter.works - Austria
  15. Electronic Frontier Foundation - International
  16. Electronic Frontier Finland - Finland
  17. European Center for Not-for-Profit Law (ECNL) - International
  18. Global Partners Digital
  19. Foundation for Information Policy Research (FIPR) - United Kingdom 
  20. Fundación Internet Bolivia
  21. Fundación Acceso - Central America
  22. Fundación Karisma - Colombia
  23. Global Voices - International
  24. Homo Digitalis - Greece
  25. Intervozes - Coletivo Brasil de Comunicação Social - Brazil
  26. IPANDETEC - Central America
  27. IT-Pol - Denmark
  28. Instituto Educadigital
  29. Jokkolabs Banjul - Gambia
  30. Kandoo - International
  31. Korean Progressive Network Jinbonet - Republic of Korea
  32. Laboratory of Public Policy and Internet (LAPIN) - Brazil
  33. Laboratorio de Datos y Sociedad (Datysoc) - Uruguay
  34. Movimento Mega - Brazil
  35. Privacy International - International
  36. Southeast Asia Freedom of Expression Network - South East Asia
  37. Social Media Exchange (SMEX) - Lebanon
  38. Usuarios Digitales
  39. Vrijschrift.org - Netherlands 
  40. Venezuela Inteligente / Conexión Segura - Venezuela
  41. Damian Loreti - Information and comm Law Professor - Universidad de Buenos Aires - Argentina
Electronic Frontier Foundation

EFF to Court: No Qualified Immunity for Wrongful Arrest of Independent Journalists

3 months 1 week ago

Independent journalists increasingly gather newsworthy information and publish it on social media, often without the involvement of traditional news media. They make important contributions to public discourse and are often the first to report newsworthy events. Courts must scrupulously safeguard their First Amendment rights to gather and publish the news. (Sometimes they are called “citizen journalists,” but of course, many independent journalists are non-citizens.)

EFF this week filed an amicus brief arguing that when police officers wrongly arrest an independent journalist in violation of the First Amendment, courts must order the officers to pay damages. The brief was written by Covington, and our co-amici are the National Press Photographers Association and the Pelican Institute. The brief explains that damages are necessary both to compensate the independent journalist for their injury, and to deter these officers and others from similar misconduct in the future. The case, Villarreal v. City of Laredo, is before the federal appeals court for the Fifth Circuit. A panel of judges issued a great decision in favor of the journalist earlier this year, but the entire court has agreed to rehear the case.

The issue on appeal is whether a dangerous legal doctrine called “qualified immunity” should protect the officers from paying damages. Fortunately, Congress empowered people to sue state and local officials who violate their constitutional rights. This was during Reconstruction after the Civil War, in direct response to state-sanctioned violence against Black people. Unfortunately, the U.S. Supreme Court created a misguided exception: even if a government official violated the Constitution, they don’t have to pay damages, unless the legal right at issue was “clearly established” at the time they violated it. Worse, federal courts can grant qualified immunity without even ruling on whether the right exists, which stunts the development of constitutional law. This is especially problematic for digital rights, because there sometimes will not be clearly established law regarding cutting-edge technologies.

The amicus brief explains the importance of internet-based independent journalism to public discourse. About half of Americans get news from social media. Independent journalists have published important stories on social media about, for example, police violence against Black people. The brief also explains the importance of a damages remedy to protect independent journalists from police violations of their First Amendment rights. Professional journalists often have the backup of their employers, the traditional news media. But independent journalists often must fight alone.

“Damages remedies are important for every journalist—indeed for anyone at all—whose First Amendment rights have been violated. But the need for an effective damages remedy is particularly acute in the case of citizen journalists,” the brief argues. “Many citizen journalists lack the means to effectively enforce their rights, making them more susceptible to intimidation and retaliation. An effective damages remedy is therefore vital both to the individual journalist and to citizen journalism.”

You can read the brief here.

Aaron Mackey

A Promising New GDPR Ruling Against Targeted Ads

3 months 2 weeks ago

Targeted advertising’s days may be numbered. The Wall Street Journal and Reuters report that the European Data Protection Board has ruled that Meta cannot continue targeting ads based on users’ online activity without affirmative, opt-in consent. This ruling is based on the European Union’s General Data Protection Regulation (GDPR). This is a big step in the right direction: voluntary opt-in consent should be the baseline requirement for any data collection, retention, or use. And we should go a step further: online behavioral advertising should be banned.

The ruling is not final, or even public. The Board has sent the matter back to Ireland’s Data Protection Commission to issue an order, and reportedly to assess fines. Meta can still appeal. If the decision is finalized and enforced, Meta will need to change its surveillance and consent practices, and ads on Facebook and Instagram will start working significantly differently. Meta would have to seek affirmative consent from users before sending them targeted ads based on surveillance of their online behavior. Meta could pivot to “contextual ads” based only on the content a user is currently interacting with.

The surveillance-based advertising in question here involves how people use Meta’s own apps. Since 2020, Meta has offered settings to opt out of ad targeting based on information from other apps, websites, and businesses that Meta knows you have visited. Meta tracks its users off-site through tools like Facebook Login, Facebook’s tracking Pixel, social widgets such as Like and Share buttons, and other less visible features for developers. But Meta offers its users no similar option to opt out of ad targeting based on what users click, like, watch, and interact with on Facebook, Instagram, and other Meta properties.

The company should be offering all of its users an affirmative, opt-in consent option—and not track its users, either on-site or off-site, unless they opt in. Instead, Meta stuck language about its ad targeting practices into its platforms’ Terms of Service. Then Meta claimed that this means that, when someone uses Facebook or Instagram, they’ve supposedly “consented” to the use of their information to target ads. This sleight of hand takes advantage of the GDPR concept of “contractual necessity,” in which the GDPR allows data processors to collect and use information as necessary to deliver services for which the data subject contracted. One canonical example is that if you ask a company to send you a package, it can collect your address and use it to send you the package, even without separate explicit consent.

This week’s ruling stems from a complaint filed by EU-based NOYB (short for “none of your business”) in 2018 against Meta. At the time, Ireland’s privacy regulator sided with Meta. Now the European-wide Data Protection Board has revisited the issue.

This is an important step in the right direction. No company—Meta included—should be able to side-step consent with Terms of Service trickery, and users should be able to affirmatively decide whether or not their information is used for ad targeting. Opt-in consent to collect, retain, or use a person’s data is at the core of the GDPR, and of EFF’s recommendations for any consumer data privacy legislation.

This is not the only recent blow to Meta’s ad business. Last year, Apple introduced AppTrackingTransparency, which requires mobile apps on iOS to obtain the user’s express permission before tracking them across other apps. Sure enough, when given a clear choice, most people prefer that their personal devices not enable around-the-clock surveillance of their every click and swipe, and Meta lost both advertising revenue and a source of valuable ad targeting data.

Ad tracking, profiling, and targeting violate privacy, warp technology development, and have discriminatory impacts on users. Online behavioral advertising should be banned outright. Until then, moves like the European Data Protection Board’s send a clear message to platforms and advertisers that neither regulators nor users are willing to tolerate this extractive business model.

Gennie Gebhart

eIDAS 2.0 Sets a Dangerous Precedent for Web Security

3 months 2 weeks ago

The Council of the European Union this week adopted new language for regulations governing internet systems that may put the security of your browser at greater risk.

The new language affects the EU’s electronic identification, authentication and trust services (eIDAS) rules, which are supposed to enable secure online transactions across countries in the EU. It contains a range of updates that have raised privacy concerns for EU citizens about the European Digital Identity Wallet, a government app for storing personal information like drivers’ licenses and bank cards and making electronic payments via smartphones.

But some of the updates also affect web security in ways that could extend beyond the EU, as other governments could choose to follow the EU’s example and adopt similarly flawed frameworks.

In a nutshell, the EU is mandating that browsers accept EU member state-issued Certificate Authorities (CAs) and not remove them even if they are unsafe. If you think this sounds bad, you’re right. Multiple times, EFF, along with other security experts and researchers, urged EU regulators to reconsider the amended language, which fails to provide a way for browsers to act on security incidents. Several committees supported amending the language, but the Council went ahead and adopted the flawed text anyway.

Before we jump into the details, here’s some background on safeguarding the web for users. Protecting users on the internet is hard. One remedy that we tried, but moved away from, was something called Extended Validation (EV) certificates. The theory was that these certificates would require the site to go through a strong background check, in the hope that this would make it easier for users to identify a legitimate site. Simply put, it didn’t work.

What has worked is focusing on wide adoption of HTTPS with Domain Validation (DV) certificates—often issued for free—so that you know you are communicating with the website you intend to reach. Browsers choose which CAs meet their security standards and store those in their “root stores,” which are organized to reject inferior or unsafe CAs. Here’s an in-depth explanation on how CAs work.
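To make that trust model concrete, here is a minimal Python sketch (our illustration, not code from any browser or from the regulation; the hostname is just an example). The standard library loads the trusted roots installed on the machine and refuses the connection if the server's certificate does not chain to one of them, which is exactly the kind of decision browsers make today with their root stores and the kind eIDAS 2.0 would constrain.

    import socket
    import ssl

    def check_site(hostname: str) -> None:
        # create_default_context() loads the system's trusted root CAs and
        # turns on certificate and hostname verification, much like a
        # browser consulting its root store.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                issuer = dict(pair[0] for pair in cert["issuer"])
                print(hostname, "is vouched for by", issuer.get("organizationName"))

    # A certificate that does not chain to a trusted root raises
    # ssl.SSLCertVerificationError instead of being silently accepted.
    check_site("www.eff.org")

The key design point is that the client, not the certificate issuer, decides which roots it trusts and can drop a root that misbehaves; Article 45.2 inverts that relationship by dictating the contents of the root store in law.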

So it is astonishing that, in a giant step backwards, the EU’s modified eIDAS language embraces the outdated EV framework. Article 45.2 of the rules not only enforces a framework based on EV certificates, it codifies into law mandated support for “qualified web authentication certificates” (QWACs) issued by designated Qualified Trust Service Providers (QTSPs), which is another name for EU member state CAs. QWACs are not free or easily automated like DV certificates.

On top of that, instead of being approved by browsers, the QTSPs are approved by EU regulation, and browsers are required to trust them—and not remove them—even if they don’t meet the security requirements of their root stores.

Today, browsers can act swiftly as security issues arise. As we said above, security incidents move fast and need immediate attention and action; laws that impede such action could slow future incident response. The EU is not immune to member states acting outside the lines of democracy, so creating laws that make the internet more vulnerable to security threats is a problem that cannot be ignored.

Article 45.2 attempts, through regulation, to take power away from Big Tech companies like Google and Apple, give it back to individuals on the web, and enforce transparency about who owns which sites. But this outdated model will not help people avoid scams and malware across the internet.

As the current eIDAS adoption moves through the last legislative stages in the EU, we are calling out Article 45.2 because it makes web security harder to achieve and enforce, making the internet a less safe place for everyone.

Alexis Hancock

VICTORY! Judge’s Critical Investigation of Patent Troll Companies Can Move Forward

3 months 2 weeks ago

In recent months, Delaware-based U.S. District Court Judge Colm Connolly started an inquiry into some patent trolling companies that have filed dozens of lawsuits in his court. Last month, lawyers for the patent troll companies appealed to the U.S. Court of Appeals for the Federal Circuit, seeking to shut down the investigation.

Those events led EFF to file an amicus brief, in which we stood up for the public’s “right … to know who is controlling and benefiting from litigation in publicly-funded courts.” We filed this brief together with two other organizations that work with us on patent transparency issues, Engine Advocacy and Public Interest Patent Law Institute.

Today, the Federal Circuit accepted our brief, and denied the petition filed by patent troll Nimitz Technologies that sought to halt the investigation. The Federal Circuit panel called out strong language in Judge Connolly’s Memorandum explaining the concerns that led to his investigation (see p. 4): 

The records sought are all manifestly relevant to addressing the concerns I raised during the November 4 hearing. Lest there be any doubt, those concerns are: Did counsel comply with the Rules of Professional Conduct? Did counsel and Nimitz comply with the orders of this Court? Are there real parties in interest other than Nimitz, such as Mavexar and IP Edge, that have been hidden from the Court and the defendants? Have those real parties in interest perpetrated a fraud on the court by fraudulently conveying to a shell LLC the [patent-in-suit] and filing a fictitious patent assignment with the [United States Patent and Trademark Office] designed to shield those parties from the potential liability they would otherwise face in asserting the . . . patent in litigation? 

Later in its order, the Federal Circuit pointed out that these concerns are all within Judge Connolly’s purview and responsibility. 

Unfortunately, the Federal Circuit made no comment on the critical issue of funding transparency. The question of where shell companies like Nimitz Technologies LLC get their money is critical—it’s at the heart of Judge Connolly’s inquiry, and similar concerns affect the thousands of defendants who get hit with patent infringement accusations every month. However, by not commenting on standing orders regarding third-party funding, the Federal Circuit also left untouched a positive and growing trend: federal courts are increasingly demanding litigation funding disclosures in patent cases.

We will keep EFF supporters informed about what comes out of the investigation in Judge Connolly’s court, as well as this issue more broadly.

Rachael Lamkin

VICTORY! Apple Commits to Encrypting iCloud, Drops Phone-Scanning Plans

3 months 2 weeks ago

Today Apple announced it will provide fully encrypted iCloud backups, meeting a longstanding demand by EFF and other privacy-focused organizations. 

We applaud Apple for listening to experts, child advocates, and users who want to protect their most sensitive data. Encryption is one of the most important tools we have for maintaining privacy and security online. That’s why we included the demand that Apple let users encrypt iCloud backups in the Fix It Already campaign that we launched in 2019. 

Apple’s on-device encryption is strong, but some especially sensitive iCloud data, such as photos and backups, has continued to be vulnerable to government demands and hackers. Users who opt in to Apple’s new proposed feature, which the company calls Advanced Data Protection for iCloud, will be protected even if there is a data breach in the cloud, a government demand, or a breach from within Apple (such as a rogue employee). Apple said today that the feature will be available to U.S. users by the end of the year, and will roll out to the rest of the world in “early 2023.”

We’re also pleased to hear that Apple has officially dropped its plans to install photo-scanning software on its devices, which would have inspected users’ private photos in iCloud and iMessage. This software, a version of what’s called “client-side scanning,” was intended to locate child abuse imagery and report it to authorities. When a user’s information is end-to-end encrypted and there is no device scanning, the user has true control over who has access to that data.
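As a rough illustration of why this matters, here is a conceptual Python sketch (ours, not Apple's implementation; the third-party cryptography package and the sample data are assumptions for the example) of end-to-end encrypted backup: the key is generated and kept on the user's device, so the cloud only ever holds ciphertext it cannot read.

    # Conceptual sketch only; assumes the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    device_key = Fernet.generate_key()          # generated and kept on the device
    ciphertext = Fernet(device_key).encrypt(b"example photo backup bytes")

    # The provider stores only the ciphertext. Without device_key, neither the
    # provider, nor a hacker who breaches its servers, nor a government demand
    # can recover the plaintext.
    assert Fernet(device_key).decrypt(ciphertext) == b"example photo backup bytes"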

Apple’s image-scanning plans were announced in 2021, but delayed after EFF supporters protested and delivered a petition containing more than 60,000 signatures to Apple executives. While Apple quietly postponed these scanning plans later that year, today’s announcement makes it official. 

In a statement distributed to Wired and other journalists, Apple said: 

We have further decided to not move forward with our previously proposed CSAM detection tool for iCloud Photos. Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all.

The company has said it will focus instead on “opt-in tools for parents” and “privacy-preserving solutions to combat Child Sexual Abuse Material and protect children, while addressing the unique privacy needs of personal communications and data storage.” 

Constant scanning for child abuse images can lead to unwarranted investigations and false positives. Earlier this year, the New York Times reported on how faulty scans at Google led to false accusations of child abuse against fathers in Texas and California. The men were exonerated by police but were subjected to permanent account deletion by Google. 

Companies should stop trying to square the circle by putting bugs in our pockets at the request of governments, and instead focus on protecting their users and human rights. Today Apple took a big step forward on both fronts. There are a number of implementation choices that can affect the overall security of the new feature, and we’ll be pushing Apple to make sure the encryption is as strong as possible. Finally, we’d like Apple to go a step further. Turning on these privacy-protective features by default would mean that all users can have their rights protected.

Joe Mullin

VICTORY! The Safe Connections Act is Now Law

3 months 2 weeks ago

In the 21st century, it is difficult to lead a life without a cell phone. It is also difficult to change your number—you’ve given it to all your friends, family, doctors, children’s schools, and so on. It’s especially difficult if you are trying to leave an abusive relationship where your abuser is in control of your family’s phone plan and therefore has access to your phone records. 

Thankfully, a bill to change that just became law.

The Safe Connections Act (S. 120) was introduced in the Senate in January 2021 by Senators Brian Schatz, Deb Fischer, Richard Blumenthal, Rick Scott, and Jacky Rosen, and in the House (H.R. 7132) by Representatives Ann Kuster and Anna Eshoo. This common-sense bill makes it easier for survivors of domestic violence to separate their phone line from a family plan while keeping their own phone number. It also requires the FCC to create rules to protect the privacy of the people seeking this protection. The bill overwhelmingly passed both chambers of Congress and was signed by the President on December 7, 2022, making it Public Law 117-223.

Telecommunications carriers are already required to make numbers portable when users want to change carriers. So it should not be hard for carriers to replicate a seamless process when a paying customer wants to move an account within the same carrier. EFF strongly supported this bill.

We would have preferred a bill that did not require survivors to provide paperwork to “prove” their abuse. For many survivors, providing paperwork about their abuse from a third party is burdensome and traumatic, especially when it is required at the very moment when they are trying to free themselves from their abusers. However, this new law is a critical step in the right direction, and it is encouraging that Congress and the President agreed.

India McKinney