EFF's Statement on Dobbs Abortion Ruling

2 days 8 hours ago

Today's decision deprives millions of people of a fundamental right, and also underscores the importance of fair and meaningful protections for data privacy. Everyone deserves to have strong controls over the collection and use of information they necessarily leave behind as they go about their normal activities, like using apps, running search queries, posting on social media, and texting friends. But those seeking, offering, or facilitating abortion access must now assume that any data they provide online or offline could be sought by law enforcement.

People should carefully review privacy settings on the services they use, turn off location services on apps that don’t need them, and use encrypted messaging services. Companies should protect users by allowing anonymous access, stopping behavioral tracking, strengthening data deletion policies, encrypting data in transit, enabling end-to-end message encryption by default, preventing location tracking, and ensuring that users get notice when their data is being sought. And state and federal policymakers must pass meaningful privacy legislation. All of these steps are needed to protect privacy, and all are long overdue.

More resources are available at our reproductive rights issue page. 

Cindy Cohn

The Bipartisan Digital Advertising Act Would Break Up Big Trackers

3 days 4 hours ago

In May, Senators Mike Lee, Amy Klobuchar, Ted Cruz, and Richard Blumenthal introduced the “Competition and Transparency in Digital Advertising Act.” The bill, also called the “Digital Advertising Act” or just “DAA” for short, is an ambitious attempt to regulate, and even break up, the biggest online advertising companies in the world.

The biggest trackers on the internet, including Google, Facebook, and Amazon, are all vertically integrated. This means they own multiple parts of a supply chain - specifically, the digital advertising supply chain, from the apps and websites that show ads to the exchanges that sell them and the warehouses of data that are used to target them. These companies harm users by collecting vast amounts of personal information without meaningful consent, sharing that data, and selling services that allow discriminatory and predatory behavioral targeting. They also use vertical integration to crush competition at every level of the market, preventing less-harmful advertising business models from gaining a foothold.

The DAA specifically targets vertical integration in the digital advertising industry. The bill categorizes ad services into, roughly, four kinds of business:

  • Publishers create websites and apps, and show content directly to users. They sell ad space around that content.
  • Ad exchanges run auctions for ad space from many different publishers, and solicit bids from many different advertisers.
  • Sell-side brokerages work with publishers to monetize their ad space on exchanges. These are sometimes called “supply-side platforms” in the industry.
  • Buy-side brokerages work with advertisers to buy ad space via exchanges. These are sometimes called “demand-side platforms” in the industry.

In broad strokes, the bill would prevent any company that makes more than $20 billion per year in advertising revenue from owning more than one of those components at a time. It also creates new obligations for advertising businesses to operate fairly, without self-preferencing, and prohibits them from acting against the interests of their clients. The bill is complex and nuanced, and we will not analyze every provision of it here. Instead, we will consider how the main ideas behind this bill might affect the internet if enacted.

How would this affect the real world?

The DAA would likely apply to all three of the biggest ad tech companies in the world: Meta, Amazon, and Google. As we’ll describe, all of these companies act as both publishers and service providers at multiple levels of the ad tech “stack.”

Meta is a “publisher” because it operates websites and apps that serve content to users directly, including Facebook, Instagram, WhatsApp, and Oculus. It also operates a massive third-party ad platform, called “Audience Network,” which sells ad space in “thousands” of third-party apps that reach “over 1 billion” people each month. Audience Network essentially acts as a supply-side platform, a demand-side platform, and an exchange at the same time. Furthermore, Meta uses both its user-facing apps and those “thousands” of third-party Audience Network apps to gather data about our online behavior. The data it gathers about users on its social media platforms are used to target them in Audience Network apps; those apps, in turn, collect yet more data about user behavior. This kind of cross-platform data collection is common to all of the ad tech oligarchs, and it helps them target users more precisely (and more invasively) than their smaller competitors.

Amazon has been rapidly developing its own advertising business. While online advertising was once widely viewed as a duopoly of Google and Facebook, today the ad market is better characterized as a triopoly. Amazon operates several third-party advertising services, including Amazon DSP, an analytics platform called Amazon Attribution, and a supply-side ad server called Sizmek Ad Suite. It also sells ad space on Amazon properties like its flagship website amazon.com, its Kindle e-readers, Twitch.tv, and its many video streaming services. Like Facebook, Amazon can use data about user behavior on its own properties to target them on third-party publishers and vice versa.

Google is the biggest of all. It makes billions of dollars selling ads on its user-facing services, including Google Search, YouTube, and Google Maps. But behind the scenes, Google’s ad infrastructure is even more expansive. Google operates at least ten different components that handle different parts of the ad business for different kinds of clients. Its ad exchange (AdX, formerly DoubleClick Ad Exchange), supply-side platform (Google Ad Manager, formerly DoubleClick for Publishers) and mobile ad platform (AdMob) all dominate their respective market segments. Its trackers, inserted into third-party websites, are far and away the most common on the web. And in addition to the massive information advantage it has over competitors, Google has repeatedly been accused of using its different components to secretly self-preference and directly undermine competition. As a result, the company is currently the subject of several different antitrust investigations around the world.

All of these companies likely meet the revenue threshold specified by the DAA. That means if the bill becomes law, all three may be required to divest their advertising businesses. Google could operate YouTube and Search, or the infrastructure that serves ads on those sites, but not both. Furthermore, if all of its advertising components were spun off into one “Google Ads” conglomerate that still made over $20B in revenue, the resulting company would have to choose between its ad exchange, its supply-side platforms, or its demand-side platforms, and spin off its other parts. Essentially, the ad giants would have to break themselves into component parts until each component either falls below the revenue threshold or operates just one layer of the ad tech stack.
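
To make the mechanics concrete, here is a minimal, illustrative Python sketch of that ownership rule. The $20 billion annual ad revenue threshold is the figure described above; the component names, the function, and the example numbers are our own shorthand for illustration, not text from the bill.

    # Illustrative sketch of the DAA's core ownership rule as described above:
    # a firm with more than $20B in annual digital ad revenue may own at most
    # one layer of the ad tech stack.

    COMPONENTS = {"publisher", "exchange", "sell_side_brokerage", "buy_side_brokerage"}
    REVENUE_THRESHOLD = 20_000_000_000  # $20 billion per year

    def must_divest(ad_revenue: int, owned: set) -> bool:
        """Return True if a firm would have to spin off ad components under the rule."""
        return ad_revenue > REVENUE_THRESHOLD and len(owned & COMPONENTS) > 1

    # A hypothetical integrated giant that publishes content and runs an exchange:
    print(must_divest(50_000_000_000, {"publisher", "exchange"}))  # True
    # A smaller firm below the threshold can stay vertically integrated:
    print(must_divest(5_000_000_000, {"publisher", "exchange"}))   # False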

Why do break-ups matter?

Google and Facebook build user-facing platforms, but their main customers are advertisers. This central conflict of interest manifests in design choices that sell out our privacy. For example, Google has made sure that Chrome and Android keep sharing private information by default even as competing browsers and operating systems take a more pro-privacy stance. When advertiser interests conflict with user rights, Google tends to side with its customers.

Splitting user-facing platforms apart from ad tech tools would cut right through this tension. Chrome and Android developers would face competitive pressure from rivals who design tools that cater to users alone.

Separating ads from publishing can protect rights that US privacy laws do not address. A majority of proposed and enacted privacy laws in the U.S. regulate data sharing between distinct companies more strictly than data sharing within a single corporation. For example, the California Privacy Rights Act (CPRA) allows users to opt out of having their personal information “shared or sold,” but it does not give users the right to object to many kinds of intra-company sharing—like when Google’s search engine shares data with Google Ads to enable hyper-specific behavioral targeting on Google properties. Breaking user-facing services apart from advertiser-facing businesses will make it easier to regulate these flows of private information.

Splitting ad empires apart also holds the promise of a fairer ad market. Removing tech companies’ content and app businesses from their ad businesses, and splitting the sell-side and buy-side of the ad-tech stack, will make self-preferencing, bid-rigging, and other forms of fraud and cheating less lucrative and easier to detect. This will help media producers and individual creators get their rightful share of revenue from the ads that run against their work, and it will help protect small businesses and other advertisers from being price-gouged or defrauded by powerful, integrated ad-tech businesses.

Conclusion

The Digital Advertising Act is a bold, promising legislative proposal. It could split apart the most toxic parts of Big Tech to make the internet more competitive, more decentralized, and more respectful of users’ digital human rights, like the right to privacy. As with any complex legislation, the impacts of this bill must be thoroughly explored before it becomes law. But we believe in the methods described in the bill: they have the power to reshape the internet for the better.

Bennett Cyphers

Security and Privacy Tips for People Seeking An Abortion

3 days 5 hours ago

Given the shifting state of the law, people seeking an abortion, or any kind of reproductive healthcare that might end with the termination of a pregnancy,  may need to pay close attention to their digital privacy and security. We've previously covered how those involved in the abortion access movement can keep themselves and their communities safe. We've also laid out a principled guide for platforms to respect user privacy and rights to bodily autonomy. This post is a guide specifically for anyone seeking an abortion and worried about their digital privacy. There is a lot of crossover with the tips outlined in the previously mentioned guides; many tips bear repeating. 

We are not yet sure how companies may respond to law enforcement requests for any abortion-related data, and you may not have much control over their choices. But you can do a lot to control who you are giving your information to, what kind of data they get, and how it might be connected to the rest of your digital life.

Keep This Data Separate from Your Daily Activities

If you are worried about legal pressure, the most important thing to remember is to keep these activities separate from less sensitive ones. This can be done many ways, but the underlying idea is to keep that information compartmentalized away from other aspects of your "regular" life. This makes it harder to trace back to you. 

Choosing a separate browser with hardened privacy settings is an easy and free start. Browsers like Brave, Firefox, and DuckDuckGo on mobile are all easy-to-use options that come with hardened privacy settings out of the box. It's a good idea to look into the “preferences” menu of whichever browser you choose, and raise the privacy settings even further. It's also a good idea to turn off the browser's features that remember browsing history and site data/cookies. Here’s what that looks like in Firefox’s “Privacy and Security” menu: 

Firefox's cookies and history options in its privacy menu

How to turn off Firefox's feature that remembers browser history

If you are calling clinics or healthcare providers, consider keeping a secondary phone number like Google Voice (which is free), Hushed, or Burner (both Hushed and Burner are paid apps, but have significantly better privacy policies than Google Voice). Having a separate email address, especially one that is made with privacy and security in mind, is also a good idea. Some email services you might consider are Tutanota and Protonmail.

Mobile Privacy

One way to protect your privacy is to get a “burner phone” – meaning a phone that’s not connected to your normal cell phone account. But keeping a super secure burner phone may be hard for many people. If so, consider reviewing the privacy settings on your current cell phone to see what information is being collected about you, who is collecting it, and what they might do with it.

If you're using a period tracker app already, carefully examine its privacy settings. If you can, consider switching to a more privacy-focused app.  Euki, for example, promises not to store any user information.

Turn off ad identifiers on your phone. We've laid out a guide for doing so on iOS and Android here. This restricts individual apps' abilities to track your behavior when you use them, and limits their sharing of that information with others.

While you're at it, it's a good idea to review the other permissions that apps have on your phone, especially location services. For apps that require location data for their core functionality (such as Google Maps), choose an option like "While Using" that only gives the app permission to view your location when it's open (remember to fully close out of those apps when you are finished using them).

If you have a "Find My" feature turned on for your phone, like Apple's function for locating your phone from your other devices, consider turning it off before traveling to or from a location you don't want someone else to know you visited.

If you're traveling to or from a location (such as a clinic or a rally) where there is a likelihood law enforcement may stop you or seize your device, or if you're often near someone who may look into your phone without permission, turning off biometric unlocking is a good idea. This means turning off any feature for unlocking your phone using your face ID or fingerprint. Instead you should opt for a passcode that is difficult to guess (like all passwords: make it long, unique, and random).

Since you are likely using your phone to text and call others who share similar data privacy and security concerns, it’s a good idea to download Signal, an end-to-end encrypted messaging app. For a more thorough walkthrough, check out this guide for Android and this one for iOS.

Lock & Encrypt

Anticipating how data on your devices might be seized as evidence is a scary thought. You don't need to know how encryption works, but checking to make sure it's turned on for all your devices is vital. Android and iOS devices have full-disk encryption on by default (though it doesn't hurt to check). Doing the same for your laptops and other computers is just as important. It's likely that encryption is on by default for your operating system, but it's worthwhile to check. Here is how to check for MacOS, and also for Windows. Linux users should look for a guide to enabling full-disk encryption for their particular distribution.
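
If you're comfortable with a terminal, another rough way to check is to run the operating system's own status tools: fdesetup (FileVault) on macOS, manage-bde (BitLocker) on Windows, and lsblk on Linux, where encrypted volumes show up with the "crypt" type. The short Python sketch below simply wraps those commands and prints their output for you to read; treat it as an illustration rather than a polished tool.

    import platform
    import subprocess

    def encryption_status() -> str:
        """Best-effort check of full-disk encryption using the OS's built-in tools."""
        system = platform.system()
        if system == "Darwin":
            cmd = ["fdesetup", "status"]                   # macOS: FileVault status
        elif system == "Windows":
            cmd = ["manage-bde", "-status", "C:"]          # Windows: BitLocker (run as administrator)
        else:
            cmd = ["lsblk", "-o", "NAME,TYPE,MOUNTPOINT"]  # Linux: look for "crypt" devices
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.stdout or result.stderr

    print(encryption_status())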

Delete & Turn Off

Deleting things from your phone or computer isn't as easy as it sounds. For sensitive data, you want to make sure it's done right.

When deleting images from your phone, make sure to remove them from "recently deleted" folders. Here is a guide on permanently deleting from iOS. Similar to iOS, Android's Google Photos app requires you to delete photos from its "Bin" folder where it stores recently deleted images for a period of time.

For your computer, using "secure deletion" features on either Windows or MacOS is a good call, but it is not as important as making sure full-disk encryption is turned on (discussed in the section above).

If you’re especially worried that someone might learn about a specific location you are traveling to or from, simply turning off your phone and leaving your laptop at home is the easiest and most foolproof solution. Only you can decide if the risk outweighs the benefit of keeping your phone on when traveling to or from a clinic or abortion rally. For more reading, here is our guide on safely attending a protest, which may be useful for you to make that decision for yourself.

Daly Barnett

Westlaw Must Face Antitrust Claims in a Case That Could Boost Competitive Compatibility

4 days 6 hours ago

Westlaw, the world’s largest legal research service, must now face antitrust claims in court. A federal court has ruled that ROSS Intelligence, a tiny rival offering new research tools (which Westlaw forced out of business with a copyright infringement suit), could proceed with claims that Westlaw uses exclusionary and anticompetitive practices to maintain its monopoly over the legal research market. 

The ruling is a significant step in an antitrust case about Westlaw’s conduct as an entrenched incumbent. The company controls 80 percent of the market for legal research tools and maintains a massive, impossible-to-duplicate database of public case law built over decades. It faces few major competitors. Westlaw doesn’t license access to its database, which means that it’s difficult for another company to offer new and innovative online tools for searching case law or other follow-on products and services. 

The potential ramifications of this case are huge. The outcome could boost the case for competitive compatibility (comcom), the ability of challengers to build on the work of entrenched players like Westlaw to create innovative and useful new products. More prosaically, it could improve public access to court records.

The U.S. District Court for the District of Delaware in April refused to dismiss an antitrust claim against Westlaw by the now-defunct legal research company ROSS Intelligence. ROSS developed a new online legal research tool using artificial intelligence (AI), contracting with an outside company for a database of legal cases sourced from Westlaw. Westlaw sued ROSS for copyright infringement, accusing it of using AI to mine the Westlaw database as source material for its new tool. Though the database is mainly composed of judicial opinions, which can’t be copyrighted, Westlaw long maintained that it holds copyrights to the page numbers and other organizational features. ROSS went out of business less than a year after Westlaw filed suit. 

Despite going out of business, ROSS pressed ahead with a countersuit, claiming Westlaw and its parent company, Thomson Reuters Corporation, violate antitrust law by requiring customers to buy their online search tool to access its database of public domain case law, unlawfully tying the tool to the database to maintain dominance in the overall market for legal search platforms. 

The dispute is more than an example of David v. Goliath: Lawyers, students, and academics all over the world rely on online access to court records for scholarship, research, education, and case work. Westlaw controls access to the largest database of public court records, judges’ opinions, statutes, and regulations. Those who need this information have little choice but to do business with Westlaw, on Westlaw’s terms. The work of compiling it took decades, and effectively can’t be duplicated, but no amount of effort alone gives Westlaw ownership over judges’ opinions. Copyrights are not based on effort, but rather on original, creative work. The mere fact that Westlaw worked hard to build its database doesn’t mean the public domain records of the U.S. legal system become its copyrighted material.

No single company should gatekeep public access to our laws and public information. Companies should be able to build on non-copyrighted work, especially the non-copyrighted work of a massive incumbent with enormous market power. This is especially important in categories such as legal research tools, because these are necessary if the public is to participate in governance and lawmaking in an informed manner.

ROSS had made other antitrust claims against Westlaw, saying it violated the Sherman Antitrust Act by refusing to license its database and engaging in sham litigation to block competitors from its industry. The court dismissed those claims. But the court let the claim of tying stand, siding with ROSS and finding that Westlaw’s database—which existed in the form of printed books for many decades before the internet—can be a separate product from its legal search tool, even though the tool does not work on any other database.

ROSS has “adequately and plausibly alleged separate product markets for public law databases and legal search tools,” the court said, while noting that the Supreme Court has “often found that arrangements involving functionally linked products, at least one of which is useless without the other, to be prohibited tying devices.”

ROSS will now be entitled to investigate Westlaw’s business practices to try to prove illegal tying. That will be no small feat. But it’s still important because it shows that entities like ROSS can use antitrust law to argue that companies with market power should allow others to build on their work. Westlaw will now have to explain how it benefits users to refuse to license its case law database to competing search tools, which could yield new insights into the law.

The lack of competitive compatibility is what holds back many new internet products and services. Many big tech companies used comcom when they were first starting out, but now that they are entrenched, powerful companies, they don’t want to make it easy for anyone to build on what they have. When they were upstarts, the products and services of big firms were fair game. Now that they’re entrenched, they don’t want upstarts challenging their dominance. 

We need comcom to make better and more innovative products for tech users. This is particularly crucial with the products and services at the heart of this case. Companies like Westlaw/Thomson Reuters should not be able to monopolize online access to the law and limit the ways that people can engage with it. We will keep a close eye on this case.

Malaika Fraley

Victory! Court Rules That DMCA Does Not Override First Amendment’s Anonymous Speech Protections

5 days ago

Copyright law cannot be used as a shortcut around the First Amendment’s strong protections for anonymous internet users, a federal trial court ruled on Tuesday.

The decision by a judge in the United States District Court for the Northern District of California confirms that copyright holders issuing subpoenas under the Digital Millennium Copyright Act must still meet the Constitution’s test before identifying anonymous speakers.

The case is an effort to unmask an anonymous Twitter user (@CallMeMoneyBags) who posted photos and content that implied a private equity billionaire named Brian Sheth was romantically involved with the woman who appeared in the photographs. Bayside Advisory LLC holds the copyright on those images, and used the DMCA to demand that Twitter take down the photos, which it did.

Bayside also sent Twitter a DMCA subpoena to identify the user. Twitter refused and asked a federal magistrate judge to quash Bayside’s subpoena. The magistrate ruled late last year that Twitter must disclose the identity of the user because the user failed to show up in court to argue that they were engaged in fair use when they tweeted Bayside’s photos.

When Twitter asked a district court judge to overrule the magistrate’s decision, EFF and the ACLU Foundation of Northern California filed an amicus brief in the case, arguing that the magistrate’s ruling sidestepped the First Amendment when it focused solely on whether the user’s tweets constituted fair use of the copyrighted works.

In granting Twitter’s motion to quash the subpoena, the district court agreed with EFF and ACLU that the First Amendment’s protections for anonymous speech are designed to protect a speaker beyond the content of any particular statement that is alleged to infringe copyright. So the First Amendment requires courts to analyze DMCA subpoenas under the traditional anonymous speech tests courts have adopted.

“But while it may be true that the fair use analysis wholly encompasses free expression concerns in some cases, that is not true in all cases—and it is not true in a case like this,” the court wrote. “That is because it is possible for a speaker’s interest in anonymity to extend beyond the alleged infringement.”

The district court then applied the traditional two-step test used to determine when a litigant can unmask an anonymous internet user. The first step requires the party seeking to unmask a speaker to show that their claims have legal merit. The second step requires courts to balance the harm to the anonymous speaker against that party’s need to identify the user.

The district court ruled that Bayside failed on both steps.

First, the court ruled that Bayside had not shown that its copyright claims had merit, finding that the tweets at issue constituted fair use, largely because they were transformative.

“Rather, by placing the pictures in the context of comments about Sheth, MoneyBags gave the photos a new meaning—an expression of the author’s apparent distaste for the lifestyle and moral compass of one-percenters,” the court wrote.

Second, the court ruled that there were significant First Amendment issues at stake because the tweets constituted “vaguely satirical commentary criticizing the opulent lifestyle of wealthy investors generally (and Brian Sheth, specifically).” The court ruled that identifying “MoneyBags thus risks exposing him to ‘economic or official retaliation’ by Sheth or his associates.”

In contrast, the court ruled, Bayside failed to show that it needed the information, particularly given that Twitter had already removed the copyrighted images from the tweets. Further, the court was suspicious that Bayside may have been using its DMCA subpoena as a proxy for Sheth, which the court described as a “puzzling set of facts” that Bayside had never fully explained.

In upholding the user’s First Amendment rights to speak anonymously, the district court also rejected the argument that because the user never appeared in court to fight the subpoena, Twitter could not raise constitutional arguments on its users’ behalf. EFF and ACLU’s brief called on the court to ensure that online services like Twitter can always stand in their users’ shoes when they seek to protect their rights in court.

The court agreed:

There are many reasons why an anonymous speaker may fail to participate in litigation over their right to remain anonymous. In some cases, it may be difficult (or impossible) to contact the speaker or confirm they received notice of the dispute. Even where a speaker is alerted to the case, hiring a lawyer to move to quash a subpoena or litigate a copyright claim can be very expensive. The speaker may opt to stop speaking, rather than assert their right to do so anonymously. Indeed, there is some evidence that this is what happened here: MoneyBags has not tweeted since Twitter was ordered to notify him of this dispute.

EFF is pleased with the district court’s decision, which ensures that DMCA subpoenas cannot be used as a loophole to the First Amendment’s protections. The reality is that copyright law is often misused to silence lawful speech or retaliate against speakers. For example, in 2019 EFF successfully represented an anonymous Reddit user that the Watchtower Bible and Tract Society sought to unmask via a DMCA subpoena, claiming that they posted Watchtower’s copyrighted material. 

We are also grateful that Twitter stood up for its user’s First Amendment rights in court.

Aaron Mackey

When “Jawboning” Creates Private Liability

5 days 3 hours ago
A (Very) Narrow Path to Holding Social Media Companies Legally Liable for Collaborating with Government in Content Moderation

For the last several years we have seen numerous arguments that social media platforms are "state actors" that “must carry” all user speech. According to this argument, they are legally required to publish all user speech and treat it equally. Under U.S. law, this is almost always incorrect. The First Amendment generally requires only governments to honor free speech rights and protects the rights of private entities like social media sites to curate content on their sites and impose content rules on their users. 

Among the state actor theories presented is one based on collaboration with the government on content moderation. “Jawboning”—when government authorities influence companies’ social media policies—is extremely common. At what point, if any, does a private company become a state actor when it moderates content in response to that government influence?

Deleting posts or cancelling accounts because a government official or agency requested or required it—just like spying on people’s communications on behalf of the government—raises serious human rights concerns. The newly revised Santa Clara Principles, which outline standards that tech platforms must consider to make sure they provide adequate transparency and accountability, specifically scrutinize “State Involvement in Content Moderation.” As set forth in the Principles: “Companies should recognise the particular risks to users’ rights that result from state involvement in content moderation processes. This includes a state’s involvement in the development and enforcement of the company’s rules and policies, either to comply with local law or serve other state interests. Special concerns are raised by demands and requests from state actors (including government bodies, regulatory authorities, law enforcement agencies and courts) for the removal of content or the suspension of accounts.”

So, it is important that there be a defined, though narrow, avenue for holding social media companies liable for certain censorial collaborations with the government. But the bar for holding platforms accountable for such conduct must be high to preserve their First Amendment rights to edit and curate their sites. 

Testing Whether a Jawboned Platform is a State Actor

We propose the following test. At a minimum: (1) the government must replace the intermediary’s editorial policy with its own, (2) the intermediary must willingly cede the editorial implementation of that policy to the government regarding the specific user speech, and (3) the censored party lacks an adequate remedy against the government. These findings are necessary, but not per se sufficient to establish the social media service as a state actor; there may always be “some countervailing reason against attributing activity to the government.” 

In creating the test, we had two guiding principles.

First, when the government coerces or otherwise pressures private publishers to censor, the censored party’s first and favored recourse is against the government. Governmental manipulation of the already fraught content moderation systems to control public dialogue and silence disfavored voices raises classic First Amendment concerns, and both platforms and users should be able to sue the government for this. In First Amendment cases, there is a low threshold for suits against government agencies and officials that coerce private censorship: the government may violate speakers’ First Amendment rights with “system[s] of informal censorship” aimed at speech intermediaries. In 2015, for example, EFF supported a lawsuit by Backpage.com after the Cook County sheriff pressured credit card processors to stop processing payments to the website. 

Second, social media companies should retain their First Amendment rights to edit and curate the user posts on their sites as long as they are the ones controlling the editorial process. So, we sought to distinguish those situations where the platforms clearly abandoned editorial power and ceded editorial control to the government from those in which the government‘s desires were influential but not determinative. 

We proposed this test in an amicus brief recently filed in the Ninth Circuit in a case in which YouTube has been accused of deleting QAnon videos at the request and compulsion of individual Members of Congress. We argued in that brief that the test was not met in that case and that YouTube could not be liable as a state actor under the facts alleged. 

However, even though they are not legally liable, social media companies should voluntarily disclose to a user when a government has demanded or requested action on their post, or whether the platform’s action was required by law. Platforms should also report all government demands for content moderation, and any government involvement in formulating or enforcing editorial policies, or flagging posts. Each of these recommendations is set out in the revised Santa Clara Principles.

The Santa Clara Principles also call on governments to limit their involvement in content moderation. The Principle for Governments and Other State Actors states that governments “must not exploit or manipulate companies’ content moderation systems to censor dissenters, political opponents, social movements, or any person.” The Santa Clara Principles go on to urge governments to disclose their involvement in content moderation and to remove any obstacles they have placed on the companies to do so, such as gag orders.

Our position with respect to state action being established by government collaboration stands in contrast to the more absolute positions we have taken against other state action theories.

Although we have been sharp critics of how the large social media companies curate user speech, and of its differential impacts on those traditionally denied a voice, we are also concerned that holding social media companies to the legal standards of the First Amendment would hinder their ability to moderate content in ways that serve users well: by removing or downranking posts that, although legally protected, were harassing or abusive to other users, or that were simply offensive to many of the users the company sought to reach; or by adopting policies or community standards that focus on certain subject matters or communities and exclude off-topic posts. Many social media companies offer curation services that suggest or prioritize certain posts over others, whether through Facebook’s Top Stories feed, or Twitter’s Home feed, etc., that some users seem to like. Plus, there are numerous practical problems. First, clear distinctions between legal and illegal speech are often elusive: law enforcement often gets them wrong, and judges and juries struggle with them. Second, it just doesn’t reflect reality: every social media service has an editorial policy that excludes or at least disfavors certain legal speech, and always has.

We filed our first amicus brief setting out this position in 2018 and wrote about it here. And we’ve been asserting that position in various US legal matters ever since. That first case and others like it argued incorrectly that social media companies functioned like public forums, places open to the public to associate and speak to each other, and thus should be treated like government-controlled public forums such as parks and sidewalks. 

Other cases and the social media laws passed by Florida and Texas also argued that social media services, at least the very large ones, were “common carriers” which are open to all users on equal terms. In those cases, our reasoning for challenging the laws remained the same: users are best served when social media companies are shielded from governmental interference with their editorial policies and decisions.

This policy-based position was consistent with what we saw as the correct legal argument: that social media companies themselves have the First Amendment right to adopt editorial policies, and to curate and edit the user speech that gets submitted to them. And it’s important to defend that First Amendment right so as to shield these services from becoming compelled mouthpieces or censors of the government: if they didn’t have their own First Amendment rights to edit and curate their sites as they saw fit, then governments could tell them how to edit and curate their sites according to the government’s wishes and desires.

We stand by our position that social media platforms have the right to moderate content, and believe that allowing the government to dictate what speech platforms can and can’t publish is anathema to our democracy. But when censorship is a collaboration between private companies and the government, there should be a narrow, limited path to hold them accountable.

 

David Greene

Pass the "My Body, My Data" Act

5 days 6 hours ago

EFF supports Rep. Sara Jacobs’ “My Body, My Data" Act, which will protect the privacy and safety of people seeking reproductive health care.

Privacy fears should never stand in the way of healthcare. That's why this common-sense bill will require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for.

TAKE ACTION

Tell Congress to pass the "My Body, My Data" Act

These restrictions apply to companies that collect personal information related to a person’s reproductive or sexual health. That includes information such as data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. 

We are proud to join Planned Parenthood, NARAL, National Abortion Federation, URGE, National Partnership for Women & Families, and Feminist Majority in support of the bill.

In addition to the restrictions on company data processing, this bill also provides people with necessary rights to access and delete their reproductive health information. Companies must also publish a privacy policy, so that everyone can understand what information companies process and why. It also ensures that companies are held to public promises they make about data protection, and gives the Federal Trade Commission the authority to hold them to account if they break those promises. 

The bill also gives people a strong private right of action to take on companies that violate their privacy. Empowering people to bring their own lawsuits not only places more control in the individual's hands, but also ensures that companies will not take these regulations lightly. 

Finally, while Rep. Jacobs' bill establishes an important national privacy foundation for everyone, it also leaves room for states to pass stronger or complementary laws to protect the data privacy of those seeking reproductive health care. 

We thank Rep. Jacobs and the other sponsors for taking up this important bill, and for using it as an opportunity not only to protect those seeking reproductive health care, but also to highlight why data privacy is an important element of reproductive justice. Please take action to express your support for the "My Body, My Data" Act today.

TAKE ACTION

Tell Congress to pass the "My Body, My Data" Act

Hayley Tsukayama

Daycare Apps Are Dangerously Insecure

5 days 6 hours ago

Last year, several parents at EFF enrolled kids into daycare and were instantly told to download an application for managing their children’s care. Daycare and preschool applications frequently include notifications of feedings, diaper changes, pictures, activities, and which guardian picked up/dropped off the child—potentially useful features for overcoming the separation anxiety of newly enrolled children and their anxious parents. Working at a privacy-oriented organization as we do, we asked questions: Do we have to use these? Are they secure? The answer to the former, unfortunately, was “yes,” partly so that the schools could abide by health guidelines to avoid unnecessary in-person contact. But troublingly, the answer to the latter was a resounding “no.”

As is the case with so many of these services, there are a few apps that are more popular than others. While we started with the one we were being asked to use, this prompted us to look closer at the entire industry.

"The (Mostly) Cold Shoulder"

These days, offering two-factor authentication (2FA), where two different methods are used to verify a user’s login, is fairly standard. EFF has frequently asserted that it is one of the easiest ways to increase your security. Therefore, it seemed like a basic first step for daycare apps.

In October 2021, we tried to reach out to one of the most popular daycare services, Brightwheel, about the lack of two-factor authentication on their mobile app. We searched around on the site for an email to report security concerns and issues, but we could not find one.

A few cold emails and a little networking later, we got a meeting. The conversation was productive and we were glad to hear that Brightwheel was rolling out 2FA for all admins and parents. In fact, the company’s announcement claimed they were the “1st partner to offer this level of security” in the industry—an interesting but also potentially worrisome claim.

Was it true? Apparently so. This prompted us to do more outreach to other popular daycare apps. In April 2022, we reached out to the VP of Engineering at another popular app, HiMama (no response). Next we emailed HiMama’s support email about 2FA, and received a prompt but unpromising response that our feature request would be sent to the product team for support. So we dug in further.

Digging Further—And a History of Cold Shoulders

Looking at a number of popular daycare and early education apps, we quickly found more issues than just the lack of 2FA. Through static and dynamic analysis of several apps, we uncovered not just security issues but privacy-compromising features as well: weak password policies, Facebook tracking, cleartext traffic enabled, and vectors for malicious apps to view sensitive data.

As a note on investigative tools and methodology: we used MobSF and apktool for static analysis of application code and mitmproxy, Frida, and adb (Android Debug Bridge) for dynamic analysis to capture network traffic and app behavior.
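
As a rough illustration of the dynamic-analysis step, the sketch below is a minimal mitmproxy addon that flags cleartext requests and calls to a couple of tracker endpoints while a proxied test device exercises an app. It is an illustrative example, not the exact tooling we used, and the hostname list only covers the two services discussed below.

    # Illustrative mitmproxy addon: run with `mitmdump -s tracker_check.py` while
    # the test device's traffic is routed through the mitmproxy instance.
    from mitmproxy import http

    TRACKER_HOSTS = ("graph.facebook.com", "branch.io")

    def request(flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        # Flag any unencrypted (cleartext) traffic sent by the app.
        if flow.request.scheme == "http":
            print(f"[cleartext] {flow.request.method} {flow.request.pretty_url}")
        # Flag requests to known analytics/advertising endpoints.
        if any(host == t or host.endswith("." + t) for t in TRACKER_HOSTS):
            size = len(flow.request.content or b"")
            print(f"[tracker] {host} received {size} bytes from the app")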

Initially, we had assumed that many of these services would be unaware of their issues, and we planned to disclose any vulnerabilities to each company. However, we discovered that not only were we not alone in wondering about the security of these apps, but that we weren’t alone in receiving little to no response from the companies.

In March 2022, a group of academic & security researchers from the AWARE7 agency, Institute for Internet Security, Max Planck Institute for Security and Privacy, and Ruhr University Bochum presented a paper to the PET (Privacy Enhancing Technologies) Symposium in Sydney, Australia. They described the lack of response their own disclosures met:

“Precisely because children's data is at stake and the response in the disclosure process was little (6 out of 42 vendors (≈14%) responded to our disclosure), we hope our work will draw attention to this sensitive issue. Daycare center managers, daycare providers, and parents cannot analyze such apps themselves, but they have to help decide which app to introduce.”

In fact, the researchers made vulnerability disclosures to many of the same applications we were researching in November 2021. Despite the knowledge that children’s data was at stake, security controls still hadn’t been pushed to the top of the agenda in this industry. Privacy issues remained as well. For example, the Tadpoles Android app (v12.1.5) sends event-based app activity to Facebook's Graph API, as well as very extensive device information to Branch.io.

Tadpoles app for Android using the Facebook SDK to send custom app event data to graph.facebook.com

[Related: How to Disable Ad ID Tracking on iOS and Android, and Why You Should Do It Now]

Extensive information sent to branch.io

In its privacy policy, Branch.io states that it does not sell or “rent” this information, but the precise amount of data sent to it—down to the CPU type of the device—is highly granular, creating an extensive profile about the parent/guardian outside of the Tadpoles app. That profile is subject to data sharing in situations like a merger or acquisition of Branch.io. Neither Branch.io nor Facebook is listed or mentioned in Tadpoles’ privacy policy.

A Note on Cloud Security

Another common trend in many daycare apps: relying on cloud services to convey their security posture. These apps often state they use “the cloud” to provide top-of-the-line security. HiMama, for example, writes in their Internet Safety statement that Amazon’s AWS “is suited to run sensitive government applications and is used by over 300 U.S. government agencies, as well as the Navy, Treasury and NASA.” This is technically true, but AWS has a particular offering (AWS GovCloud) that is isolated and configured to meet federal standards required for government servers and applications on those servers. In any case, regardless of whether an app uses standard or government level cloud offerings, a significant amount of configuration and application security is left up to the developers and the company. We wish HiMama and other similar apps would just highlight the specific security configurations they use on the cloud services they utilize.

Childcare Needs Conflict with Informed Choice

When a parent has an immediate need for childcare and a daycare near home or work opens up with one spot, they are less inclined to pick a fight over the applications the center chooses. And preschools and daycares aren’t forced to use a specific application. But they are effectively trusting a third party to act ethically and securely with a school’s worth of children’s data. Regulations like COPPA (Children’s Online Privacy Protection Act) likely don’t apply to these applications. Some service providers appear to reference COPPA indirectly with legal language that they do not collect data directly from children under 13 and we found a statement on one app committing to COPPA compliance.

Between vague language that could misguide parents about the reality of data security, fewer options for daycares (especially during the first two years of the pandemic), leaky and insecure applications, and a lack of account security controls, parents can’t possibly make a fully informed or sound privacy decision.

Call to Action for Daycare and Early Education Apps

It’s crucial that the companies that create these applications do not ignore common and easily-fixed security vulnerabilities. Giving parents and schools proper security controls and hardening application infrastructure should be the top priority for a set of apps handling children’s data, especially the very young children served by the daycare industry. We call on all of these services to prioritize the following basic protections and guidelines:

Immediate Tasks:
  • 2FA available for all Admins and Staff.
  • Address known security vulnerabilities in mobile applications.
  • Disclose and list any trackers and analytics and how they are used.
  • Use hardened cloud server images, and put a process in place to continuously update out-of-date technology on those servers.
  • Lock down any public cloud buckets hosting children’s videos and photos. These should not be publicly available and a child’s daycare and parents/guardians should be the only ones able to access and see such sensitive data.

Those fixes would create a significantly safer and more private environment for data on children too young to speak for themselves. But there is always more that can be done to build apps that set industry benchmarks for child privacy.

Strongly Encouraged Tasks:

E2EE (End-to-End Encrypted) Messaging between School and Parents

Consider communication between schools and parents highly sensitive. There’s no need for the service itself to view communication being passed between schools and parents.

Create Security Channels for Reporting Vulnerabilities

Both EFF and the AWARE7 (et al.) researchers had issues finding proper channels when we uncovered problems with different applications. It would be great if these companies put up a simple security.txt file on their websites so researchers can get in touch with the proper people, instead of hoping for a response from company support emails.
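
Publishing one takes minutes: security.txt is just a small text file served from /.well-known/security.txt on a company's website, standardized in RFC 9116. Here is a placeholder example; every value would of course be replaced with the company's own contact details and URLs.

    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:00:00.000Z
    Encryption: https://example.com/pgp-key.txt
    Preferred-Languages: en
    Canonical: https://example.com/.well-known/security.txt
    Policy: https://example.com/security-policy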

At EFF, we are parents too. And the current landscape isn’t fair to parents. If we want a better digital future, it starts with being better stewards today and not enabling a precedent of data breaches that could lead to extensive profiling—or worse—of kids who have yet to take their first steps.

Alexis Hancock

EFF Warns Another Court About the Dangers of Broad Site-Blocking Orders

1 week 2 days ago

A copyright holder can’t use a court order against the owner of an infringing website to conscript every intermediary service on the internet into helping make that website disappear, EFF and the Computer & Communications Industry Association argued in an amicus brief.

The brief, filed in the U.S. District Court for the Southern District of New York, defends Cloudflare, a San Francisco-based global cloud services provider.

United King Film Distribution - a movie, television, sports and news content producer and provider - sued the creators of Israel.tv, which had streamed content on which United King held copyrights. After the people behind Israel.tv failed to appear in court, United King won a shockingly broad injunction, not only against them but also purporting to bind hundreds, maybe thousands, of intermediaries, including nearly every Internet service provider in the US, domain name registrars, web designers, shippers, advertising networks, payment processors, banks, and content delivery networks.

United King then sought to enforce that injunction against CDN/reverse proxy service Cloudflare, demanding that Cloudflare be held in contempt of court for refusing to block the streaming site and stop it from ever appearing again.

But the injunction is impermissibly broad, at odds with both Federal Rule of Civil Procedure 65 and the Digital Millennium Copyright Act (DMCA), EFF’s brief argued. It’s like ordering a telephone company to prevent a person from ever having conversations over the company’s network. It will cause collateral harm to numerous internet services and their users by imposing unnecessary costs and compliance burdens. And it could cause intermediaries like Cloudflare to block lawful websites and speech in order to avoid being sanctioned by courts in cases like this.

A copyright holder with an injunction simply can’t conscript every Internet intermediary to help cut every Internet user off from accessing an infringing website. In fact, they can’t conscript even one intermediary without fulfilling the law’s requirements. They would have to show that the intermediary acted in close coordination with the website owners, more than just providing them a basic service. And they would have to limit their injunction to the narrow guidelines allowed under the DMCA, including giving intermediaries a chance to be heard before being ordered to block.

We’ve seen this playbook before. In 2015, we helped Cloudflare get relief from a similar order that would have required them to play detective by finding and banning an infringing website owner whenever and wherever they appeared. And of course, the order in this case looks a lot like the kind of website-blocking order that the infamous SOPA and PIPA bills of 2011-2012 would have enabled. It’s preposterous to think that major media companies waged a giant, expensive, and ultimately losing battle for the power to censor websites if that power was allegedly available from the courts all along.

Today, we hope the courts understand that even if a website is infringing copyrights, the law doesn’t let rightsholders conscript the entire internet to help make that site go away. The cost to innocent users’ rights is simply too high.

The case is 21-cv-11024 KPF-RWL.

Mitch Stoltz

Copyright "Small Claims" Quasi-Court Opens. Here's Why Many Defendants Will Opt Out.

1 week 2 days ago

A new quasi-court for copyright, with nationwide reach, began accepting cases this week. The “Copyright Claims Board” or “CCB,” housed within the Copyright Office in Washington DC, will rule on private copyright infringement lawsuits from around the country and award damages of up to $30,000 per case. Though it’s billed as an “efficient and user-friendly” alternative to federal litigation, the CCB is likely to disadvantage many people who are accused of copyright infringement, especially ordinary internet users, website owners, and small businesses. It also violates the Constitution in ways that harm everyone. 

Even if this were a perfect process, there is bound to be confusion when a whole new regime for demanding money springs up. And the rules surrounding the CCB are far from perfect, so we want to hear from people who are hauled before the CCB. If you feel you’ve been wronged by the CCB process, please email info@eff.org and let us know.

It’s Voluntary—Except When It Isn’t

The Copyright Office calls the CCB a “voluntary” system. Copyright holders can choose to bring infringement cases in the CCB as an alternative to federal court. And those accused of infringement (called “respondents” here, rather than defendants) can opt out of a CCB proceeding by filing forms within a 60-day window. If a respondent opts out, the CCB proceeding goes no further, and the rightsholder can choose whether or not to file an infringement suit in federal court. But if the accused party doesn’t opt out in time, they become bound by the decisions of the CCB. Those decisions mostly can’t be appealed, even if they get the law wrong.

Although cases will vary, we think most knowledgeable parties will choose to opt out of the CCB process—the key word being “knowledgeable.” The concern that this system will mostly hurt regular users, website owners, and small businesses that don’t have staff who have been watching the CCB unfold cannot be overstated. Every reason a knowledgeable party might decide to opt out is also a complicated legal issue that the average person should not be expected to know.

Not-So-Small Claims

Damages awards at the CCB will be limited to $15,000 per claim and $30,000 per case. That’s smaller than the maximum statutory damages a federal court can award on an infringement claim, which is $150,000. But is $30,000 per case really a “small” claim? It’s 44% of the 2020 median US household income. It’s higher than the maximum damages allowed in the small claims courts of nearly every state. And three years from now, the Register of Copyrights can raise the CCB’s damages caps even higher—the statute puts no limit on increases.

The damages caps also hide a big liability pitfall. In federal court, massive and unpredictable statutory damages are generally only available if the rightsholder has registered their copyright before the alleged infringement began. Without advance registration, a rightsholder is limited to recovering the actual damages caused by the infringement, which they must prove and which are often much smaller than statutory damages. In practice, that rule has limited the possibility of big payouts in a copyright lawsuit to works where the author has either made a small proactive effort to protect against infringement, or that have significant market value.

The CCB, though, dispenses with the timely registration rule. In CCB proceedings, a rightsholder can recover up to $7,500 in statutory damages per work without any proof of harm, even if they register their copyright the same day they file a claim with the CCB. That means nearly every photograph, tweet, scrap of prose, or scanned scribble can potentially be the basis for a profitable CCB lawsuit, even if it has no commercial value, or even any personal value to the author. It means that website owners and other internet users can face a CCB complaint even for low-value works that would never have merited a federal lawsuit. And it means that in many cases, opting out of CCB “small claims” proceedings will actually lower your financial risk.

Copyright Parking Tickets

The CCB will also have a worrisome “smaller claims” procedure for claims of $5,000 or less. For these claims, proceedings will be even more informal and the responding party’s ability to request evidence from the rightsholder to help in their defense, such as records of copyright ownership and licenses, may be extremely limited. We suspect that responding parties will face significant pressure to settle claims for cash—equivalent to paying a very large parking ticket—where a more careful consideration would show that they have a valid defense such as fair use. There’s a real risk that even clear cases of fair use won’t get their due in these “smaller claims” proceedings.

Cloudy with a Chance of Pro-Rightsholder Bias

Congress handed the Copyright Office a monumental task: creating the first federal “small claims” tribunal, and the first such body with nationwide jurisdiction. The rules that the Copyright Office has created over the past eighteen months strive for evenhandedness. But they still won’t do enough to overcome pro-rightsholder bias.

First, the CCB is housed within the Copyright Office, which has historically acted as a champion of rightsholders over users of copyrighted works. A former head of the Copyright Office, who now leads a publishing industry lobbying group, famously said that “copyright is for the author first and the nation second.” The Office has close relationships with major media and entertainment companies, including co-hosting events and maintaining a “revolving door” of leaders who move between the Office and these industries. “Regulatory capture” is a problem that affects many government agencies, but it’s especially concerning for an agency that is now running a court.

Second, we expect that a relatively small group of rightsholders and rightsholder attorneys will be “frequent flyers” at the CCB, while the responding parties will more often be first-timers to this new tribunal. All decision-making bodies tend to prioritize the desires of repeat players over new participants, and the CCB will be no exception. Compounding this problem, we expect that respondents who know the law well, or have the resources to hire a lawyer, will opt out of CCB proceedings, so that the majority of respondents judged by the CCB will be people with less practical ability to raise the defenses that the law gives them. This dichotomy could warp the CCB “Claims Officers’” perceptions of all of the parties that come before them.

Constitutional Concerns

The CCB was modeled in part on small claims courts at the state level. But the CCB is different in a significant way: it’s not part of the judiciary. The Copyright Office sits within the Library of Congress, which is part of the Legislative Branch. The Office is sometimes considered an executive agency, but either way, the Constitution doesn’t allow either the Legislative or the Executive branches to run courts that rule on disputes between private parties. (Administrative law judges who rule on questions about what rights and benefits the government owes to people are a different story.) This restriction isn’t just a technicality—the independence of the courts is one of the most important protections for individual rights. “Claims Officers” who are hired by and report to the Register of Copyrights can’t truly render independent judgments. This violates the Constitution, and is yet another reason why many people will opt out of a CCB proceeding.

The CCB has other constitutional defects as well. For example, the absence of a meaningful appeals process and the lack of a jury likely violate the Fifth and Seventh Amendments, and the ability to opt out doesn’t necessarily cure these problems.

The staff of the Copyright Office are dedicated public servants, and in setting up the CCB they are doing what Congress asked of them. But care and good intentions won’t be enough to make this unprecedented judicial experiment fair or constitutionally sound. As the CCB starts rendering decisions, EFF would like to hear from people who have been wronged by the process. Email us at info@eff.org.

 

Mitch Stoltz

Our Digital Lives Rest on a Robust, Flexible, and Stable Fair Use Regime

1 week 2 days ago

Much of what we do online involves reproducing copyrightable material, changing it, and/or making new works. Technically, pretty much every original tweet is copyrightable. And the vast majority of memes are based on copyrighted works. Your funny edits, mashups, and photoshopped jokes manipulate copyrighted works into new ones. Effective communication has always included a shared reference pool to make points clearly understood. And now we do that online.

In other words, as the digital world has grown, so has the reach of copyright protections. At the same time, copyright and related laws have changed: terms have expanded, limits (like registration) have shrunk, and new rules shape what you can do with your stuff if that stuff happens to come loaded with software. Some of those rules have had unintended consequences: a law meant to prevent piracy also prevents you from fixing your own car, using generic printer ink, or adapting your e-reader for your visual impairment. And a law meant to encourage innovation is routinely abused to remove critical commentary and new creativity.

In the age of copyright creep, fair use, which allows the use of copyrighted material without permission or payment in certain circumstances, is more vital than ever. A robust and flexible fair use doctrine allows us to make use of a copyrighted work to make new points, critiques, or commentary. It allows libraries to preserve and share our cultural heritage. It gives us more freedom to repair and remake. It gives users the tools they need to fight back, in keeping with its core purpose—to ensure that copyright fosters, rather than inhibits, creative expression.

The Supreme Court has an opportunity to ensure that the doctrine continues to do that essential work, in a case called Andy Warhol Foundation v. Goldsmith. At issue in the case is a series of prints by Andy Warhol, which adapt and recontextualize a photograph of the musician Prince. While the case itself doesn’t involve a digital work, its central issue is a fair use analysis by the Second Circuit that gets fair use and transformative works fundamentally wrong. First, it assumes that two works in a similar medium will share the same overarching purpose. Second, it holds that if a secondary use doesn’t obviously comment on the primary work, then a court cannot look to the artist’s asserted intent or even the impression reasonable third parties, such as critics, might draw. Third, it holds that, to be fair, the secondary use must be so fundamentally different that it should not recognizably derive from and retain essential elements of the original work.

As EFF and the Organization for Transformative Works explain in a brief filed today, all three conclusions not only undermine fair use protections but also run contrary to practical reality. For example, instead of addressing whether the respective works offered different meanings or messages, the Second Circuit essentially concluded that because the works at issue were both static visual works, they served the same purpose. This conclusion is perplexing, to say the least: the works at issue are a photograph of an individual and a collection of portraits in the classic Warhol style that used the photograph as a reference—which you do not need to be an art expert to see as distinct pieces of art. The photographer’s intent and Warhol’s intent were different, as are the effects on their respective audiences.

This framing of fair use would be devastating for the digital space. For example, memes with the same image but different text could be seen as serving fundamentally the same purpose as the original, even though many memes depend on the juxtaposition of the original intent of the work and its new context. One scene from Star Wars, for example, has given us two memes. In the original film, Darth Vader’s big “NOOOO” was surely meant to be a serious expression of despair. In meme form, it’s a parodic, over-the-top reaction. Another meme comes from a poorly-subtitled version of the film, replacing “NOOOO” with “DO NOT WANT.”  Fan videos, or vids, remix the source material in order to provide a new narrative, highlighting an aspect of the source that may have been peripheral to the source’s initial message, and often commenting on or critiquing that source. And so on.

Just last year, the Supreme Court recognized the importance of fair use in our digital world in Oracle v. Google, and we look for it to reaffirm fair use’s robust, flexible, and stable protections by reversing the Second Circuit’s decision in this case.

Katharine Trendacosta

Facebook Says Apple is Too Powerful. They're Right.

1 week 3 days ago

In December, 2020, Apple did something insanely great. They changed how iOS, their mobile operating system, handled users’ privacy preferences, so that owners of iPhones and other iOS devices could indicate that they don’t want to be tracked by any of the apps on their devices. If they did, Apple would block those apps from harvesting users’ data.

This made Facebook really, really mad.


It’s not hard to see why! Nearly all iOS users opted out of tracking. Without that tracking, Facebook could no longer build the nonconsensual behavioral dossiers that are its stock-in-trade. According to Facebook, empowering Apple’s users to opt out of tracking cost the company $10,000,000,000 in the first year, with more losses to come after that.

Facebook really pulled out all the stops in its bid to get those billions back. The company bombarded its users with messages begging them to turn tracking back on. It threatened an antitrust suit against Apple. It got small businesses to defend user-tracking, claiming that when a giant corporation spies on billions of people, that’s a form of small business development.

For years, Facebook - and the surveillance advertising industry - have insisted that people actually like targeted ads, because all that surveillance produces ads that are “relevant” and “interesting.” The basis for this claim? People used Facebook and visited websites that had ads on them, so they must enjoy targeted ads.

Unfortunately, reality has an anti-surveillance bias. Long before Apple offered its users a meaningful choice about whether they wanted to be spied on, hundreds of millions of web-users had installed ad-blockers (and tracker-blockers, like our own Privacy Badger), in what amounts to the largest consumer boycott in history. If those teeming millions value ad-targeting, they’ve sure got a funny way of showing it.

Time and again, when internet users are given the choice of whether or not to be spied on, they choose not. Apple gave its customers that choice, and for that we should be truly grateful. 

And yet…Facebook’s got a point.

When “users” are “hostages”

In Facebook’s comments to the National Telecommunications and Information Administration’s “Developing a Report on Competition in the Mobile App Ecosystem” docket, Facebook laments Apple’s ability to override its customers’ choices about which apps they want to run. iOS devices like the iPhone use technological countermeasures to block “sideloading” (installing an app directly, without downloading it from Apple’s App Store) and to prevent third parties from offering alternative app stores.

This is the subject of ongoing legislation on both sides of the Atlantic. In the USA, The Open App Markets Act would force Apple to get out of the way of customers who want to use third party app stores and apps; in the EU, the Digital Markets Act contains similar provisions. Some app makers, upset with the commercial requirements Apple imposes on the companies that sell through its App Store, have sued Apple for abusing its monopoly power.

Fights over what goes in the App Store usually focus on the commissions that Apple extracts from its software vendors - historically, these were 30 percent, though recently some vendors have been moved into a discounted 15 percent tier. That’s understandable: lots of businesses operate on margins that make paying a 30 percent (or even 15 percent) commission untenable. 

For example, the wholesale discount for audiobook retailers - which compete with Apple’s iBooks platform - is 20 percent. That means that selling audiobooks on Apple’s platform is a money-losing proposition unless you’re Apple or its preferred partner, the market-dominating Amazon subsidiary Audible. Audiobook stores with iPhone apps have to use bizarre workarounds, like forcing users to log in to their websites using a browser to buy their books, then go back to their phones and use their app to download their books.

That means that Apple doesn’t just control which apps its mobile customers can use; it also has near-total control over which literary works they can listen to. Apple may not have set out to control its customers’ reading habits, but having attained that control, it jealously guards it. When Apple’s customers express interest in using rival app stores, Apple goes to extraordinary technical and legal lengths to prevent them from doing so.

The iOS business model is based on selling hardware and collecting commissions on apps. Facebook charges that these two factors combine to impose high “switching costs” on Apple’s customers. “Switching costs” is the economist’s term for all the things you have to give up when you change loyalties from one company to another. In the case of iOS, switching to a rival mobile device doesn’t just entail the cost of buying a new phone, but also buying new apps:

[F]ee-based apps often require switching consumers to repurchase apps, forfeit in-app purchases or subscriptions, or expend time and effort canceling current subscriptions and establishing new ones.

Facebook is right. Apple’s restrictions on third-party browsers and the limitations it puts on Safari/WebKit (its own browser tools) have hobbled “web apps,” which run seamlessly inside a browser. This means that app makers can’t deliver a single, browser-based app that works on all tablets and phones - they have to pay to develop separate apps for each mobile platform.

That also means that app users can’t just switch from one platform to another and access all their apps by typing a URL into a browser of their choice. 

Facebook is very well situated to comment on how high switching costs can lock users into a service they don’t like very much, because, as much as its users dislike the platform, the costs of using it are outstripped by the costs the company imposes on users who leave.

That’s how Facebook operates.

Facebook has devoted substantial engineering effort to keeping its switching costs as high as possible. In internal memos - published by the FTC - the company’s executives, project managers and engineers frankly discuss plans to design Facebook’s services so that users who leave for a rival pay as high a price as possible. Facebook is fully committed to ensuring that deleting your account means leaving behind the friends, family, communities and customers who stay. 

So when Facebook points out that Apple is using switching costs to take its users hostage, they know what they’re talking about. 

Benevolent Dictators Are Still Dictators

Facebook’s argument is that when Apple’s users disagree with Apple, user choice should trump corporate preference. If users want to use an app that Apple dislikes, they should be able to choose that app. If users want to leave Apple behind and go to a rival, Apple shouldn’t be allowed to lock them in with high switching costs. 

Facebook’s right.

Apple’s App Tracking Transparency program - the company’s name for the change to iOS that let you block apps from spying on you - was based on the idea that when you disagree with Facebook (or other surveillance-tech companies), your choice should trump their corporate preferences. If you want to use an app without being spied on, you should be able to choose that. If you want to quit Facebook and go to a rival, Facebook shouldn’t be able to lock you in with high switching costs.

It’s great when Apple chooses to defend your privacy. Indeed, you should demand nothing less. But if Apple chooses not to defend your privacy, you should have the right to override the company’s choice. Facebook spied on iOS users for more than a decade before App Tracking Transparency, after all. 

Like Facebook - and Google, and other companies - Apple tolerates a lot of surveillance on its platform. In spring of 2021, Apple and Google kicked some of the worst location-data brokers out of their app stores - but left plenty behind to spy on your movements and sell them to third parties.

The problem with iOS isn’t that Apple operates an App Store - it’s that Apple prevents others from offering competing app stores. If you like Apple’s decisions about which apps you should be able to use, that’s great! But that’s a system that works well - and fails badly. No matter how much you trust Apple’s judgments today, there’s no guarantee that you’ll feel that way tomorrow.

After all, Apple’s editorial choices are, and always have been, driven by a mix of wanting to deliver a quality experience to its users, and wanting to deliver profits to its shareholders. The inability of iOS users to switch to a rival app store means that Apple has more leeway to take down apps its users like without losing customers over it.

The US Congress is wrestling with this issue, as are the courts, and one of the solutions they’ve proposed is to order Apple to carry apps it doesn’t like in its App Store. This isn’t how we’d do it. There are lots of ways that forcing Apple to publish software it objects to can go wrong. The US government has an ugly habit of ordering Apple to sabotage the encryption its users depend on.

But Apple also sometimes decides to sabotage its encryption, in ways that expose its customers to terrible risk.

Like Facebook, Apple makes a big deal out of those times where it really does stick up for its users - but like Facebook, Apple insists that when it chooses to sell those users out, they shouldn’t be able to help themselves.

As far as Apple - and Facebook, and Google, and other large tech companies - are concerned, we’re entitled to just as much privacy as they want to give us, and no more.

That’s not enough. Facebook is right that users should be able to choose app stores other than Apple’s, and Apple is wrong to claim that users who are given this choice will be exposed to predatory and invasive apps. Apple’s objections imply that its often fantastic privacy choices can’t possibly be improved upon. That’s categorically wrong. There’s lots of room for improvement, especially in a mass-market product that can’t possibly cater to all the specific, individual needs of billions of users.

Apple is right, too. Facebook users shouldn’t have to opt into spying to use Facebook.

The rights of users shouldn’t be left to the discretion of corporate boardrooms. Rather than waiting for Apple (or even Facebook) to stand up for their users, the public deserves a legally enforceable right to privacy, one that applies to Facebook and Apple…and the small companies that might pop up to offer alternative app stores or user interfaces.

Cory Doctorow

Stop This California Bill that Bans Affordable Broadband Rules

1 week 4 days ago

Update, June 16: In response to pressure from advocates, A.B. 2749's bans on affordability have been removed from the bill.

The California Senate Energy, Utilities, and Communications Committee will soon be the first to consider new and terrible amendments to Assemblymember Quirk-Silva’s A.B. 2749—which is backed by AT&T and other telecommunications interests. The legislation would prohibit the state from implementing affordable broadband rules for broadband companies receiving state subsidies as part of the new California infrastructure law, S.B. 156. This is despite the fact that California taxpayers—not the industry—are paying to build these networks.

An affordability requirement is crucial because this will be the only broadband access point for these Californians, and it’s likely they will be subject to monopolistic pricing practices. A University of California-Berkeley study found that rural Californians facing a Frontier monopoly were paying rates more than four times higher than those in areas with competitive markets. Under an AT&T monopoly, they paid rates more than three times higher. On average, Americans living in areas with only one or two ISPs pay prices five times higher than those in areas with competition in the market. Given what we’ve already seen in these areas across the state, we can expect Californians to face similar prices if A.B. 2749 becomes law.

The bill would also undermine the federal fiber infrastructure plan the Biden Administration released in May by allowing inferior non-fiber solutions to be equally eligible for state subsidies. This is a mistake federal and California legislators have made before, wasting billions of dollars. A.B. 2749 would prevent California from merging money from the state’s infrastructure plan with the funds from the new federal infrastructure plan to deliver high-quality access to underserved and unserved populations. Because the federal law also requires states to establish affordability rules for broadband access, California would be unable to fully leverage the federal funding if A.B. 2749 became law.

To provide a detailed legal analysis of the legislation, EFF filed the following letter with the Senate Utilities Committee. EFF also submitted this letter explaining why it makes sense to require fiber networks financed with public money to be subject to basic affordability rules.

Passing A.B. 2749 would unwind the promise of California’s infrastructure law to promote local, affordable, future-proof access to the internet. It would also undermine the amazing array of local efforts happening all across the state to deliver affordable solutions, by allowing major corporations a chance to capture the dollars without conditions. To stop that from happening, we need California lawmakers to hear why you oppose the bill.

If you are a California based business, non-profit, organization, or local elected official, EFF is collecting signers here to join us in telling the legislature we oppose the bill. 

We’ve won many broadband victories in the past. We can do it again here.

Consumers Have Been Winning in Sacramento. Don’t Let the Empire Strike Back.

Large internet service providers (ISPs) such as AT&T have historically had their way in Sacramento when it comes to telecom law and policy. Ten years ago, they were granted laws that gave them $300 million to pay for increasingly obsolete technology. They also influenced policymakers to secure wide-reaching deregulation and even banned cities from building their own broadband. But the winds started to shift in 2018 after the Federal Communications Commission, then under the Trump Administration, repealed national network neutrality rules. That prompted California to pass its own net neutrality law, S.B. 822,  which was introduced by State Senator Scott Wiener.

Every year since then, there has been a successful, pro-consumer effort in Sacramento. In 2018, California repealed the last of its anti-municipal broadband laws with former Assemblymember Chau’s A.B. 1999. In 2019, the California legislature restored regulation of broadband carriers at the California Public Utilities Commission by not moving forward with A.B. 1366. It also passed new rules in response to firefighters’ broadband services being throttled by Verizon during a state emergency (A.B. 1699). Most recently, in response to the COVID pandemic, California passed a package of landmark bills including Senator Lena Gonzalez’s S.B. 4 and Governor Newsom’s budget bill, S.B. 156. These set aside enough funding to deliver future-proof fiber infrastructure to nearly every Californian. It was the largest investment in public broadband infrastructure of any U.S. state and the result of a two-year effort by advocates.

California has made incredible progress for consumer broadband access, and industry incumbents have opposed every single one of these proposals. Now they’re back at it again with Assemblymember Quirk-Silva’s A.B. 2749. Local efforts to promote competition in communities across California are beginning to take root, and ISPs want to stop them before it’s too late. Don’t let them get their way again. Tell your lawmakers to oppose A.B. 2749.

Ernesto Falcon

Victory! New York’s Vaccine Privacy Bill Heads to Governor’s Desk

1 week 5 days ago

In a win for medical privacy in the Empire State, the New York Legislature has passed a pivotal bill that protects people’s private immunity information, like COVID-19 vaccine status, from being used to track their movements or otherwise be used against them in unauthorized ways. The Electronic Frontier Foundation advocated for the bill, which is now headed to New York Gov. Kathy Hochul’s desk to be signed into law.

A. 7326/S. 6541 protects the confidentiality of medical immunity information by limiting what data can be collected or shared, who it can be shared with, and how long it can be stored. In New York, bills must have identical versions in each chamber in order to move forward; these passed the Senate and Assembly on June 2 and 3, respectively.

New Yorkers are often required to present information about their immunity—like vaccination records or test results—to get in the door at restaurants, gyms and entertainment venues. This bill protects them from that information being misused by private companies, the government, or other entities that wish to track their movements or use their private medical information to punish or discriminate against them. Assuring people that their medical information will not be used in unauthorized ways increases much-needed trust in public health efforts. 

This bill expressly prohibits immunity information from being shared with immigration or child services agencies seeking to deport someone or take away their children based on vaccination status. It also requires that those asking for immunity information must accept an analog credential, such as a paper record.

EFF has previously expressed privacy and security concerns about how the Excelsior Pass system, in which New Yorkers store and prove their immunity information, was introduced and how it will be expanded under current plans. 

We must put privacy protections in place now to ensure personal medical information is kept safe throughout and beyond the pandemic, and that information won’t be used to harm the most vulnerable members of our society. EFF looks forward to this bill becoming law as soon as possible.

Malaika Fraley

Senator Declares Amazon Ring's Audio Surveillance Capabilities "Threaten the Public"

1 week 5 days ago

Massachusetts Senator Ed Markey, a long-time critic of Amazon’s surveillance doorbell camera, Ring, has released a letter of concern and inquiry regarding the device’s audio capabilities. This is partially in response to a recent study conducted by Consumer Reports that found that once the device’s motion sensor has been triggered, it can record conversation-level audio from up to 25 feet away.

This has disturbing implications for people who walk, bike, or even drive by dozens of these devices every day, not knowing that their conversations may have been captured and recorded. It may be even more problematic for people who live in an apartment building where neighbors have installed Ring cameras indoors, where echoey hallways might amplify conversations that could be recorded even beyond line of sight with the device. A surveillance doorbell owner may even have their own private conversations caught on tape if the device is triggered and captures voices drifting through open windows.

In his letter to Amazon, the senator writes: 

Since Ring has well over 10 million device users, it appears likely that Ring products record millions of Americans’ activity without their knowledge every day. This surveillance system threatens the public in ways that go far beyond abstract privacy invasion: individuals may use Ring devices’ audio recordings to facilitate blackmail, stalking, and other damaging practices. As Ring products capture significant amounts of audio on private and public property adjacent to dwellings with Ring doorbells—including recordings of conversations that people reasonably expect to be private— the public’s right to assemble, move, and converse without being tracked is at risk.

In the UK, a judge ruled in October 2021 that the audio capabilities of Ring cameras amounted to a violation of the Data Protection Act when a neighbor put up multiple cameras aimed at a communal parking lot.

We applaud Senator Markey for his willingness to raise these concerns in public and to bring them straight to Amazon. We echo his concerns and will continue to advocate for default end-to-end encryption for the devices and for an end to default audio collection with every motion-triggered video recording. We will also push Amazon never to incorporate biometrics like voice recognition. 

Matthew Guariglia

EFF Urges Congress to Strengthen the American Data Privacy and Protection Act

1 week 5 days ago

EFF has long been in the business of searching for ways to guarantee digital privacy. That made it essential that, ahead of a hearing titled “Protecting America’s Consumers: Bipartisan Legislation to Strengthen Data Privacy and Security,” we send a letter to the House Energy and Commerce Committee on the draft text of a bill called “The American Data Privacy and Protection Act.”

American consumers need a strong federal privacy law. EFF appreciates the Committee highlighting the national conversation over how the government should protect us from businesses that harvest and monetize our personal information, and address the racial and other bias that excludes consumers of color from opportunities. To achieve these goals, the discussion draft of the American Data Privacy and Protection Act needs to be strengthened in several areas listed in our letter.

This draft might be a step in the right direction on many privacy concerns – assuming it is amended, as discussed in the letter, to ensure strong private enforcement in court, and to not undo other privacy laws at the federal and state levels.

We look forward to working with the sponsors to improve this legislation and strengthen the necessary protections. 

India McKinney

How the Federal Government Buys Our Cell Phone Location Data

1 week 6 days ago

Over the past few years, data brokers and federal military, intelligence, and law enforcement agencies have formed a vast, secretive partnership to surveil the movements of millions of people. Many of the mobile apps on our cell phones track our movements with great precision and frequency. Data brokers harvest our location data from the app developers, and then sell it to these agencies. Once in government hands, the data is used by the military to spy on people overseas, by ICE to monitor people in and around the U.S., and by criminal investigators like the FBI and Secret Service. This post will draw on recent research and reporting to explain how this surveillance partnership works, why it is alarming, and what we can do about it.

Where does the data come from?

Weather apps, navigation apps, coupon apps, and “family safety” apps often request location access in order to enable key features. But once an app has location access, it typically has free rein to share that access with just about anyone.

That’s where the location data broker industry comes in. Data brokers entice app developers with cash-for-data deals, often paying per user for direct access to their device. Developers can add bits of code called “software development kits,” or SDKs, from location brokers into their apps. Once installed, a broker’s SDK is able to gather data whenever the app itself has access to it: sometimes, that means access to location data whenever the app is open. In other cases, it means “background” access to data whenever the phone is on, even if the app is closed.

One app developer received the following marketing email from data broker Safegraph:

SafeGraph can monetize between $1-$4 per user per year on exhaust data (across location, matches, segments, and other strategies) for US mobile users who have strong data records. We already partner with several GPS apps with great success, so I would definitely like to explore if a data partnership indeed makes sense.

But brokers are not limited to data from apps they partner with directly. The ad tech ecosystem provides ample opportunities for interested parties to skim from the torrents of personal information that are broadcast during advertising auctions. In a nutshell, advertising monetization companies (like Google) partner with apps to serve ads. As part of the process, they collect data about users—including location, if available—and share that data with hundreds of different companies representing digital advertisers. Each of these companies uses that data to decide what ad space to bid on, which is a nasty enough practice on its own. But since these “bidstream” data flows are largely unregulated, the companies are also free to collect the data as it rushes past and store it for later use. 
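
To make the “bidstream” concrete, here is a minimal, hypothetical sketch of the kind of payload that changes hands during an ad auction, written as a Python snippet. It is loosely modeled on the publicly documented OpenRTB format, but the app name, identifiers, coordinates, and recipient names are invented for illustration; this is not a real bid request or any particular exchange’s API.

    # Illustrative only: a simplified ad-auction bid request, loosely modeled on
    # the OpenRTB format. Every value below is invented.
    bid_request = {
        "app": {"bundle": "com.example.weatherapp"},              # hypothetical app
        "device": {
            "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",        # resettable advertising ID
            "os": "Android",
            "geo": {"lat": 38.8977, "lon": -77.0365, "type": 1},  # GPS-derived fix
        },
    }

    # Every company invited to bid on the ad slot receives the identifier and the
    # location -- and, absent regulation, nothing stops it from storing both.
    for bidder in ["dsp-one.example", "dsp-two.example", "data-broker.example"]:
        print("forwarding", bid_request["device"]["geo"], "to", bidder)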

The data brokers covered in this post add another layer of misdirection to the mix. Some of them may gather data from apps or advertising exchanges directly, but others acquire data exclusively from other data brokers. For example, Babel Street reportedly purchases all of its data from Venntel. Venntel, in turn, acquires much of its data from its parent company, the marketing-oriented data broker Gravy Analytics. And Gravy Analytics has purchased access to data from the brokers Complementics, Predicio, and Mobilewalla. We have little information about where those companies get their data—but some of it may be coming from any of the dozens of other companies in the business of buying and selling location data.

If you’re looking for an answer to “which apps are sharing data?”, the answer is: “It’s almost impossible to know.” Reporting, technical analysis, and right-to-know requests through laws like GDPR have revealed relationships between a handful of apps and location data brokers. For example, we know that the apps Muslim Pro and Muslim Mingle sold data to X-Mode, and that navigation app developer Sygic sent data to Predicio (which sold it to Gravy Analytics and Venntel). However, this is just the tip of the iceberg. Each of the location brokers discussed in this post obtains data from hundreds or thousands of different sources. Venntel alone has claimed to gather data from “over 80,000” different apps. Because much of its data comes from other brokers, most of these apps likely have no direct relationship with Venntel. As a result, the developers of the apps fueling this industry likely have no idea where their users’ data ends up. Users, in turn, have little hope of understanding whether and how their data arrives in these data brokers’ hands.

Who sells location data? 

Dozens of companies make billions of dollars selling location data on the private market. Most of the clients are the usual suspects in the data trade—marketing firms, hedge funds, real estate companies, and other data brokers. Thanks to lackluster regulation, both the ways personal data flows between private companies and the ways it’s used there are exceedingly difficult to trace. The companies involved usually insist that the data about where people live, sleep, gather, worship, and protest is used for strictly benign purposes, like deciding where to build a Starbucks or serving targeted ads. 

But a handful of companies sell to a more action-oriented clientele: federal law enforcement, the military, intelligence agencies, and defense contractors. Over the past few years, a cadre of journalists have gradually uncovered details about the clandestine purchase of location data by agencies with the power to imprison or kill, and the intensely secretive companies who sell it.

This chart illustrates the flow of location data from apps to agencies via two of the most prominent government-facing brokers: Venntel and Babel Street.

The vendor we know the most about is Venntel, a subsidiary of the marketing-oriented data broker Gravy Analytics. Its current and former clients in the US government include, at a minimum, the IRS, the DHS and its subsidiaries ICE and CBP, the DEA, and the FBI. Gravy Analytics does not embed SDKs directly into apps; rather, it acquires all of its data indirectly through other data brokers. 

Few data brokers reveal where their data comes from, and Venntel is no exception. But investigations and congressional testimony have revealed at least a few of Venntel’s sources. In 2020, Martin Gundersen of NRK Beta filed requests under the GDPR’s Right to Know in order to trace how data about his location made its way to Venntel. He installed two navigation apps from the company Sygic, as well as an app called Funny Weather, and granted them location permissions. Funny Weather sold his data to location broker Predicio, which then sold it to Gravy Analytics. The Sygic apps sold data to both Predicio and another firm, Complementics, which sent data to Gravy as well. All of the data ended up inside Venntel’s database. In 2021, following a lengthy investigation by Sen. Ron Wyden, broker Mobilewalla revealed that it too had sold data to Venntel. 

Gravy Analytics shares some information about its location-data practices on its website. Gravy claims it has access to “over 150 million” devices. It also states outright that it does not gather data from the bidstream. But government officials have told Congress that they believe Venntel’s data is derived both from SDKs and from the bidstream, and there is other evidence to support that belief. One of Venntel’s sources, Mobilewalla, has testified to Congress that it gathers and sells bidstream-based location data. Government contracts describe Venntel’s dataset as containing data from “over 80,000 apps.” Data brokers that rely solely on SDKs, like X-Mode, tend to maintain direct relationships with just a few hundred apps. Venntel’s incredible app coverage makes it likely that at least a portion of its data has been siphoned from the bidstream.

Venntel’s data is disaggregated and device-specific—making it easier for this data to point right to you. Motherboard reported that Venntel allows users to search for devices in a particular area, or to search for a particular device identifier to see where that device has been. It allows customers to track devices to specific workplaces, businesses, and homes. Although it may not include explicitly identifying information like names or phone numbers, this does not mean it is “anonymous.” As one former employee told Motherboard, “you could definitely try and identify specific people.” 
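
A rough sketch may help show why the absence of names does not make this data anonymous. The snippet below assumes nothing about Venntel’s actual systems; it uses a handful of invented records to show how the grid cell where a single advertising ID sits overnight, night after night, effectively points to a home address that public records can then tie to a person.

    from collections import Counter

    # Invented records for one advertising ID: (hour_of_day, latitude, longitude).
    pings = [
        (1, 38.8977, -77.0365), (2, 38.8978, -77.0366), (3, 38.8976, -77.0364),
        (23, 38.8977, -77.0365), (14, 38.9000, -77.0200),  # one daytime fix elsewhere
    ]

    # Keep late-night fixes and snap them to a coarse grid (roughly 100 meters).
    overnight_cells = Counter(
        (round(lat, 3), round(lon, 3))
        for hour, lat, lon in pings
        if hour >= 22 or hour <= 5
    )

    # The most frequent overnight cell is, in practice, usually someone's home.
    print("likely home cell:", overnight_cells.most_common(1)[0][0])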

Venntel has sold several annual licenses to its “Venntel Portal,” a web app granting access to its database, at a price of around $20,000 for 12,000 queries. It has also sold direct access to all of its data from a region, updated daily and uploaded to a government-controlled server, for a more lavish $650,000 per year. 

Babel Street is a government contractor that specializes in “open-source intelligence” (OSINT) services for law enforcement. Its flagship product, Babel X, scrapes and interprets text from social media and other websites and merges OSINT with data gathered from more traditional intelligence techniques. Babel Street is “widely used” by the military, intelligence agencies, private companies, and federal, state, and local law enforcement. It also sells access to app-derived location data through a service called “Locate X,” as first reported by Protocol in March 2020. 

Babel Street first registered Locate X with the U.S. Patent and Trademark Office in 2017. The service allows Babel’s clients to query a database of app-derived location data. Locate X can be used to draw a digital fence around an address or area, pinpoint devices that were in that location, and see where else those devices went in prior months. Records obtained by Motherboard from DHS reveal that, according to a DHS official, “Babel Street basically re-hosts Venntel’s data at a greater cost and with significant constraints on data access.” Babel Street employees have also said that Venntel is the ultimate source of most of the location data that we know to be flowing to the federal government.

Although Babel Street has many public-facing marketing materials, it has attempted to keep details about Locate X a secret. Terms of use provided by Babel Street to its clients ban using Locate X data as evidence, or even mentioning it in legal proceedings. Still, several buyers of Locate X have been reported publicly, including the Air National Guard, the U.S. Special Forces Command (SOCOM), CBP, ICE, and the Secret Service.

Anomaly 6 (or “A6”) also sells app-derived location data to the government. Its existence was first reported by the Wall Street Journal in 2020. 

A6 was founded by a pair of ex-Babel Street employees, Brendan Huff and Jeffrey Heinz. At Babel Street, the two men managed relationships with large government clients, including the Defense Department, the Justice Department, and the intelligence community. After striking off on their own, A6 allegedly began developing a product to compete with Babel Street’s Locate X, and catering its services to a very similar clientele. In 2018, Babel Street sued the company and its founders, and the two companies eventually settled out of court.

A6 presents very little information about itself publicly. Its website comprises just a company logo and an email address on an animated background. It is not registered as a data broker in either California or Vermont. Not much is known about A6’s data sources, either. The Wall Street Journal reported that it collects data via SDKs in “more than 500” mobile apps. According to a 2021 report by Motherboard, these SDKs are deployed by “partners” of the company, not A6 itself, creating a buffer between the company and its data sources. A6 claims its contracts with the government are “confidential” and it can’t reveal which agencies it’s working with. Public procurement records reveal at least one relationship: in September 2020, SOCOM division SOCAFRICA paid $589,000 for A6’s services.

In April 2022, The Intercept and Tech Inquiry reported on presentations that A6 made in a meeting with Zignal Labs, a social media monitoring firm with access to Twitter’s “firehose.” A6 proposed a partnership between the two firms that would allow their clients to determine “who exactly sent certain tweets, where they sent them from, who they were with,” and more. In order to demonstrate its capability, A6 performed a live demonstration: it tracked phones of Russian soldiers amassed on the Ukrainian border to show where they had come from, and it tracked 183 devices that had visited both the NSA and CIA headquarters to show where American intelligence personnel might be deployed. It followed one suspected intelligence officer around the United States, to an American airfield in Jordan, and then back to their home. 

X-Mode is a location data broker that collects data directly from apps with its own SDK. X-Mode began as the developer of a single app, “Drunk Mode,” designed to help users avoid sending embarrassing texts after dark. But once the app started getting traction, the company decided its real value was in the data. It pivoted to develop an SDK that gathered location data from apps and funneled it to X-Mode, which sold the data streams to nearly anyone who would pay. It’s not clear whether X-Mode had direct relationships with any government clients, but it has sold data to several defense contractors that work directly with the U.S. military, including Systems & Technology Research and the Sierra Nevada Corporation. It has also sold to HYAS, a private intelligence firm that tracks “threat actors” suspected of being involved with cyberattacks “to their door” on behalf of law enforcement and private clients.

X-Mode developed an SDK that was embedded directly in apps. It paid developers directly for their data, at a rate of $0.03 per U.S. user per month, and $0.005 per international user. X-Mode’s direct-SDK model also made it possible to figure out exactly which apps shared data with the company by analyzing the apps themselves. That’s why the company made headlines in 2020, when Motherboard revealed that dozens of apps that target at-risk groups - including two of the largest Islamic apps in the U.S., Muslim Pro and Salaat First - were monetizing location data with X-Mode. This visibility also made X-Mode more accountable for its behavior: both Apple and Google concluded that X-Mode violated their developer terms of service, and banned any apps using X-Mode’s SDK from the App Store and the Play Store.

At one time, X-Mode boasted it had data from about 25 million active users in the U.S. and 40 million more worldwide, tracked through more than 400 different apps. After the crackdown by mobile platforms, the company was bought out and rebranded as Outlogic, and it adjusted its public image. But the company is still active in the location data market. Its new parent, Digital Envoy, sells “IP-based location” services, and describes its Outlogic subsidiary as “a provider of location data for the retail, real estate and financial markets.” Digital Envoy also has deep ties to the U.S. government. The Intercept has reported that Digital Envoy contracts with the IRS enforcement division, the DHS Science and Technology Directorate (which has also contracted with Venntel), and the Pentagon’s Defense Logistics Agency. It’s unclear whether Outlogic’s app-based location data is incorporated into any of those Digital Envoy relationships.

How is location data used?

While several contracts between data brokers and federal agencies are public records, very little is known about how those agencies actually use the services. Information has trickled out through government documents and anonymous sources.

Department of Homeland Security

Perhaps the most prominent federal buyer of bulk location data is the U.S. Department of Homeland Security (DHS), along with its subsidiaries, Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP). The Wall Street Journal reported that ICE used the data to help identify immigrants who were later arrested. CBP uses the information to “look for cellphone activity in unusual places,” including unpopulated portions of the US-Mexico border. According to the report, government documents explicitly reference the use of location data to discover tunnels along the border. Motherboard reported that CBP purchases location data about people all around the United States, not just near the border. It conducts those searches without a court order, and it has refused to share its legal analysis of the practice with Congress.

The Federal Procurement Database shows that, in total, DHS has paid at least $2 million for location data products from Venntel. Recently released procurement records from DHS shed more light on one agency’s practice. The records relate to a series of contracts between Venntel and a recently-shuttered research division of DHS, the Homeland Security Advanced Research Projects Agency (HSARPA). In 2018, the agency paid $100,000 for five licenses to the Venntel Portal. A few months later, HSARPA upgraded to a product called “Geographic Marketing Data - Western Hemisphere,” forking over $650,000 for a year of access. This data was “delivered on a daily basis via S3 bucket”—that is, shipped directly to DHS in bulk. From context, it seems like the “Venntel Portal” product granted limited access to data hosted by Venntel, while the purchase of “Geographic Marketing Data” gave DHS direct access to all of Venntel’s data for particular regions in near-real-time.

The HSARPA purchases were made as part of a program called the Data Analytics Engine (DA-E). In a Statement of Work, DHS explained that it needed data specifically for Central America and Mexico in order to support the project. Elsewhere, the government has boasted that ICE has used “big data architecture” from DA-E to generate “arrests, seizures, and new leads.” ICE has maintained an ongoing relationship with Venntel in the years since, signing at least six contracts with the company since 2018.

Federal law enforcement

The FBI released its own contracts with Venntel in late 2021. The documents show that the FBI paid $22,000 for a single license to the Venntel Portal, but are otherwise heavily redacted. Another part of the Department of Justice, the Drug Enforcement Administration (DEA), committed $25,000 for a one-year license in early 2018, but Motherboard reported that the agency terminated its contract before the first month was up. According to the Wall Street Journal, the IRS tried to use Venntel’s data to track individual suspects, but gave up when it couldn’t locate its targets in the company’s dataset. Some of Babel Street’s law enforcement customers have had more success: Protocol reported that the U.S. Secret Service used Locate X to seize illegal credit card skimmers installed at gas pumps in 2018.

Military and intelligence agencies

Military and foreign intelligence agencies have used location data in numerous instances. In one unclassified project, researchers at Mississippi State University used Locate X data to track movements around Russian missile test sites, including those of high-level diplomats. The U.S. Army funded the project and said it showed “good potential use” of the data in the future. It also said that the collection of cell phone data was consistent with Army policy as long as no “personal characteristics” of the phone’s owner were collected (but of course, detailed movements of individuals are actually “personal characteristics”).

Another customer of Locate X is the Iowa Air National Guard, as first reported by Motherboard. Specifically, the Des Moines-based 132d wing—which reportedly conducts “long-endurance coverage” and “dynamic execution of targets” with MQ-9 Reaper drones—purchased a 1-year license to Locate X for $35,000. The air base said the license would be used to “support federal mission requirements overseas,” but did not elaborate further.

Anomaly 6 only has one confirmed federal client: the U.S. Special Operations Command, or SOCOM. In 2020, SOCAFRICA - a division which focuses on the African continent - spent nearly $600,000 on a “commercial telemetry feed” from A6. In March 2021, SOCOM told Vice that the purpose of the contract was to “evaluate” the feasibility of using A6 services in an “overseas operating environment,” and that the government was no longer executing the contract. In September 2021, federal procurement records show that the U.S. Marines’ special operations command, MARSOC, executed another contract for $8,700 for “SME Support” from A6. (SME could stand for Subject Matter Expert, implying that A6 provided training or expertise.)

Finally, the Defense Intelligence Agency (DIA) has confirmed that it, too, works with location data brokers. In a January 2021 memo to Senator Ron Wyden, DIA stated that it “provides funding to another agency” that purchases location data from smartphones on its behalf. The data is global in scope, including devices inside and outside the United States, though the DIA said it segregates U.S. data points into a separate database as it arrives. The U.S. location database can only be queried after a “specific process” involving approval from multiple government agencies, and the DIA stated that permission had been granted five times in the previous two and a half years. The DIA claimed it needs a warrant to access the information. It’s unclear which data broker or brokers the DIA has worked with.

Is it legal for the federal government to buy our location data?

In a word, “no.” The Fourth Amendment prohibits unreasonable searches and seizures, and it requires particularity in warrants. If the federal government wants specific location data about a specific person, it must first get a warrant from a court based on probable cause of crime. If the federal government wants to set up a dragnet of the ongoing movements of millions of identifiable people for law enforcement purposes, too bad – that’s a forbidden general search. The federal government cannot do an end-run around these basic Fourth Amendment rules through the stratagem of writing a check to location data brokers.

The U.S. Supreme Court’s ruling on cell-site location information, or CSLI, is instructive. CSLI is generated as cell phones interact with cell towers. It’s collected passively, all the time, from every phone that has cell service. It is less granular than GPS-based location data, and thus cannot locate devices as accurately. The only companies that can access it directly are the phone carriers themselves. In 2018, the Supreme Court ruled in Carpenter v. United States that CSLI is protected by the Fourth Amendment. It also held that the government can’t demand CSLI from telecom companies without a court-approved warrant. Since 2018, all major U.S. carriers have publicly committed to stop selling raw CSLI to anyone. Police do commonly obtain warrants for CSLI pertaining to active investigations.

Courts also are beginning to crack down on “geofence warrants” for GPS data from large companies like Apple and Google. These warrants seek all the phones present in a particular time and place. As EFF has explained, they are general searches that violate the Fourth Amendment’s particularity requirement. One was struck down by a federal district court earlier this year in United States v. Chatrie. Federal purchase of location data about millions of people raises similar Fourth Amendment concerns.

With access to location data from commercial data brokers, federal agencies can query data about the movements of millions or billions of identifiable people at once. They are not limited to data about a single area or slice of time. As Anomaly 6 reportedly demonstrated, they can start from a single time and place, then look forwards or backwards at the location histories of hundreds of devices at once, learning where their owners live, work, and travel. Agencies can make extraordinarily broad queries that span entire states or countries, and filter the resulting data however they see fit. It appears that this kind of full-database access is what the DHS purchased in its 2018 deal with Venntel. This stretches the Fourth Amendment’s particularity requirement far beyond the breaking point.
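
To illustrate the breadth of that kind of access, here is a minimal sketch of the two-step query the preceding paragraph describes, assuming only a bulk feed of (device ID, timestamp, latitude, longitude) records like the ones these brokers sell. The records, the geofence, and the code are invented for illustration and are not any vendor’s real interface.

    from datetime import datetime

    # Invented bulk feed: one (device_id, timestamp, lat, lon) record per location fix.
    feed = [
        ("device-a", datetime(2020, 6, 1, 9, 15), 38.8977, -77.0365),
        ("device-a", datetime(2020, 6, 1, 20, 5), 38.9100, -77.0400),
        ("device-b", datetime(2020, 6, 1, 9, 20), 40.7128, -74.0060),
    ]

    def in_box(lat, lon, box):
        """Return True if a fix falls inside a rectangular geofence."""
        (lat_min, lat_max), (lon_min, lon_max) = box
        return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

    # Step 1: find every device seen at one place during one time window.
    box = ((38.89, 38.90), (-77.04, -77.03))
    start, end = datetime(2020, 6, 1, 9, 0), datetime(2020, 6, 1, 10, 0)
    hits = {d for d, t, lat, lon in feed if in_box(lat, lon, box) and start <= t <= end}

    # Step 2: pull each matching device's full history, before and after the event.
    for device in sorted(hits):
        history = sorted((t, lat, lon) for d, t, lat, lon in feed if d == device)
        print(device, history)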

In 2021, the Center for Democracy and Technology published a comprehensive report on the legal framework underpinning the government’s purchasing of location data. It concluded that when law enforcement and intelligence agencies purchase personal data about Americans, “they are evading Fourth Amendment safeguards as recognized by the Supreme Court.” EFF agrees. The Fourth Amendment should not be for sale. Sensitive data about our movements should not be collected and sold in the first place, and it certainly shouldn’t be made available to government agencies without a particularized warrant.

Finally, transparency laws in Vermont and California require certain kinds of data brokers, including those that process location data, to register with the state. Of the companies discussed above, X-Mode, Gravy Analytics, and Venntel are registered in California, but Babel Street and Anomaly 6 are not. These laws need better enforcement.

What can we do?

Congress must ban federal government purchase of sensitive location information. The issue is straightforward: government agencies should not be able to buy any personal data that normally requires a warrant. 

But legislatures should not stop there. Personal data is only available to government because it’s already amassed on the private market. We need to regulate the collection and sale of personal data by requiring meaningful consent. And we should ban online behavioral advertising, the industry which built many of the tracking technologies that enable this kind of mass surveillance.

The developers of mobile operating systems also have the power to shut down this insidious data market. For years, both Apple and Google have explicitly supported third-party tracking with technologies like the advertising identifier. They must reverse course. They must also crack down on alternative tracking methods like fingerprinting, which would make it much more difficult for brokers to track users. Furthermore, OS developers should require apps to disclose which SDKs they bundle and with whom they share particular kinds of data. Both Apple and Google have made strides toward data-sharing transparency, giving users a better idea of how particular apps access sensitive permissions. However, users remain almost entirely in the dark about how each app may share and sell their data.

Fortunately, you can also take steps to keep your location data out of the hands of data brokers and the federal government. As a first step, disable your advertising identifier. This removes the most ubiquitous tool that data brokers use to link data from different sources to your device. Next, review the apps on your phone and revoke any permissions they do not absolutely need, especially location access; data brokers often obtain information via apps, and any app with location permission is a potential vector. Finally, uninstall apps that you do not trust.

Bennett Cyphers

EFF’s Flagship Jewel v. NSA Dragnet Spying Case Rejected by the Supreme Court

1 week 6 days ago

We all deserve the right to have a private conversation online. That's why EFF has taken on government surveillance for the past 30-plus years. One of our longest-running efforts has been to stop the National Security Agency’s (NSA) surveillance that sweeps up tens—if not hundreds—of millions of innocent people in its dragnet. Our work will continue.

But today the U.S. Supreme Court slammed the courthouse door on our flagship NSA surveillance lawsuit, Jewel v. NSA, effectively validating the government’s claims that something known and debated across the world—the NSA’s mass surveillance—is somehow too secret to be challenged in open court by ordinary members of the public whose communications were caught in the net.

The Supreme Court this week allowed our case to be dismissed because it’s a “secret” that the mass spying programs everyone has known about since at least the Snowden documents came to light in 2013 (and that were disclosed in the national news long before that) involved the nation’s two largest telecommunications carriers. Yes, you read that right: something we all know is still officially a “secret” and so cannot be the subject of litigation. Specifically, the Court refused to take up and reconsider a Ninth Circuit decision (and an underlying district court ruling) that held that the state secrets privilege blocked our clients’ efforts to prove that their data was intercepted such that they had standing to sue.

The central fact that these courts found to be “secret” is that AT&T and Verizon participated in the mass spying, even though we had submitted ample public evidence to support that finding. The Ninth Circuit decision was so cursory that the court didn’t even review the lower court’s sealed opinion addressing the government’s actual evidence of the spying, despite the fact that the District Court specifically required the government to present that evidence in secret.

In the name of national security, the Supreme Court has now allowed the government to unilaterally cut off lawsuits like ours at the knees, thereby preventing people from challenging egregiously unlawful surveillance.

As we said in our briefs, the courts have now “created a broad national-security exception to the Constitution that allows all Americans to be spied upon by their government while denying them any viable means of challenging that spying.” This exception prevents courts from even considering whether the surveillance violates the Constitution or other privacy laws, effectively denying Americans their day in court and the benefits of the laws that Congress passed to protect them. The American people and our Constitution deserve more from federal courts.

EFF's NSA Spying Cases

First, some history: EFF filed Jewel v. NSA in 2008, and our original case, Hepting v. AT&T, in 2006. Both cases arose from three different kinds of surveillance that the U.S. government initiated in the aftermath of 9/11: first, the mass telephone records collection program; second, the mass Internet metadata collection program; and third, what we later learned was called the Upstream program, in which the NSA, with the help of the major telecommunications companies, tapped into the Internet backbone at key locations to monitor communications as they passed by. And of course in 2013, after years of disingenuous denials by those in government, Edward Snowden’s documents helped make it crystal clear to the entire world that these programs existed, pushed the government to admit them, and helped spur some real reform (more on that below).

And so much happened along the way. In 2006, EFF won in the District Court against AT&T’s claim that the case must be dismissed, along with collected cases against Verizon and other telecommunications carriers, and we defended that decision in the Ninth Circuit. But in 2008, Congress shamefully enacted “retroactive immunity,” protecting the telecommunications carriers from the consequences of their rampant violations of federal and state privacy laws. Undeterred (and promised by many in Congress that their only concern was protecting the carriers), we launched a suit directly against the NSA in Jewel v. NSA, challenging the same three illegal mass spying programs. An earlier-filed case against the government, called Shubert, also survived. Later we launched another case, First Unitarian Church, which focused on the right of association at issue in the telephone records collection portion of the NSA spying, but Jewel remained the flagship case. Initially, the federal courts recognized that the law protected our case against several government attacks, and the District Court judge even ordered the government to present evidence of our standing in secret. But ultimately, and confusingly, the lower courts changed course.

Supreme Court Embraces National Security Secrecy as Blocking the Rights of Victims

The Supreme Court’s rejection of our case is shameful, but not surprising.  This term the Supreme Court had two other chances to bring some basic constitutional accountability to the national security state and it failed to do so.  First, in Abu Zubaydah (also called Husayn), the Supreme Court confirmed that in national security speak, “secret” doesn’t mean secret. Instead, for purposes of the state secrets privilege, “secret” means whatever facts the government wants to keep out of court by refusing to formally confirm them, regardless of how widely known they are.

In Abu Zubaydah, the Supreme Court allowed the government to claim the state secrets privilege over the fact that the plaintiff’s torture (which was admitted) occurred in a U.S. government black site in Poland, a fact that had been confirmed by the European courts as well as the former Polish Prime Minister. The Supreme Court allowed this claim of state secrets to block the plaintiff’s attempt to get information from former government contractors via the international discovery process to support his claim against Polish officials in the European courts.

The second case is called Fazaga, where the Supreme Court rejected the argument that Congress preempted the state secrets privilege when it created a specific process in the Foreign Intelligence Surveillance Act (FISA), found in section 1806(f), for handling the government’s claims of national security secrecy in cases arising from alleged illegal surveillance.  Fazaga is a case that arose out of an undercover FBI investigation by a confidential informant into a Muslim community in Southern California that was so alarming that the targets themselves called the FBI to report the informant as a potential terrorist.

In Fazaga the Supreme Court held that despite Congress’ express creation of a method for a federal court to secretly review evidence of claimed illegal surveillance, the Executive Branch can still unilaterally assert the state secrets privilege and prevent the FISA statute from actually being applied. The Jewel plaintiffs relied on section 1806(f), plus another statute, 18 U.S.C. § 2712, that separately authorizes redress for illegal surveillance. But the Supreme Court refused to recognize that Congress intended to override the Executive Branch’s ability to claim state secrets.

Although we will still be on the lookout for ways to get the Courts to stand up for your rights not to be spied upon by your government, with these decisions the Supreme Court has fully endorsed the idea that the Executive Branch has unilateral authority to use secrecy arguments, no matter how flimsy, to close the courthouse doors for those seeking to vindicate their rights to have a private conversation.

No Legal Victory, but Lots of Shifts in NSA Spying

While we did not prevail in the litigation, the work we did, and the millions of Americans who raised concerns about the NSA spying over the years, did result in some dramatic changes in NSA spying. Congress stopped the mass telephone records program in 2015 as part of the USA Freedom Act. Of course, the revamped program still (and predictably) ended up collecting and keeping a huge number of telephone records, a fact the government reported itself before the law authorizing the program expired. The mass Internet metadata program was stopped in 2011, allegedly due to Congressional concerns that it wasn’t actually providing any usable intelligence.

The Upstream program continues, however, although it has been limited to just metadata (it had included content review) because of ongoing issues raised by the FISA Court. Upstream is nonetheless still used to broadly surveil millions of Americans, with the government recently disclosing that between December 2020 and November 2021, the FBI queried the data of potentially more than 3 million U.S. persons without a warrant. The statute that purports to authorize Upstream, section 702, is set to expire in December 2023.

Many Heroes to Thank

We are forever grateful to our clients, Carolyn Jewel, Tash Hepting, Erik Knutzen, Joice Walton, and Gregory Hicks, who stood up for everyone in the U.S., and who remained steadfast despite the many twists and turns of this case. They understood how illegal the NSA’s spying was and were resolute in demanding their day in court. We are grateful for their courage.

We are also eternally grateful to our whistleblowers, most especially Mark Klein, who first showed up at our door in 2006, and who at great personal risk brought us key evidence of the NSA’s spying. He demonstrated that there was an NSA facility in Room 641a of the AT&T building on Folsom Street in San Francisco, a revelation that served as the centerpiece of the case. Mark also came with us to Washington in 2008 to try to stop AT&T from getting retroactive immunity from Congress.

Big thanks also to Bill Binney, J. Kirk Wiebe, Thomas Drake, and of course, the indomitable Ed Snowden, for sacrificing so much to try to bring the truth to America and the world.  They are our heroes and should be to anyone who cares about privacy.

We’d also like to especially thank our co-counsel, who were indispensable and unflagging despite years of ups and downs in the courts. Our team leader, Richard R. Wiebe, shepherded this case, with help from Thomas E. Moore III, Jim Tyre (RIP), Aram Antaramian, and Michael Kwun. Additionally, Rachel Meny, Benjamin Berkowitz, and many other folks from the law firm of Keker, Van Nest & Peters LLP helped in this long journey. This case would not have been possible without all of their immense contributions.

The Fight for Your Privacy Continues

Finally, though our challenge in Jewel has ended, the fight to end the NSA’s mass surveillance continues. As noted above, section 215 has expired, although the government is allowed to continue using it in investigations that started before expiration. Nonetheless, it should not be renewed and Congress should push the government to end its use under preexisting authorizations.

Equally importantly, in late 2023 we’ll have a chance to put an end to Section 702, one of the key provisions that Congress passed in 2008 to protect the NSA’s activities and which currently authorizes what is left of the Upstream program. Congress should not renew Section 702 next year. Year after year, the Inspector General’s reports and FISA Court review of the program find huge problems in its implementation. The NSA simply cannot do this kind of mass surveillance consistent with the Constitution. It’s time for all of these gigantic, ungovernable, unaccountable and insanely expensive mass spying endeavors to end. It’s time for Americans to be once again allowed to have an online conversation without the NSA watching who they talk to, when and for how long.  While the courts have abdicated their responsibility to protect you against NSA Spying, there is a good chance to push Congress to scale back the NSA’s authority. And we’ll need all of your help to make sure our voices are heard and heeded. 


Related Cases: Jewel v. NSA; Hepting v. AT&T; First Unitarian Church of Los Angeles v. NSA
Cindy Cohn

Platform Liability Trends Around the Globe: Moving Forward

2 weeks 4 days ago

This is the final installment in a four-part blog series surveying global intermediary liability laws. The earlier posts in the series are available on EFF’s Deeplinks blog.

As this blog series has sought to show, increased attention on issues like hate speech, online harassment, misinformation, and the amplification of terrorist content continues to prompt policymakers around the globe to adopt stricter regulations for speech online, including more responsibilities for online intermediaries. 

EFF has long championed efforts to promote freedom of expression and create an enabling environment for innovation in a manner that balances the needs of governments and other stakeholders. We recognize that there’s a delicate balance to be struck between addressing the very real problem of platforms hosting and amplifying harmful content and activity, and providing enough protection to those platforms that they are not incentivized to remove protected user speech, thereby preserving freedom of expression.

Today, as global efforts to change long-standing intermediary liability laws continue, we now use a set of questions to guide the way we look at such proposals. We approach new platform regulation proposals with three primary questions in mind: Are intermediary liability regulations the problem? Is the proposed solution going to fix that problem? And can inevitable collateral effects be mitigated? 

We are hopeful that policymakers will shift in the right direction on internet policy and affirm the important role of immunity for online intermediaries in fostering an enabling environment for users’ freedom of expression. We outline our recommendations on how to do so below.

Our Recommendations

Online Intermediaries Should Not Be Held Liable for User Content

Intermediaries are vital pillars of internet architecture, and fundamental drivers of free speech, as they enable people to share content with audiences at an unprecedented scale. Immunity from liability for third-party content plays a vital role in propelling the success of online intermediaries. This is one of the fundamental principles that we believe must continue to underpin internet regulation: Platforms should not be held responsible for the ideas, images, videos, or speech that users post or share online. 

Regulators should make sure that online intermediaries continue to benefit from comprehensive liability exemptions and are not held liable for content provided by users as they are not involved in co-creating or modifying that content in a way that substantially contributes to illegality. Any additional obligations must be proportionate and must not curtail free expression and innovation.

No Mandated Content Restrictions Without an Order by a Judicial Authority

Where governments choose to impose positive duties on online platforms, it’s crucial that any rules governing intermediary liability must be provided by laws and be precise, clear, and accessible. Such rules must follow due process and respect the principle that it should be up to independent judicial authorities to assess the illegality of content and to decide whether content should be restricted. Most importantly, intermediaries should not be held liable for choosing not to remove content simply because they received a private notification by a user. In jurisdictions where knowledge about illegal content is relevant for the liability of online intermediaries, regulators should follow the principle that actual knowledge of illegality is only obtained by intermediaries if they are presented with an order by a court or similar authority that operates with sufficient safeguards for independence, autonomy, and impartiality. 

No Mandatory Monitoring or Filtering

Obligations for platforms to monitor what users share online have a chilling effect on the speech of users, who change their behavior and abstain from communicating freely if they know they are being actively observed. Such monitoring also undermines users’ privacy rights and their right to private life. Policymakers should thus not impose obligations on digital service providers to affirmatively monitor their platforms or networks for illegal content that users post, transmit, or store. Nor should there be a general obligation for platforms to actively monitor facts or circumstances indicating illegal activity by users. The use of automated filters that evaluate the legality of third-party content or prevent the (re)upload of illegal content should never be mandated, especially considering that filters are prone to error and tend to over-block legitimate material. By the same token, no liability should be based on an intermediary’s failure to detect illegal content, as this would incentivize platforms to filter, monitor, and screen user speech.

Limit the Scope of Takedown Orders

Recent cases have demonstrated the perils of worldwide content takedown orders. In Glawischnig-Piesczek v Facebook, the Court of Justice of the EU held that a court of a Member State can order platforms not only to take down defamatory content globally, but also to take down identical or “equivalent” material. This was a terrible outcome, as the content in question may be deemed illegal in one State but clearly lawful in many other States. Also, by referring to “automated technologies” to detect similar language, the court opened the door to monitoring by filters, which are notoriously inaccurate and prone to overblocking legitimate material.

Reforms to internet legislation are an opportunity to acknowledge that the internet is global and that takedown orders of global reach are immensely unjust and impair users’ freedom. New rules should make sure that court orders, and particularly injunctions, are not used to superimpose the laws of one country on every other state in the world. Takedown orders should be limited to the content in question and based on the principles of necessity and proportionality in terms of geographical scope. Otherwise, one country’s government could dictate what residents of other countries can say, see, or share online, leading to a “race to the bottom” toward an ever more restrictive and splintered global internet. A worthwhile effort to put limits on the scope of takedown orders was made in the proposal for the EU’s Digital Services Act, which provides that court orders should not exceed what is strictly necessary to achieve their objective and must respect the Charter of Fundamental Rights and general principles of international law.

Regulate Processes, Rather than Speech

Instead of holding platforms accountable for content shared by users or forcing platforms to scan every piece of content uploaded on their servers, modern platform regulation should focus on setting out standards for platforms’ processes, such as changes to terms of service and algorithmic decision making. Accountable governance, such as notifications and explanations to users whenever platforms change their terms of service, can help reduce the information asymmetry between users and powerful gatekeeper platforms. Users should be empowered to better understand how they can notify platforms about both problematic content and problematic takedown decisions and should be informed about how content moderation works on large platforms. Privacy by default, improved transparency, and procedural safeguards, such as due process and effective redress mechanisms for removal or blocking decisions, can help to ensure the protection of fundamental rights online.

Moving Forward in the Right Direction

We strongly believe that enforcing heavy-handed liability provisions on intermediaries for the content shared by their users hinders the right to freedom of expression. This doesn’t mean that we shouldn't consider proposals to reform existing regulatory regimes and introduce new elements in legislation that help address the fundamental flaws of the current online ecosystem. 

For many users, being online means being locked into a few powerful platforms, nonconsensually tracked across the Web, with their ability to access and share information left at the mercy of algorithmic decision-making systems that curate their online lives. Policymakers should put users back in control of their online experiences rather than give the few large platforms that have monopolized the digital space more power, or even oblige them, to police expression and to arbitrate access to content, knowledge, and goods and services. 

Adjustments to internet legislation offer policymakers an opportunity to examine existing  rules and to make sure that the internet remains an open platform for free expression. While the trend towards stricter liability for online intermediaries has us dismayed, it has simultaneously reinvigorated our commitment to advocating for regulatory frameworks that promote freedom of expression and innovation.

The other blog posts in this series can also be found on EFF’s Deeplinks blog.
Christoph Schmon

Mandatory Student Spyware Is Creating a Perfect Storm of Human Rights Abuses

2 weeks 4 days ago

Spyware apps were foisted on students at the height of the Covid-19 lockdowns. Today, long after most students have returned to in-person learning, those apps are still proliferating, enabling an ever-expanding range of human rights abuses. In a recent Center for Democracy and Technology report, 81 percent of teachers said their schools use some form of this "student monitoring" spyware. Yet many of the spyware companies supplying these apps seem neither prepared for nor concerned about the harms they are inflicting on students.

These student spyware apps promise scalable surveillance-as-a-service. The lure of “scalability” is a well-documented source of risk to marginalized users, whose needs for individualized consideration are overshadowed by the prospect of building mass-scale, one-size-fits-all “solutions” to social problems. The problems of scale are dangerously exacerbated by laws that disparately impact marginalized communities.

Today, Americans face an unprecedented, record-breaking wave of legislation targeting transgender youth: from sports bans, to speech and literature bans, to the criminalization of life-saving healthcare, all on top of the widespread practice of locker-room and bathroom bans.

And it’s not just trans kids in the crosshairs: Roe v. Wade, the Supreme Court precedent that protects the right to have an abortion, is likely about to be overturned.

That means that students who use their devices to research trans healthcare or abortion-related material could find those devices weaponized against them, potentially resulting in criminal charges. If prosecutors consider charges against students, the data gathered by mandatory student spyware apps like Bark, Gaggle, GoGuardian, and Securly will prove invaluable.

Another recent report, this one from Senator Warren’s office, concluded that student spyware apps are more dangerous than previously imagined. Their use in schools has disproportionately targeted students from marginalized communities and needlessly increased their contact with law enforcement.

Bark, one of the spyware companies singled out by the report’s authors, replied by insisting that they develop their machine learning mechanisms informed by data ethics checklists. But these checklists are ineffective, as demonstrated by the ongoing, mounting harms caused by student spyware, such as outing LGBTQ+ students.

Securly’s own example spreadsheet of content filtering categories includes “Health” sites (like WebMD), which are flagged as “needs supervision,” and “Adult” sites, which are fully blocked. While blocking “adult” content in schools may sound reasonable, this category needs to be understood in context: the machine learning algorithms that filter content routinely misclassify any LGBTQ+ content as “Adult” content. Gaggle blocks access to any LGBTQ+ content. GoGuardian blocks access to reproductive health materials.
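As a purely illustrative sketch, the Python snippet below shows how a coarse category-to-action policy of the kind described above turns a single misclassification upstream into an automatic block downstream. The category names echo Securly's example spreadsheet, but the classifier, URLs, and policy logic are invented stand-ins, not any vendor's actual code.

```python
# Hypothetical sketch: a coarse category -> action policy. The classifier is a
# stand-in for a vendor's opaque ML model; everything here is invented.
POLICY = {
    "Health": "needs supervision",   # e.g. a WebMD visit gets flagged for review
    "Adult":  "block",
    "Other":  "allow",
}

def classify(url: str) -> str:
    """Stand-in classifier. Real ML filters have been reported to mislabel
    LGBTQ+ resources as 'Adult' content; that error is hard-coded here to
    show its downstream effect."""
    if "lgbtq-support" in url:
        return "Adult"               # the misclassification
    if "webmd" in url:
        return "Health"
    return "Other"

def filter_request(url: str) -> str:
    # The policy applies whatever label the classifier produced, with no
    # human judgment in the loop.
    return POLICY[classify(url)]

print(filter_request("https://example.org/lgbtq-support"))  # -> "block"
print(filter_request("https://www.webmd.com/"))             # -> "needs supervision"
```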

The recklessness of flagging WebMD and huge quantities of LGBTQ+ material gives us a sense of the lack of care taken by many student spyware vendors. If visits to WebMD are flagged for adult review, and there are already examples of these apps outing LGBTQ+ students, it isn’t difficult to see the harms that will occur as more anti-trans laws pass and the legal right to abortion is overturned.

Apps like Bark and Gaggle could be compelled by law enforcement into gathering information on students who are LGBTQ+ or seeking an abortion. But these apps are wildly unprepared to be the in-school enforcers of such laws. Even a casual reading of their underwhelming responses to Senator Warren’s report makes it clear that they are unconcerned about their future role as Witchfinder General in the abortion and gender wars.

The overwhelming medical consensus holds that denying trans healthcare puts youths’ lives at risk. Laws that criminalize your identity violate our civil liberties. So do bills that undermine freedom of speech.

Software that produces and forwards data that is used as evidence against young people seeking to exercise their human rights and civil liberties affects us all. Whether or not you are immediately affected by anti-LGBTQ+ laws, anti-trans laws, or anti-abortion laws; whether or not you are a student required to use a spyware-infected device, this should matter to you.

It matters to us. EFF fights for the right of all users to be served by their technology, not jailed by it.

Daly Barnett