Help Bring Dark Patterns To Light

3 weeks 3 days ago

On social media, shopping sites, and even children’s apps, companies are using deceptive user experience design techniques to trick us into giving away our data, sharing our phone numbers and contact lists, and submitting to fees and subscriptions. Every day, we’re exploited for profit through “dark patterns”: design tactics used in websites and apps to manipulate you into doing things you probably would not do otherwise.

So today, we’re joining Consumer Reports, Access Now, PEN America, and Harry Brignull (founder of DarkPatterns.org), in announcing the Dark Patterns Tip Line. It’s an online platform hosted by Consumer Reports that allows people to submit and highlight deceptive design patterns they see in everyday products and services.

Your submissions will help privacy advocates, policymakers, and agency enforcers hold companies accountable for their dishonest and harmful practices. Especially misleading designs will be featured on the site.

Dark patterns can be deceptive in a variety of ways. For example, a website may trick visitors into submitting to unwanted follow-up emails by making the email opt-out checkbox on a checkout page harder to see: for instance, using a smaller font or placing the opt-out in an inconspicuous place in the user flow. Consider this example from Carfax:

The screenshot was posted by Reddit user u/dbilbey to the Asshole Design subreddit community in September 2020.

Another example: Grubhub hid a 15% service fee under the misleadingly vague “taxes and fees” line of its receipt.

The screenshot was taken directly from the Grubhub iOS app in September 2020.

You can find many more samples of dark patterns on the “sightings” page of the Dark Patterns Tip Line.

The process for submitting a dark pattern to the Tip Line is simple. Just enter the name and type of company responsible, a short description of the deceptive design, and where you encountered it. You can also include a screenshot of the design. Submitting to the Dark Patterns Tip Line requires you to agree to the Consumer Reports User Agreement and Privacy Policy. The Dark Patterns Tip Line site has some special limitations on Consumer Reports' use of your email, and the site doesn’t use cookies or web tracking. You can opt out of some of the permissions granted in the Consumer Reports Privacy Policy here.

A sample submission to the Dark Patterns Tip Line.

Please share the Tip Line with people you think may be interested in submitting, such as community organizations, friends, family, and colleagues. The Dark Patterns Tip Line is collecting submissions until June 9th.

Help us shine a light on these deceptive designs, and fight to end them, by submitting any dark patterns you’ve come across to the Dark Patterns Tip Line.



Shirin Mori

Coalition Launches ‘Dark Patterns’ Tip Line to Expose Deceptive Technology Design

3 weeks 3 days ago
EFF Joins Groups Fighting Exploitative Data-Gathering in Apps and on the Web

San Francisco – The Electronic Frontier Foundation (EFF) has joined Consumer Reports, Access Now, PEN America, and DarkPatterns.org in launching the “Dark Patterns Tip Line”—a project for the public to submit examples of deceptive design patterns they see in technology products and services.

“Dark patterns” design tactics are used to trick people into doing all kinds of things they don’t mean to, from signing up for a mailing list to submitting to recurring billing. Examples seen by users every day include hard-to-close windows urging you to enter your email address on a news site, email opt-outs on shopping sites in difficult-to-find locations in difficult-to-read text, and pre-checked boxes allowing ongoing charges.

“Your submissions to the Dark Patterns Tip Line will help provide a clearer picture of people’s struggles with deceptive interfaces. We hope to collect and document harms from dark patterns and demonstrate the ways companies are trying to manipulate all of us with their apps and websites,” said EFF Designer Shirin Mori. “Then we can offer people tips to spot dark patterns and fight back.”

If you see a dark pattern, head to Darkpatternstipline.org, hosted by Consumer Reports. Then, click “submit a pattern,” and enter the name and type of company responsible, a short description of the misleading design, and where you found it. You can also include a screenshot. Submitting to the Dark Patterns Tip Line requires you to agree to the Consumer Reports user agreement and privacy policy. The Dark Patterns Tip Line site has some special limitations on Consumer Reports’ use of your email, and the site doesn’t use cookies or web tracking.

“If we want to stop dark patterns on the internet and beyond, we first have to assess what’s out there, and then use these examples to influence policymakers and lawmakers,” said Mori. “We hope the Dark Patterns Tip Line will help us move towards more fair, equitable, and accessible technology products and services for everyone.”

For the Dark Patterns Tip Line, hosted by Consumer Reports:
https://darkpatternstipline.org

Contact: Shirin Mori, Designer, mori@eff.org
Rebecca Jeschke

Lawsuit Against Snapchat Rightfully Goes Forward Based on “Speed Filter,” Not User Speech

3 weeks 4 days ago

The U.S. Court of Appeals for the Ninth Circuit has allowed a civil lawsuit to move forward against Snapchat, a smartphone social media app, brought by the parents of three teenage boys who died tragically in a car accident after reaching a maximum speed of 123 miles per hour. We agree with the court’s ruling, which confirmed that internet intermediaries are not immune from liability when the harm does not flow from the speech of other users.

The parents argue that Snapchat was negligently designed because it incentivized users to drive at dangerous speeds by offering a “speed filter” that could be used during the taking of photos and videos. The parents allege that many users believed that the app would reward them if they drove 100 miles per hour or faster. One of the boys had posted a “snap” with the “speed filter” minutes before the crash.

The Ninth Circuit rightly held in Lemmon v. Snap, Inc. that Section 230 does not protect Snapchat from the parents’ lawsuit. Section 230 is a critical federal law that protects user speech by providing internet intermediaries with partial immunity against civil claims for hosting user-generated content (see 47 U.S.C. § 230(c)(1)). Thus, for example, if a review site publishes a review that contains a statement that defames someone else, the reviewer may be properly sued for writing and uploading the defamatory content, but not the review site for hosting it.

EFF has been a staunch supporter of Section 230 since it was enacted in 1996, recognizing that the law has facilitated free speech and innovation online for 25 years. By partially shielding internet intermediaries from potential liability for what their users say and do on their platforms, Section 230 creates the legal breathing room for entrepreneurs to create a multitude of diverse spaces and services online. By contrast, with greater legal exposure, companies are incentivized in the opposite direction—to take down more user speech or to cease operations altogether.

However, this case against Snapchat shows that Section 230 does not—and was never meant to—shield internet intermediaries (such as social media platforms) from liability in all cases. Section 230 already has several exceptions, including for when online platforms host user speech that violates federal criminal law or intellectual property law.

In this case, the court explained that Section 230 does not protect companies when a claim is premised on harm that flows from the company’s own speech or actions, independent from the speech of other users. As the Ninth Circuit explained, the parents are aiming to hold Snapchat liable for creating a defective product with a feature that inspired users, including their children, to drive too fast. Nothing in the claim tries to hold Snapchat liable for publishing the “speed filter” post by one of the boys before they died in the crash. Nor would the parents “be permitted under § 230(c)(1) to fault Snap for publishing other Snapchat-user content (e.g., snaps of friends speeding dangerously) that may have incentivized the boys to engage in dangerous behavior.”

Thus, the court repeatedly emphasizes in the opinion that the parents’ claim “stand[s] independently of the content that Snapchat’s users create with the Speed Filter,” and internet intermediaries may lose Section 230 immunity for offering defective tools, “so long as plaintiffs’ claims do not blame them for the content that third parties generate with those tools.”

The Ninth Circuit also noted that the Lemmon case is distinguishable from other cases where the plaintiffs tried to creatively plead around Section 230 by arguing that the design of the website or app was the problem, when in fact the plaintiffs’ harm flowed from other users’ content—such as online content related to sex trafficking, illegal drug sales, and harassment. In these cases, the courts rightly granted the companies immunity under Section 230.

By emphasizing this distinction, we believe the decision does not create a troublesome incentive to censor user speech in order to avoid potential liability. 

One thing to keep in mind is that the Ninth Circuit’s decision not to apply Section 230 here does not automatically mean that Snapchat will be held liable for negligent product design. As we saw in a seminal Section 230 case, the website Roommates.com was denied Section 230 immunity by the Ninth Circuit, but later defeated a housing discrimination claim. The Lemmon case now goes back down to the district court, which will allow the case to proceed to a consideration of the merits.

Sophia Cope

EFF tells California Court that Forensic Software Source Code Must Be Disclosed to the Defendant

4 weeks 1 day ago

Last week, EFF filed an amicus brief in State v. Alvin Davis in California, in support of Mr. Davis's right to inspect the source code of STRMix, the forensic DNA software used at his trial. This is the most recent in a string of cases in which EFF has argued that a defendant has the right to examine DNA analysis software. Earlier this year, the courts in two of those cases, United States v. Ellis and State v. Pickett, agreed with EFF that the defendants were entitled to the source code of TrueAllele, one of STRMix's main competitors. 

Criminal defendants must be allowed to examine how DNA matching software used against them works to make sure that the software's result is reliable. Access to the source code cannot be replaced by testimony regarding how the program should work, since there could be coding errors. This is especially true for the newest generation of forensic DNA software, like STRMix and TrueAllele, which are fraught with reliability and accuracy concerns. In fact, a prior examination of STRMix led to the discovery that there were programming errors that could have created false results in 60 cases in Queensland, Australia.

That same worry is present in this case. Although the crime itself is harrowing, the evidence is anything but conclusive. An elderly woman was sexually assaulted and murdered in her home and two witnesses described seeing a black man in his 50s on the property on the day of the murder. Dozens of people had passed through the victim's home in the few months leading up to the murder, including Mr. Davis and another individual. Mr. Davis is an African American man who was in his 70s at the time of the murder and suffers from Parkinson’s disease. Another individual who met the witnesses’ description had a history of sex crimes including sexual assault with a foreign object.

DNA samples were taken from dozens of locations and items at the crime scene. Mr. Davis’s DNA was not found on many of those, including a cane that was allegedly used to sexually assault the victim. Traditional DNA software was not able to match Mr. Davis to the DNA sample from a shoelace that was likely used to tie up the victim—but STRMix did, and the prosecution relied heavily on that result before the jury. The first trial against Mr. Davis, who is now in a wheelchair due to Parkinson’s, ended with a hung jury. He was convicted after a second trial and sentenced to life without parole.

Hopefully the California court will follow the rulings in Ellis and Pickett, and recognize that there is no justice in convictions based on secret evidence.

Related Cases: United States v. Ellis; California v. Johnson
Hannah Zhao

President Biden Revokes Unconstitutional Executive Order Retaliating Against Online Platforms

4 weeks 1 day ago

President Joe Biden on Friday rescinded a dangerous and unconstitutional Executive Order issued by President Trump that threatened internet users’ ability to obtain truthful information online and retaliated against services that fact-checked the former president. The Executive Order called on multiple federal agencies to punish private online social media services for content moderation decisions that President Trump did not like.

Biden’s rescission of the Executive Order comes after a coalition of organizations challenging the order in court called on the president to abandon the order last month. In a letter from Rock The Vote, Voto Latino, Common Cause, Free Press, Decoding Democracy, and the Center for Democracy & Technology, the organizations demanded the Executive Order’s rescission because “it is a drastic assault on free speech designed to punish online platforms that fact-checked President Trump.”

The organizations filed lawsuits to strike down the Executive Order last year, with Rock The Vote, Voto Latino, Common Cause, Free Press, and Decoding Democracy’s challenge currently on appeal in the U.S. Court of Appeals for the Ninth Circuit. The Center for Democracy & Technology’s appeal is currently pending in the U.S. Court of Appeals for the D.C. Circuit.

Cooley LLP, Protect Democracy, and EFF represent the plaintiffs in Rock The Vote v. Biden. We applaud Biden’s revocation of the “Executive Order on Preventing Online Censorship,” and are reviewing his rescission of the order and conferring with our clients to determine what impact it has on the pending legal challenge in the Ninth Circuit.

Trump issued the unconstitutional Executive Order in retaliation for Twitter fact-checking May 2020 tweets spreading false information about mail-in voting. The Executive Order, issued two days later, sought to undermine a key law protecting internet users’ speech, 47 U.S.C. § 230 (“Section 230”), and to punish online platforms, including by directing federal agencies to review and potentially stop advertising on social media and by kickstarting a federal rulemaking to re-interpret Section 230.

Related Cases: Rock the Vote v. Trump
Aaron Mackey

Victory! California City Drops Lawsuit Accusing Journalists of Violating Computer Crime Law

4 weeks 1 day ago

The City of Fullerton, California has abandoned a lawsuit against two bloggers and a local website. The suit dangerously sought to expand California’s computer crime law in a way that threatened investigative reporting and everyday internet use.

The city’s lawsuit against the bloggers and the website Friends For Fullerton’s Future alleged, in part, that the bloggers violated the California Comprehensive Computer Data Access and Fraud Act because they improperly accessed non-public government records on the city’s file-sharing service that it used to disclose public records. But the settlement agreement between the city and bloggers shows those allegations lacked merit and badly misrepresented the city’s online security practices. It also vindicates the bloggers, whom the city targeted for doing basic journalism.

The city’s poor approach to online security was apparent from the start. The city used Dropbox to create a shareable folder, which it called the “Outbox,” that was publicly accessible to anyone who had the link. And evidence in the lawsuit showed that city officials did not enable any of Dropbox’s built-in security features, such as requiring passwords or limiting access to particular individuals, before making the Outbox link publicly accessible.

Then the city widely shared the Outbox URL with members of the public, including the bloggers, when disclosing public records and for other city business. And because there were no restrictions or other controls on the Outbox folder, anyone with the link could access all the files and subfolders it contained, including files city officials claimed should not have been publicly accessible.

The crux of the city’s lawsuit alleged that the bloggers, Joshua Ferguson and David Curlee, accessed some Outbox subfolders and files “without permission,” in violation of California’s computer crime law, because the individuals did not follow officials’ directions to only access particular folders or files in the Outbox.

The city’s interpretation was a disturbing effort to stretch California’s criminal law, known as Section 502, to punish the journalists. That’s why EFF, along with the ACLU and ACLU of Southern California, filed a friend-of-the-court brief in support of the journalists and website. The Reporters Committee for Freedom of the Press also filed a brief in support of the bloggers. And an appellate court was scheduled to hear arguments in the case next week.

The city’s interpretation ignored that officials had made the entire Outbox public, such that anyone with the link would be able to access everything in it, just as anyone is able to peruse any publicly accessible website. This configuration is the opposite of what the city should have done if it wanted to prevent access to sensitive information. Moreover, the city’s theory flouted open-access norms of the internet.

The city’s interpretation also sought to turn officials’ written directions to access only certain files into a violation of Section 502, a dangerous proposition that would give government officials broad discretion to criminalize internet access that they did not like. The interpretation also threatened to chill investigative journalism by criminalizing reporting on government records obtained by mistake or otherwise without officials’ permission, a theory the Supreme Court has repeatedly rejected on First Amendment grounds.

In the settlement, the city abandoned its Section 502 claims and admitted that its allegations did not accurately reflect its security practices for the Outbox folder. The settlement states “[t]he City acted on its belief that access controls were in place” when it filed its lawsuit and “that its primary goal was to retrieve confidential documents for the protection of city employees, residents and those doing business with the City.”

But a statement the city included in the settlement acknowledges:

However, due to errors by former employees of the City in configuring the account and lax password controls, some of the files and folders were in fact accessible and able to be downloaded and/or accessed without circumventing access controls.

The statement continues:

Based on the City’s additional investigation and through discussions with Mr. Ferguson and Mr. Curlee, the City now agrees that documents were not stolen or taken illegally from the shared file account as the City previously believed and asserted. The City retracts any and all assertions that Friends for Fullerton’s Future, Mr. Ferguson and/or Mr. Curlee acted illegally in accessing the documents.

The settlement also requires the city to pay Ferguson and Curlee $60,000 each as well as $230,000 for their attorney’s fees and costs.

EFF is thrilled that the city has walked away from its effort to penalize Ferguson, Curlee, and the blog for engaging in good journalism. And we congratulate the pair, the blog, and their attorney, Kelly Aviles, on being vindicated.

Of course, it would have been better if the city had never filed the lawsuit in the first place. The suit resulted in two rounds of appeals, including one reversing a prior restraint issued against the blog, and risked a dangerous expansion of Section 502. The statute, like the federal Computer Fraud and Abuse Act, is notoriously vague and can be misused to target individuals for their online activities.

Aaron Mackey

Governor Newsom’s Budget Proposes Historic Investment in Public Fiber Broadband

4 weeks 1 day ago

This morning, California Governor Gavin Newsom announced his plans for the state’s multi-billion dollar surplus and federal recovery dollars, including a massive, welcome $7 billion investment in public broadband infrastructure. It's a plan that would give California one of the largest public broadband fiber networks in the country. The proposal now heads to the legislature to be ratified by June 15 by a simple majority. Here are the details:

The Plan: California Builds Fiber Broadband Highway; Locals Build the Onramps

Internet infrastructure shares many commonalities with public roads. Surface streets that crisscross downtowns and residential areas connect to highways via on-ramps. Those highways are a high-speed, high-capacity system that connect cities to one another over long distances.

In broadband, that highway function—connecting distant communities—is called "the middle mile," while those local roads, which connect with every home and business, are called "the last mile."

Governor Newsom's plan is for the State of California to build all that middle-mile infrastructure—high-speed links that will bring future-proof capacity to the state's small, remote, rural communities, putting them on par with the state's large and prosperous cities.

Laying fiber infrastructure like this brings terabits of broadband capacity to unserved and underserved communities in rural areas. Simultaneously, this plan dramatically lowers the cost to the communities themselves, who are in charge of developing their own, locally appropriate last-mile plans.

To make local efforts economically viable, the Governor’s budget envisions a long-term financing program, accessible by any municipality, cooperative, or local non-profit engaged in building local fiber infrastructure that connects to the state’s open access network.

Long-term financing and fiber go hand in hand. Fiber is future-proof, capable of meeting public broadband demand for decades to come. That long-term value is an uncomfortable fit with the short-term expectations of Big ISP market investors, whose focus on immediate returns has held back much-needed American investment in adequate digital infrastructure, fit for the 21st century.

The California plan, which leverages $500 million to access multiple billions in low-cost loans with 30- to 40-year repayment schedules, is the patient money that fiber infrastructure needs—a visionary bet on the state's long-term future. The fiber itself will be useful for decades after that debt is retired, giving rural communities broadband access to cover all their projected needs into the 22nd century.

As we’ve noted in the past, national private ISPs have proven themselves unwilling to tackle the rural fiber challenge, even when they stand to make hundreds of millions of dollars by doing so. Their desire for fast profits over long-term investments is so great that they would rather bankrupt themselves than deploy fiber in rural areas. The same is true for low-income access even in the most densely populated cities, a gap the Governor's plan will enable local solutions to resolve.

The State Government Will Help Communities Prepare for the Fiber Future

A crucial aspect of the plan is the creation of technical assistance teams tasked with helping communities plan their fiber rollouts. These teams are also charged with helping communities design sustainable models that will deliver affordable broadband to all.

When the U.S. embarked upon a national electrification program in the early 20th century, government agencies didn't simply announce the program and retire to the sidelines while local communities worked out the details for themselves. Instead, the government formed myriad partnerships with local communities to help plan out their electrical grids, create financial plans, and train local operators so they could keep their new electrical grids humming. Governor Newsom’s budget updates this proven strategy for local fiber broadband networks.

EFF strongly supports this measure. Running a gigabit fiber network is technically challenging, but with guidance and technical support, it is well within the capacity of every community. State assistance in designing 21st century infrastructure plans combined with a state rollout of middle mile fiber networks is a powerful mixture of local empowerment and economic development.

We Have to Rally in Sacramento, Like Right Now

The cable lobby has long viewed fiber broadband as an existential threat to their high-speed broadband monopolies. A technical analysis by EFF’s engineering team found that fiber optics as a transmission medium vastly surpasses anything coaxial cable will be capable of, simply as a matter of physics. That's why more than 1 billion fiber lines are being laid across advanced Asian nations from South Korea to China.

If Californians want cheap, symmetrical (fast uploads as well as downloads) gigabit (and beyond) internet at their homes and businesses, we must get our state legislature to pass this infrastructure plan next month.

Otherwise, most of us will remain trapped in a cable monopoly market paying 200% to 300% above competitive rates for our sluggish broadband service. Worse yet, when the next generation of applications and services requiring symmetrical gigabit, 10 gigabit, and even 100 gigabit speeds is developed in the coming years, Californians will be frozen out of them altogether.

At that point, the "digital divide" will be joined by a "speed chasm" in broadband access. We risk a major drag on our state's economic development. We can avoid that risk!  All we need is a long-term, future-proof investment in our communities and a law stating the obvious: all Californians deserve 21st century internet access.

Ernesto Falcon

How A Camera Patent Was Used to Sue Non-Profits, Cities, and Public Schools

4 weeks 1 day ago
Stupid Patent of the Month

Patent trolls are everyone’s problem. A study from 2019 showed that 32% of patent troll lawsuits are directed at small and medium-sized businesses. We told the stories of some of those small businesses in our Saved by Alice project.

But some patent trolls go even further. Hawk Technology LLC doesn’t just sue small businesses (although it does do that)—it has sued school districts, municipal stadiums, and non-profit hospitals. Hawk Tech has filed more than 200 federal lawsuits over the last nine years, mostly against small entities. Even after the expiration of its primary patent, RE43,462, in 2014, Hawk continued filing lawsuits on it right up until 2020. That’s possible because patent owners are allowed to seek up to six years of past damages for infringement.

One might have hoped that six years after the expiration of this patent, we might have seen the end of this aggressive patent troll. Nope. The U.S. Patent and Trademark Office has granted Hawk Tech another patent, U.S. Patent No. 10,499,091. It’s just as bad as the earlier one, and since last summer, Hawk Tech has been litigating it.

Camera Plus Generic Terms

The ‘091 patent’s first claim simply claims a video surveillance system, then adds a bunch of computer terms. Those terms include things like “receiving video images at a personal computer,” “digitizing” images that aren’t already digital, “displaying” images in a separate window, “converting” video to some resolution level, “storing” on a storage device, and “providing a communications link.” These terms are utterly generic.

Claim 2 just describes allowing live and remote viewing and recording at the same time—basic streaming, in other words. Claim 3 adds the equally unimpressive idea of watching the recording later. The additional claims are no more impressive, as they basically insist that it was inventive in 2002 to livestream over the Internet—nearly a decade after the first concert to have a video livestream. Most laughably, claim 5 specifies a particular bit rate of Internet connection—as if that would make this non-invention patentable.

In order to be invalidated in court, however, the ‘091 patent would have to be considered by a judge. And Hawk Tech’s lawsuits get dismissed long before that stage—often in just a few months. That’s because the company reportedly settles cases at the bottom level of patent troll demands, typically for $5,000 or even less. That’s significantly less than a patent attorney would request even for a retainer to start work, and a tiny fraction of the $2 million (or sometimes much more) it can cost to defend a patent lawsuit through trial.

The patent monetization industry includes the kind of folks that can be counted on to sue a ventilator company in the middle of a pandemic. Even in that context, Hawk Tech has taken some remarkable steps.

Hawk Tech has sued a municipal stadium that hosts an Alabama college football team; a suburban Kentucky transit system with just 27 routes; non-profit thrift stores and colleges; and a Mississippi public school district that serves an area with a very high (46%) rate of child poverty. That school district is one of at least three that Hawk Tech has sued. These defendants would be hard pressed to mount a legal defense that could easily cost hundreds of thousands of dollars.

One type of company you won’t see on the long list of defendants is a company that actually makes camera systems. Instead, Hawk Tech finds those companies’ customers and goes after them. For instance, Hawk Tech drew up an infringement claim chart against Seon, a maker of bus camera and GPS systems; then used that chart to sue not Seon, but the Transit Authority of Northern Kentucky (TANK), based on a Seon pamphlet that pointed to TANK as a “case study.” Instead of suing camera company Eagle Eye, Hawk Tech sued the city of Mobile, Alabama, likely after seeing a promotional video made by Eagle Eye on how the city’s stadium used its camera systems.

The problem of what to do about patent trolls that demand nuisance-level settlements is a tough one. What may be a “nuisance” settlement in the eyes of large law firms can still be harmful to a charity or a public school serving impoverished students.

That’s why EFF has advocated for strong fee-shifting rules in patent cases. Parties who bring lawsuits based on bogus patents won’t be chastened until they are penalized by courts. We also have supported reforms like the 2013 Innovation Act, which would have allowed customer-based lawsuits like the Hawk Tech cases to be stayed in situations when the manufacturer of the allegedly infringing device steps in to litigate.

Right now, there are two different parties seeking to invalidate Hawk Tech’s ‘091 patent and collect legal fees. One is Nevada-based DTiQ, a camera company whose customers, including a Las Vegas sandwich shop, have been sued by Hawk Tech. Another is Castle Retail, a company that owns three supermarkets in Memphis. Let’s hope one of those cases gets to a judgment before Hawk Tech fires off another round of bogus lawsuits against small companies—or public schools.

Joe Mullin

How Your DNA—or Someone Else’s—Can Send You to Jail

4 weeks 1 day ago

Although DNA is individual to you—a “fingerprint” of your genetic code—DNA samples don’t always tell a complete story. The DNA samples used in criminal prosecutions are generally of low quality, making them particularly complicated to analyze. They are not very concentrated, not very complete, or are a mixture of multiple individuals’ DNA—and often, all of these conditions are true. If a DNA sample is like a fingerprint, analyzing mixed DNA samples in criminal prosecutions can often be like attempting to isolate a single person’s print from a doorknob of a public building after hundreds of people have touched it. Despite the challenges in analyzing these DNA samples, prosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process. This is why it is essential that any DNA analysis tool’s source code be made available for evaluation. It is critical to determine whether the software is reliable enough to be used in the legal system, and what weight its results should be given.

A Breakdown of DNA Data

To understand why DNA software analyses can be so misleading, it helps to know a tiny bit about how it works. To start, DNA sequences are commonly called genes. A more generic way to refer to a specific location in the gene sequence is a “locus” (plural “loci”). The variants of a given gene or of the DNA found at a particular locus are called “alleles.” To oversimplify, if a gene is like a highway, the numbered exits are loci, and alleles are the specific towns at each exit.


Forensic DNA analysis typically focuses on around 13 to 20 loci and the allele present at each locus, making up a person’s DNA profile. By looking at a sufficient number of loci, whose alleles are distributed among the population, a kind of fingerprint can be established. Put another way, knowing the specific towns and exits a driver drove past can also help you figure out which highway they drove on.
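To make the analogy concrete, here is a minimal, illustrative Python sketch of how a single-source profile can be represented and compared. It is not a real forensic tool: the locus names below are real markers used in forensic panels, but the alleles are invented, and real comparisons involve more loci plus statistical weighting.

```python
# Illustrative toy only, not a real forensic tool. A profile maps each locus
# (an "exit" in the analogy) to the set of alleles (the "towns") found there.
reference_profile = {
    "D8S1179": {12, 14},   # real locus names, invented alleles
    "D21S11": {28, 30},
    "TH01": {6, 9.3},
}

crime_scene_profile = {
    "D8S1179": {12, 14},
    "D21S11": {28, 30},
    "TH01": {6, 9.3},
}

def profiles_match(a, b):
    """True only if every locus present in both profiles has identical alleles."""
    shared = a.keys() & b.keys()
    return all(a[locus] == b[locus] for locus in shared)

print(profiles_match(reference_profile, crime_scene_profile))  # True
```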

To figure out the alleles present in a DNA sample, a scientist chops the DNA into different alleles, then uses an electric charge to draw it through a gel in a method called electrophoresis. Different alleles will travel at different rates, and the scientist can measure how far each one traveled and look up which allele corresponds to that length. The DNA is also stained with a dye, so that the more of it there is, the darker that blob will be on the gel.

Analysts infer what alleles are present based on how far they traveled through the gel, and deduce what amounts are present based on how dark the band is—which can work well in an untainted, high quality sample. Generally, the higher the concentration of cells from an individual and the less contaminated the sample by any other person’s DNA, the more accurate and reliable the generated DNA profile.

The Difficulty of Analyzing DNA Mixtures

Our DNA is found in all of our cells. The more cells that we shed, the higher the concentration of our DNA can be found, which generally also means more accuracy from DNA testing. However, our DNA can also be transferred from one object to another. So it’s possible that your DNA can be found on items you’ve never had contact with or at locations you’ve never been. For example, if you’re sitting in a doctor’s waiting room and scratch your face, your DNA may be found on the magazines on a table next to you that you never flipped through. Your DNA left on a jacket you lent a friend can transfer onto items they brush by or at locations they travel to. 

Given the ease with which DNA is deposited, it is no surprise that DNA samples from crime scenes are often a mixture of DNA from multiple individuals, or “donors.” Investigators gather DNA samples by swiping a cotton swab at the location where the perpetrator may have deposited their DNA, such as a firearm, a container of contraband, or the body of a victim. In many cases where the perpetrator’s bodily fluids are not involved, the DNA sample may only contain a small amount of the perpetrator’s DNA, which could be less than a few cells, and is likely to also contain the DNA of others. This makes trying to identify whether a person’s DNA is found in a complex DNA mixture a very difficult problem. It’s like having to figure out whether someone drove on a specific interstate when all you have is an incomplete and possibly inaccurate list of towns and exits they passed, all of which could have been from any one of the roads they used. You don’t know the number of roads they drove on, and can only guess at which towns and exits were connected.

Running these DNA mixture samples through electrophoresis creates much noisier results, which often contain errors that indicate additional alleles at a locus or omit alleles that are present. Human analysts then decide which alleles appear dark enough in the gel to count and which are light enough to ignore. At least, traditional DNA analysis worked in this binary way: an allele either counted or did not count as part of a specific DNA donor profile.

Probabilistic Genotyping Software and Their Problems 

Enter probabilistic genotyping software. The proprietors of these programs—the two biggest players are STRMix and TrueAllele—claim that their products, using statistical modeling, can determine the likelihood that a DNA profile or combination of DNA profiles contributed to a DNA mixture, rather than taking the binary approach. Prosecutors often describe the analysis from these programs this way: it is X times more likely that the defendant, rather than a random person, contributed to this DNA mixture sample.

However, these tools, like any statistical model, can be constructed poorly. And which assumptions are incorporated in them, and how, can cause the results to vary. They can be analogized to the election forecast models from FiveThirtyEight, The Economist, and The New York Times. They all use statistical modeling, but the final numbers are different because of the myriad design differences from each publisher. Probabilistic genotyping software is the same: all of these programs use statistical modeling, but the output probability is affected by how that model is built. Like the different election models, different probabilistic DNA software packages have diverging approaches for which, and at what threshold, factors are considered, counteracted, or ignored. Additionally, input from human analysts, such as the hypothetical number of people who contributed to the DNA mixture, also changes the calculation. If this is less rigorous than you expected, that’s exactly the point—and the problem. In our highway analogy, this is like a software program that purports to tell you how likely it is that you drove on a specific road based on a list of towns and exits you passed. Not only is the result affected by the completeness and accuracy of the list, but the map the software uses, and the data available to it, matter tremendously as well.
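To see what the reported numbers mean, here is a deliberately oversimplified sketch of the likelihood-ratio arithmetic for a clean, single-source sample. The allele frequencies are invented, and real probabilistic genotyping must also model mixtures, degradation, and allele drop-in and drop-out, which are exactly the modeling choices that make different tools disagree.

```python
# Toy likelihood-ratio calculation for a clean, single-source sample.
# Allele frequencies are made up for illustration.
allele_freqs = {
    "D8S1179": {12: 0.15, 14: 0.20},
    "TH01": {6: 0.23, 9.3: 0.30},
}

evidence = {"D8S1179": (12, 14), "TH01": (6, 9.3)}  # matches the suspect

likelihood_ratio = 1.0
for locus, (a1, a2) in evidence.items():
    p, q = allele_freqs[locus][a1], allele_freqs[locus][a2]
    # P(random person has this heterozygous genotype) = 2pq (Hardy-Weinberg)
    random_match_prob = 2 * p * q
    # P(evidence | suspect contributed) is taken as 1 -- itself an assumption.
    likelihood_ratio *= 1.0 / random_match_prob

print(f"{likelihood_ratio:,.0f} times more likely under the prosecution hypothesis")
```

Every number in that calculation is a modeling choice: change the frequency tables, the assumed number of contributors, or the error model, and the headline figure changes with it.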


Because of these complex variables, a probability result is always specific to how the program used was designed, the conditions at the lab, and any additional or discretionary input used during the analysis. In practice, different DNA analysis programs have produced substantially different probabilities for whether a defendant’s DNA appeared in the same DNA sample, with breathtaking discrepancies of millions-fold!

And yet it is impossible to determine which result, or software, is the most accurate. There is no objective truth against which those numbers can be compared. We simply cannot know what the probability that a person contributed to a DNA mixture is. In controlled testing, we know whether a person’s DNA was part of a DNA mixture or not, but there is no way to figure out whether it was 100 times more likely that the donor’s DNA rather than an unknown person’s contributed to the mixture, or a million times more likely. And while there is no reason to assume that the tool that outputs the highest statistical likelihood is the most accurate, the software’s designers may nevertheless be incentivized to program their product in a way that is more likely to output a larger number, because “1 quintillion” sounds more precise than “10,000”—especially when there is no way to objectively evaluate the accuracy.

DNA Software Review is Essential

Because of these issues, it is critical to examine any DNA software’s source code that is used in the legal system. We need to know exactly how these statistical models are built, and looking at the source code is the only way to discover non-obvious coding errors. Yet, the companies that created these programs have fought against the release of the source code—even when it would only be examined by the defendant’s legal team and be sealed under a court order. In the rare instances where the software code was reviewed, researchers have found programming errors with the potential to implicate innocent people.

Forensic DNA analyses have the whiff of science—but without source code review, it’s impossible to know whether or not they pass the smell test. Despite the opacity of their design and the impossibility of measuring their accuracy, these programs have become widely used in the legal system. EFF has challenged—and continues to challenge—the failure to disclose the source code of these programs. The continued use of these tools, the accuracy of which cannot be ensured, threatens the administration of justice and the reliability of verdicts in criminal prosecutions.

Related Cases: California v. Johnson
Hannah Zhao

FAQ: DarkSide Ransomware Group and Colonial Pipeline

4 weeks 2 days ago

With the attack on Colonial Pipeline by a ransomware group causing panic buying and shortages of gasoline on the US East Coast, many are left with more questions than answers about what exactly is going on. We have provided a short FAQ to the most common technical questions that are being raised, in an effort to shine light on some of what we already know.

What is Ransomware?

Ransomware is a portmanteau of “ransom”—holding stolen property to extort money for its return or release—and “malware”—malicious software installed on a machine. The principle is simple: the malware encrypts the victim’s files so that they can no longer use them, and the attacker demands payment from the victim before decrypting them.

Most often, ransomware uses a vulnerability to infect a system or network and encrypt files to deny the owner access to those files. The key to decrypt the files is possessed by a third party—the extortionist—who then (usually through a piece of text left on the desktop or other obvious means) communicates instructions to the victim on how to pay them in exchange for the decryption key or program.

Most modern ransomware uses a combination of public-key encryption and symmetric encryption in order to lock out the victim from their files. Since the decryption and encryption key are separate in public-key encryption, the extortionist can guarantee that the decryption key is never (not even briefly, during the execution of the ransomware code) transmitted to the victim before payment.
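As an illustration of that hybrid design, here is a sketch of the general technique using Python’s widely used cryptography library. It is not code from any real ransomware family; it simply shows why the key needed for recovery never has to touch the victim’s machine.

```python
# Sketch of the hybrid-encryption pattern described above -- not working
# ransomware, just the general technique, using the "cryptography" library.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The attacker generates this key pair offline; only the PUBLIC key ships
# with the malware. The private key never appears on the victim's machine.
attacker_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
attacker_public = attacker_private.public_key()

# On the victim's machine: encrypt the file with a fresh symmetric key...
file_key = Fernet.generate_key()
ciphertext = Fernet(file_key).encrypt(b"victim's file contents")

# ...then encrypt that symmetric key to the attacker's public key.
locked_file_key = attacker_public.encrypt(file_key, OAEP)

# Only the attacker, holding the private key, can recover file_key and thus
# the files -- which is exactly the leverage behind the ransom demand.
recovered_key = attacker_private.decrypt(locked_file_key, OAEP)
assert Fernet(recovered_key).decrypt(ciphertext) == b"victim's file contents"
```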

Extortionists in ransomware attacks are mainly motivated by the prospect of payment. Other forms of cyberattack are more often conducted by hackers motivated by political or personal factors.

What is the Ransomware Industry?

Although ransomware has existed since the late 1980s, its use has expanded exponentially in recent years. This is partly due to the effectiveness of cryptocurrencies in facilitating payments to anonymous, remote recipients. An extortionist can demand payment in the form of bitcoin in exchange for decryption keys, rather than relying on older, much more regulated financial exchanges. This has driven the growth of a $1.4 billion ransomware industry in the US, based solely on locking out users and companies from their files. Average payments to extortionists are increasing as well. A report by Coveware shows a 31% growth in the average payment between Q2 and Q3 of 2020.

The WannaCry attack in 2017 was one of the largest ransomware incidents to date. Using a leaked NSA exploit dubbed “EternalBlue,” WannaCry spread to more than 200,000 machines across the world, demanding payment from operators of unpatched Windows systems. Displaying a message with a bitcoin address to send payment to, the attack cost hundreds of millions to billions of dollars. An investigation of WannaCry code by a number of information security firms and the FBI pointed to the hacking group behind the attack having connections to the North Korean state apparatus.

What is DarkSide?

The FBI revealed on Monday that the hacking group DarkSide is behind the latest ransomware attack on Colonial Pipeline. DarkSide is a relatively new ransomware group, only appearing on the scene in August 2020 in Russian-language hacking forums. They have positioned themselves as a new type of ransomware-as-a-service business, attempting to cultivate “trust” and a sense of reliability between themselves and their victims. In order to ensure payment, DarkSide has found it useful to establish a reputation which ensures that when the victims deliver the ransom, they are guaranteed to receive a decryption key for their files. In this vein, the group has established a modern, polished website called DarkSide Leaks, aimed at reaching out to journalists and establishing a public face. They say that they solely target well-funded individuals and corporations which are able to pay the ransom asked for, and have a code of conduct claiming not to target hospitals, schools, or non-profits. They have also attempted to burnish their image with token donations to charity. DarkSide, which reportedly asks for ransoms ranging from $200,000 to $2,000,000, produced receipts showing a total of $20,000 in donations to the charities Children International and The Water Project. The charities refused to accept the money.

DarkSide claims that they are not affiliated with any government, and that their motives are purely financial gain—a claim that has been assessed as most likely true by cybersecurity firm Flashpoint. However, DarkSide code analyzed by the firm Cybereason has been shown to check the system’s language settings as a very first step, and halt the attack if the result is a language “associated with former Soviet Bloc nations.” This has fueled speculation in the US that Russia may be affording the group special protection, or at least turning a blind eye to their misdeeds.

The result has been profitable for the cyber-extortion group. In mid-April, the group obtained $11 million from a high-profile victim. Bloomberg reports that Colonial Pipeline paid $5 million to the group.

What exactly happened last Friday?

Colonial Pipeline has operated continuously since the early 1960s, supplying 45% of the US East Coast’s gasoline, diesel, and jet fuel. On Friday, May 7th, it shut down 5,500 miles of its pipeline infrastructure in response to a cyber-extortion attempt. The pipeline restarted on May 12th. Though the incident is still under investigation, the FBI confirmed on Monday what was already speculated: DarkSide was behind the attack.

In an apparent response to—though not an admission of involvement in—the attack, DarkSide released a statement on their website stating that they would introduce “moderation” to “avoid social consequences in the future.”

Why did they target Colonial Pipeline?

If patterns are any indication, DarkSide chose Colonial as a “big game” target due to the deep pockets of the firm, worth about $8 billion. Still, many suspect that DarkSide is now feeling a dawning sense of dread as the lateral effects of their attack play out: panic buying, gas shortages, involvement by federal investigators, and an executive order by President Biden intended to bolster America’s cyberdefenses in response. Escalated to the level of an international incident, DarkSide may see the independence and latitude they are reported to enjoy dissipate under geopolitical pressure.

What can I do to defend myself against ransomware?

Frequently backing up your data to an external hard drive or cloud storage provider will ensure you are able to retrieve it later. If you already have a backup, do not plug the external hard drive into your computer after it is infected: the ransomware will likely target any new device that is recognized. You may need to reinstall your operating system, replace your hard drive, or bring it to a specialist to ensure complete removal of any infection.

You can also follow our guide to keeping your data safe. The Cybersecurity and Infrastructure Security Agency (CISA) has also provided a detailed guide on protecting yourself from ransomware. Note that it’s much easier to defend yourself against malware than to remove it once you’re infected, so it is always advisable to take proactive steps to defend yourself.

Bill Budington

EFF to Ninth Circuit: Don’t Block California’s Pathbreaking Net Neutrality Law

4 weeks 2 days ago

Partnering with the ACLU and numerous other public interest advocates, businesses and educators, EFF has filed an amicus brief urging the Ninth Circuit Court of Appeals to uphold a district court’s decision not to block enforcement of SB 822, a law that ensures that Californians have fair access to all internet content and services.

For those who haven’t been following this issue: after the Federal Communications Commission rolled back net neutrality protections in 2017, California stepped up and passed a bill that does what the FCC wouldn’t: bar ISPs from blocking and throttling internet content and imposing paid prioritization schemes. The major ISPs promptly ran to court, claiming that California’s law is preempted—meaning the FCC’s choice to abdicate binds everyone else—and asking the court to halt enforcement until the question was resolved. On February 23, 2021, Judge John Mendez said no, making it pretty clear that he did not think the ISPs’ challenge would succeed on the merits. As expected, the parties then headed to the Ninth Circuit.

Our brief supporting the district court’s decision explains some of the stakes of SB 822, particularly for communities that are already at a disadvantage. Without legal protections, low-income Californians who rely on mobile devices for internet access and can’t pay for more expensive content may face limits on access that is critical for distance learning, maintaining small businesses, and staying connected. Schools and libraries are also justifiably concerned that without net neutrality protections, paid prioritization schemes will degrade access to material that students and the public need in order to learn. SB 822 addresses that by ensuring that large ISPs do not take advantage of their stranglehold on Californians’ internet access to slow or otherwise manipulate internet traffic.

The large ISPs also have a vested interest in shaping internet use to favor their own subsidiaries and business partners, at the expense of diverse voices and innovation. Absent meaningful competition, ISPs can leverage their last-mile monopolies to customers’ homes and bypass competition for a range of online services. That would mean less choice, lower quality, and higher prices for users—and new barriers to entry for innovators.

We hope the court recognizes how important SB 822 is, and upholds Judge Mendez’s ruling.


Related Cases: California's Net Neutrality Law; California Net Neutrality Cases - American Cable Association, et al. v. Rob Bonta; American Cable Association, et al. v. Xavier Becerra; and United States of America v. State of California
Corynne McSherry

Japan’s Rikunabi Scandal Shows The Dangers of Privacy Law Loopholes

1 month ago

Special thanks to former legal intern Hinako Sugiyama, who was a lead co-author of this post.

Technology users around the world are increasingly concerned, and rightly so, about protecting their data. But many are unaware of exactly how their data is being collected, and would be shocked to learn of the scope and implications of mass consumer data collection by technology companies. For example, many vendors use tracking technologies, including cookies—a small piece of text stored in your browser that lets websites recognize your browser and see your browsing activity or IP address, but not your name or address—to build expansive profiles of user behavior over time and across apps and sites. Such data can be used to infer, predict, or evaluate information about a user or group. User profiles may or may not be accurate or fair, but they can still be used to inform life-altering decisions about the people profiled.
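As a simplified illustration of that mechanism, the sketch below shows the core of what a tracker does: set a random identifier once, then log every page view that carries it back. All names are hypothetical.

```python
# Simplified illustration of cookie-based profiling; all names are hypothetical.
import uuid
from collections import defaultdict

profiles = defaultdict(list)  # cookie ID -> list of observed page visits

def handle_request(cookies, page):
    """What a tracking server does on each request embedding its content."""
    visitor_id = cookies.get("tracker_id") or str(uuid.uuid4())
    profiles[visitor_id].append(page)  # no name or address needed: the ID
    return {"tracker_id": visitor_id}  # alone links visits into one profile

cookies = {}
for page in ["news.example/politics", "shop.example/strollers", "jobs.example/listings"]:
    cookies = handle_request(cookies, page)

print(profiles)  # one pseudonymous profile spanning three unrelated sites
```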

A recent data privacy scandal in Japan involving Rikunabi—a major job-seeking platform that calculated and sold companies algorithmic scores predicting how likely individual job applicants were to decline a job offer—has underscored how users’ behavioral data can be used against their best interests. Most importantly, the scandal showcases how companies design workarounds or “data-laundering” schemes to circumvent data protection obligations under Japan’s data protection law, the Act on the Protection of Personal Information (APPI). This case also highlights the dangers of badly written data protection laws and their loopholes. The Japanese Parliament adopted amendments to the APPI, expected to be implemented by early 2022, intended to close some of these loopholes, but the changes still fall short.

The Rikunabi Scandal

Rikunabi is operated by Recruit Career (at the time of the scandal; it is now Recruit Co., Ltd.), a subsidiary of the media conglomerate Recruit Group, which also owns Indeed and Glassdoor. Rikunabi allows job-seekers to search for job opportunities and mostly caters to college students and others just beginning their careers. It hosts job listings for thousands of companies. Like many Internet platforms, Rikunabi used cookies to collect data about how its users search, browse, and interact with its job listings. Between March 2018 and February 2019, using Rikunabi’s data, Recruit Career—without users’ consent—calculated and sold companies algorithmic scores that predicted how likely an individual job applicant was to decline a job offer or withdraw their application.

Thirty-five companies, including Toyota Motor Corporation, Mitsubishi Electric Corporation, and other Japanese corporate giants, purchased the scores. In response to a public outcry, Recruit Career tried to excuse itself by saying that the companies who purchased the job-declining scores agreed not to use them for the selection of candidates. The company claimed the scores were intended only to help clients communicate better with their candidates, but there was no guarantee that’s how they would be used. Because of Japan’s dominant lifetime employment system, students feared such scores could limit their job opportunities and career choices, potentially affecting their whole professional life.

APPI: Japanese Data Protection Law 

A loophole in the APPI is key to understanding the Rikunabi scheme. Ironically, Japan—the world’s third-biggest economic power and one of the most technologically advanced countries—was the first country whose data protection law was recognized as offering levels of protection equivalent to European Union (EU) law. However, the APPI lags considerably behind EU law on cookie regulation and the use of cookies to identify people.

Under the stronger, stricter, and more detailed EU data protection regulations, cookies can constitute personal data. Identifiers don’t have to include a user’s legal name (meaning the identity found on a national ID card or driver’s license) to be considered personal data under EU law. If entities processing personal data can indirectly identify you, based on multiple data points such as cookies and other identifiers likely to distinguish you from others, that is considered processing personal data. This is what EU authorities refer to as “singling out” to indirectly identify people: isolating some or all records which identify an individual, linking at least two records of the same individual to identify someone, or inferring identification by looking at certain characteristics and comparing them to other characteristics. The very definition of personal data under the EU’s General Data Protection Regulation (GDPR) refers to things that are “online identifiers.” GDPR guidelines specifically mention that cookie identifiers may be used to create profiles of and identify people. If companies process personal data in a way that could tell one person apart from another, then this person is “identified or identifiable.” And if the data is about a person, and used with the purpose of evaluating the individual, or is likely to have an impact on the person’s rights or interests, such data “relates to” the “identified or identifiable” person.

These are key elements of how personal data is defined within EU regulation, and they are valuable for understanding this case. Why? Because EU regulation requires companies to request users’ prior consent before using any identifying cookies, except ones strictly necessary for things like remembering items in your shopping cart or information entered into forms. In contrast, the APPI uses very different criteria to judge whether cookies or similar machine-generated identifiers are personal data. APPI guidelines ask whether the company collecting, processing, and transferring cookies can readily collate them with other information, by a method used in the ordinary course of business, to find out the legal identity of an individual. So if a company could identify an individual only by asking another company for access to other data to collate with a cookie, the cookie is not considered personal data for that company. The company can thus freely collect, process, and transfer the cookie even when a recipient of the cookie can easily re-identify the person by linking it with another data set. Under this test, companies can indirectly identify people by means of singling out without running afoul of the APPI.

The Rikunabi Scheme: Data Laundering to Circumvent the Spirit of the Law

The strategy involved three players. The first two are Recruit Career and Recruit Communications. Recruit Career is the company that operates Rikunabi, the job-search website. Recruit Communications is a marketing and advertising company, which Recruit Career subcontracted to create and deliver algorithmic scores. The third player is the one purchasing the scores: Rikunabi’s clients such as Toyota Motor Corporation.

According to a disclosure by Recruit Career, the scheme operated as follows:

Rikunabi First Scheme

Recruit Career collected data about users who visited and used the Rikunabi site. This included their real names, email addresses, and other personal data, as well as their browsing activity on Rikunabi. For example, one user’s profile might contain information about which companies they searched for, which listings they looked at, and what industries they seemed most interested in. All of this information was linked to a Rikunabi cookie ID. To create the algorithmic scores, Recruit Career shared Rikunabi users’ browsing history and activity with Recruit Communications, linked to their Rikunabi cookie IDs but omitting real names.

Rikunabi Second Scheme

At the same time, client companies such as Toyota accepted job applications on their own websites. Each client company collected applicants’ legal names and contact information, and also assigned each applicant a unique applicant ID. All of this information was linked to the company’s own Employer cookie IDs. For the scoring work, each client company instructed applicants to take a web survey, which was designed to allow Recruit Communications to directly collect their Employer cookie IDs and the applicant IDs connected to them. In this way, Recruit Communications was able to collect both applicants’ Rikunabi cookies and the cookies assigned to those applicants by client companies.

Recruit Communications somehow linked these two sets of identifiers, possibly by using cookie syncing (a method web trackers use to link cookies with one another, combining the data one company has about a user with data that other companies might have), so that it could associate each applicant’s Rikunabi browsing activity with their applicant ID and single out individuals.
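To make the mechanics concrete, here is a minimal sketch in Python of how one shared event, like the web survey, can tie two pseudonymous ID sets together and let a third party join datasets that were each nameless on their own. Every identifier, field, and value below is invented for illustration; this is not Recruit’s actual code or data.

    # Browsing profiles held by the scoring company, keyed by Rikunabi
    # cookie ID -- no names attached. (All values are hypothetical.)
    browsing_by_rikunabi_cookie = {
        "rk-cookie-111": {"searches": ["Toyota", "Mitsubishi"], "visits": 42},
        "rk-cookie-222": {"searches": ["electronics makers"], "visits": 7},
    }

    # Each survey response arrives with both the employer's applicant ID
    # and the Rikunabi cookie, because both are readable from the same
    # browser session. This is the event that links the two ID sets.
    survey_responses = [
        {"rikunabi_cookie": "rk-cookie-111", "applicant_id": "client-a-0042"},
        {"rikunabi_cookie": "rk-cookie-222", "applicant_id": "client-a-0043"},
    ]

    # The join: applicant IDs now index full browsing histories, even
    # though no single dataset ever contained a real name.
    profiles_by_applicant = {
        r["applicant_id"]: browsing_by_rikunabi_cookie[r["rikunabi_cookie"]]
        for r in survey_responses
        if r["rikunabi_cookie"] in browsing_by_rikunabi_cookie
    }

    print(profiles_by_applicant["client-a-0042"])
    # {'searches': ['Toyota', 'Mitsubishi'], 'visits': 42}

Each client company, holding its own table of applicant IDs and real names, needs only one more join of exactly this shape to put names back on the profiles.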

With the linked database, Recruit Communications put the data to work. It trained a machine learning model to look at a user’s Rikunabi browsing history and then predict whether that user would accept or reject a job offer from a particular company.
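Recruit Career’s disclosure does not describe the model itself, so the following is only a sketch of what such a scoring pipeline could look like, using scikit-learn and entirely invented features and labels:

    # Hypothetical sketch of a job-offer-decline scorer. The real model,
    # features, and training data were not disclosed.
    from sklearn.linear_model import LogisticRegression

    # Each row of invented features: [views of this employer's listings,
    # views of competitors' listings, searches in unrelated industries].
    # Labels: 1 = a past applicant declined the offer, 0 = accepted.
    X_train = [
        [12, 3, 1],
        [1, 15, 9],
        [8, 4, 2],
        [0, 22, 12],
    ]
    y_train = [0, 1, 0, 1]

    model = LogisticRegression().fit(X_train, y_train)

    # Score a new applicant: a probability of declining, which would
    # then be delivered to the client keyed only by applicant ID.
    decline_probability = model.predict_proba([[2, 18, 7]])[0][1]
    print(f"decline score: {decline_probability:.2f}")

A score like this looks authoritative to the client receiving it regardless of how accurate it actually is, which is part of what made the scheme so dangerous for applicants.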

Recruit Communications then delivered those scores, associated with applicant IDs, back to the client companies. Since each client had its own database linking its applicant IDs to real identities, client companies could easily associate the scores they received from Recruit Communications with the real names of job applicants. And the job seekers who entrusted their data to Rikunabi? Without their knowledge or consent, the site’s operator and its sister company, in collaboration with Rikunabi’s clients, had created a system that may have cost them a job offer by inaccurately predicting what jobs or companies they were interested in.

Why Do It Like This?

The APPI prohibits businesses from sharing a user’s personal data without prior consent. So, if Recruit Career had delivered scores linked to applicants’ names, it would have been required to get users’ consent to process their information in that way.

The APPI doesn’t regard cookies or similar machine-generated identifiers as personal data if the company handling them cannot itself readily collate them with other data sets to identify a person. Because Recruit Communications was provided only with data disconnected from names and other personal identifiers, it was, by design, systematically unable to collate the data with other information to identify individuals. Thus, under the APPI, Recruit Communications was not collecting, processing, or providing any personal data, and had no need to get user consent to calculate and deliver algorithmic scores to client companies.

This data laundering scheme appears to have been designed to ensure that the whole program was technically legal, even without users’ consent. But because Recruit Career knew that client companies could easily associate the scores linked to each applicant ID with applicants’ real names, the Japanese data protection authority, the Personal Information Protection Commission, found that it had engaged in “very inappropriate services, which circumvented the spirit of the law,” and ordered the company to improve privacy protections.

The 2020 APPI Amendment Closed Some Loopholes, But Others Remain 

After the scandal, the APPI was amended in June 2020. When the amended law goes into effect by early 2022, it will require a company transferring a cookie or similar machine-generated identifier to confirm beforehand whether the recipient of the data can identify an individual by combining it with other information the recipient holds. When that is the case, the new APPI requires the transferring company to ensure that the recipient has obtained users’ prior consent for the collection of personal data. Rikunabi’s scheme would violate the 2020 amendment unless Recruit Communications, knowing full well that clients could combine the data it provided with data they already held to identify individuals, had confirmed with clients, before transferring the algorithmic scores, that they had obtained users’ prior consent.

But even after the 2020 amendment, the APPI does not classify a cookie as personal data, even when it can be combined with the dossiers of behavioral data so often associated with it. This is a mistake. Cookies and similar machine-generated identifiers (like mobile ad IDs) are the linchpins that enable widespread online tracking and profiling. Cookies are used to link behavior from different websites to a single user, allowing trackers to connect huge swaths of a person’s life into a single profile. Just because a cookie isn’t directly linked to a person’s real identity doesn’t make that profile any less sensitive. And thanks to the data broker industry, cookies can often be linked to real identities with relative ease: a slew of “identity resolution” service providers sell trackers the ability to link pseudonymous cookie IDs to mobile phones, email addresses, or real names.

Hinako Sugiyama

Outliving Outrage on the Public Interest Internet: the CDDB Story

1 month ago

This is the third in our blog series on the public interest internet: past, present and future.

In our previous blog post, we discussed how in the early days of the internet, regulators feared that without strict copyright enforcement and pre-packaged entertainment, the new digital frontier would be empty of content. But the public interest internet barn-raised to fill the gap—before the fledgling digital giants commercialised and enclosed those innovations. These enclosures did not go unnoticed, however—and some worked to keep the public interest internet alive.

Compact discs (CDs) were the cutting edge of the digital revolution a decade before the web. Their adoption initially followed Lehman’s rightsholder-led transition – where existing publishers led the charge into a new medium, rather than the user-led homesteading of the internet. The existing record labels maintained control of CD production and distribution, and did little to exploit the new tech—but they did profit from bringing their old back catalogues onto the new digital format. The format was immensely profitable, because everyone re-bought their existing vinyl collections to move them onto CD. Beyond the improved fidelity of CDs, the music industry had no incentive to add new functionality to CDs or their players. When CD players were first introduced, they were sold exclusively as self-contained music devices—a straight-up replacement for record players that you could plug into speakers or your hi-fi “music centre,” but not much else. They were digital, but in no way online or integrated with any other digital technology.

The exception was the CD-playing hardware incorporated into the latest multimedia PCs—a repurposing of the dedicated music-playing hardware which could send the CD’s contents to the PC as a pile of digital data. With this tech, you could use CDs as a read-only data store, a fixed set of data, a “CD-ROM”; or you could insert a music CD, and use your desktop PC to read in and play its digital audio files through tinny desktop speakers, or headphones.

The crazy thing was that those music CDs contained raw dumps of audio, but almost nothing else. There was no bonus artist info stored on the CDs: no digital record of the CD title, no digital image of the album cover, not even a user-readable filename or two—just 74 minutes of untitled digital sound data, split into separate tracks, like its vinyl forebear. Consequently, a PC with a CD player could read and play a CD, but had no idea what it was playing. About the only additional information a computer could extract from the CD beyond the raw audio was the total number of tracks, and how long each track lasted. Plug a CD into a player or a PC, and all it could tell you was that you were now listening to Track 3 of 12.

Around about the same time as movie enthusiasts were building the IMDb, music enthusiasts were solving this problem by collectively building their own compact disc database—the CD Database (CDDB). Programmer Ti Kan wrote open source client software that would auto-run when a CD was put into a computer, and grab the number of tracks and their lengths. This client would query a public online database (designed by another coder, Steve Scherf) to see if anyone else had seen a CD with the same fingerprint. If no one had, the program would pop up a window asking the PC user to enter the album details themselves, and would upload that information to the collective store, ready for the next user to find. All it took was one volunteer to enter the album info and associate it with the unique fingerprint of track durations, and every future CDDB client owner could grab the data and display it the moment the CD was inserted, letting its user pick tracks by name, peruse artist details, and so on.
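That fingerprint is simple enough to sketch in a few lines of Python. The sketch below follows the widely documented CDDB disc ID algorithm, slightly simplified: real clients computed it from frame offsets on the disc (75 frames per second) rather than whole seconds.

    # Compute a CDDB-style disc ID: a 32-bit fingerprint derived only
    # from the track start times, which is all a drive can read.
    def cddb_disc_id(track_starts_seconds, disc_length_seconds):
        def digit_sum(n):
            return sum(int(d) for d in str(n))

        # Checksum over the digits of every track's start time.
        checksum = sum(digit_sum(s) for s in track_starts_seconds)
        playing_time = disc_length_seconds - track_starts_seconds[0]
        n_tracks = len(track_starts_seconds)
        return (checksum % 0xFF) << 24 | playing_time << 8 | n_tracks

    # A hypothetical three-track disc: tracks start at 2s, 187s, and
    # 415s, and the disc is 601 seconds long.
    print(f"{cddb_disc_id([2, 187, 415], 601):08x}")  # 1c025703

Because the ID is built from nothing but track timings, two different albums could occasionally collide, which is why the lookup protocol also sent the full list of track offsets along with each query.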

The modern internet, buffeted as it is by monopolies, exploitation, and market and regulatory failure, still allows people to organize at low cost, with high levels of informality.

When it started, most users of the CDDB had to precede much of their music-listening time with a short burst of volunteer data entry. But within months, the collective contributions of the Internet’s music fans had created a unique catalogue of current music that far exceeded the information contained even in expensive, proprietary industry databases. Deprived of any useful digital accommodations by the music industry, CD fans, armed with the user-empowering PC and the internet, built their own solution.

This story, too, does not have a happy ending. In fact, in some ways the CDDB is the most notorious tale of enclosure on the early Net. Kan and Scherf soon realised the valuable asset they were sitting on, and along with the hosting administrator of the original database server, built it into a commercial company, just as the overseers of Cardiff’s movie database had. Between 2000 and 2001, as “Gracenote”, this commercial company shifted from a free service, incorporated by its many happy users into a slew of open source players, to one serving hardware companies, whom it charged for a CD recognition service. It changed its client software to a closed proprietary software license, attached restrictive requirements to any code that used its API, and eventually blocked clients who did not agree to its license entirely.

The wider CDDB community was outraged, and the bitterness persisted online for years afterwards. Five years later, Scherf defended his actions in a Wired magazine interview. His explanation was the same as IMDb’s founders’: that finding a commercial owner and business model was the only way to fund CDDB as a viable ongoing concern. He noted that other groups of volunteers, notably an alternative service called freedb, had forked the database and client code from a point just before Gracenote locked it up. He agreed that was their right, and encouraged them to keep at it, but expressed scepticism that they would survive. “The focus and dedication required for CDDB to grow could not be found in a community effort,” he told Wired. “If you look at how stagnant efforts like freedb have been, you’ll see what I mean.” By locking down and commercializing CDDB, Scherf said, he “fully expect[ed] our disc-recognition service to be running for decades to come.”

Scherf may have overestimated the lifetime of CDs, and underestimated the persistence of free versions of the CDDB. While freedb closed last year, Gnudb, an alternative derived from freedb, continues to operate. Its far smaller set of contributors doesn’t cover as much of the latest CD releases, but its data remains open for everyone to use—not just for the remaining CD diehards, but also as a permanent historical record of the CD era’s back catalogue: its authors, its releases, and every single track. Publicly available, publicly collected, and publicly usable, in perpetuity. Whatever criticisms might be laid at the feet of this form of the public interest internet, fragility is not one of them. It hasn’t changed much, which may count as stagnation to Scherf—especially compared to the multi-million dollar company that Gracenote has become. But as Gracenote itself was bought up (first by Sony, then by Nielsen), re-branded, and re-focused, its predecessor has distinctly failed to disappear.

Some Internet services do survive and prosper by becoming the largest, or by being bought by the largest. These success stories are very visible, if not organically, then because they can afford marketers and publicists. If we listen exclusively to these louder voices, our assumption would be that the story of the Internet is one of consolidation and monopolization. And if—or perhaps just when—these conglomerates go bad, their failings are just as visible.

But smaller stories, successful or not, are harder to see. When we dive into this area, things become more complicated. Public interest internet services can be engulfed and transformed into strictly commercial operations, but they don’t have to be. In fact, they can persist and outlast their commercial cousins.

And that’s because the modern internet, buffeted as it is by monopolies, exploitation, and market and regulatory failure, still allows people to organize at low cost, with high levels of informality, in a way that can often be more efficient, flexible, and antifragile than strictly commercial, private-interest services, or the centrally-planned government production of public goods.

Next time: we continue our look at music recognition, and see how public interest internet initiatives can not only hang on as long as their commercial rivals, but continue to innovate, grow, and financially support their communities.

Danny O'Brien

The Enclosure of the Public Interest Internet

1 month ago

This is the second in our blog series on the public interest internet: past, present and future.

It’s hard to believe now, but in the early days of the public internet, the greatest worry of some of its most high-powered advocates was that it would be empty. As the Clinton administration prepared to transition the internet from its academic and military origins to the heart of the promised “national information infrastructure” (NII), the government’s advisors fretted that the United States entertainment and information industries would have no commercial reason to switch from TV, radio, and recorded music. And without Hollywood and the record labels on board, the new digital environment would end up as a ghost mall, devoid of businesses or users.

 “All the computers, telephones, fax machines, scanners, cameras, keyboards, televisions, monitors, printers, switches, routers, wires, cables, networks and satellites in the world will not create a successful NII, if there is not content”, former Patent Office head Bruce Lehman’s notorious 1994 government green paper on intellectual property on the Net warned. The fear was that without the presence of the pre-packaged material of America’s entertainment industry, the nation would simply refuse to go online. As law professor Jessica Litman describes it, these experts’ vision of the Internet was “a collection of empty pipes, waiting to be filled with content.” 

Even as the politicians were drafting new, more punitive copyright laws intended to reassure Hollywood and the record labels (and tempt them into new, uncharted waters), the Internet’s first users were moving in and building anyway. Even with its tiny audience of technologists, first-adopters, and university students, the early net quickly filled with compelling “content,” a  free-wheeling, participatory online media that drew ever larger crowds as it evolved.

Even in the absence of music and movies, the first net users built towers of information about them anyway. In rec.arts.movies, the Usenet discussion forum devoted to all things Hollywood, posters had been compiling and sharing lists of their favourite motion picture actors, directors, and trivia since the 1980s. By the time of the Lehman report, the collective knowledge of the newsgroup had outgrown its textual FAQs, and expanded first to a collectively-managed database on Colorado University’s file site, and then onward to one of the very first database-driven websites, hosted on a spare server at Wales’ Cardiff University.

Built in the same barn-raising spirit of the early net, the public interest internet exploits the low cost of organizing online to provide stable, free repositories of user-contributed information. They have escaped an exploited fate as proprietary services owned by a handful of tech giants.

These days, you’ll know that Cardiff Movie Database by another name – the IMDb. The database that had grown out of the rec.arts.movies contributions was turned into a commercial company in 1996 and sold to Amazon in 1998 for around $55 million (equivalent to $88 million today). The Cardiff volunteers, led by one of its original moderators, Col Needham, continued to run the service as salaried employees of an Amazon subsidiary.

The IMDb shows how the original assumptions of Internet growth were turned on their head. Instead of movie production companies leading the way, their own audience had successfully built and monetised the elusive “content” of the information superhighway by themselves—for themselves. The data of the rec.arts.movies database was used by Amazon as the seed of an exclusive subscription service, IMDbPro, for movie business professionals, and to augment its Amazon Prime video streaming service with quick-access film facts. Rather than needing the movie moguls’ permission to fill the Internet, the Internet ended up supplying information that those moguls themselves happily paid a new, digital mogul for.

But what about those volunteers who gave their time and labor to the collective effort of building this database for everyone? Apart from the few who became employees and shareholders of the commercial IMDb, they didn’t get a cut of the service’s profits. They also lost access to the full fruits of that comprehensive movie database. While you can still download the updated core of the Cardiff database for free, it only covers the most basic fields of the IMDb. It is licensed under a strictly non-commercial license, fenced off with limitations and restrictions. No matter how much you might contribute to the IMDb, you can’t profit from your labor. The deeper info originally built from user contributions and supplemented by Amazon has been enclosed: shut away in a proprietary, paywalled property, gated off from the super-highway it rode in on.

It’s a story as old as the net is, and echoes historic stories of the enclosure of the commons. A pessimist would say that this has been the fate of much of the early net and its aspirations. Digital natives built, as volunteers, free resources for everyone. Then, struggling to keep them online in the face of the burdens of unexpected growth, they ended up selling up to commercial interests. Big Tech grew to its monopoly position by harvesting this public commons, and then locking it away.

But it’s not the only story from the early net. Everyone knows, too, the large public projects that somehow managed to steer away from this path. Wikipedia is the archetype, still updated by casual contributors and defiantly unpaid editors across the world, with the maintenance costs of its website comfortably funded by regular appeals from its attached non-profit. Less known, but just as unique, is OpenStreetMap (OSM), a user-built, freely-licensed alternative to Google Maps, which, from public domain sources and the hard work of its volunteer cartographers, has compiled one of the most comprehensive maps of the entire earth.

These are flagships of what we at EFF call the public interest internet. They produce and constantly replenish priceless public goods, available for everyone, while remaining separate from governments, the traditional maintainers of public goods. Neither are they commercial enterprises, creating private wealth and (one hopes) public benefit through the incentive of profit. Built in the same barn-raising spirit of the early net, the public interest internet exploits the low cost of organizing online to provide stable, free repositories of user-contributed information. Through careful stewardship, or unique advantages, they have somehow escaped an enclosed and exploited fate as proprietary services owned by a handful of tech giants.

That said, while Wikipedia and OSM are easy, go-to examples of the public interest internet, they are not necessarily representative of it. Wikipedia and OSM, in their own way, are tech giants too. They run at the same global scale. They struggle with some of the same issues of accountability and market dominance. It’s hard to imagine a true competitor to Wikipedia or OSM emerging now, for instance—even though many have tried and failed. Their very uniqueness means that their influence is outsized. The remote, in-house politics at these institutions has real effects on the rest of society. Both Wikipedia and OSM have complex, often carefully negotiated, large-scale interactions with the tech giants. Google integrates Wikipedia into its searches, cementing the encyclopedia’s position. OSM is used by, and receives contributions from, Facebook and Apple. It can be hard to know how individual contributors or users can affect the governance of these mega-projects or change the course of them. And there’s a recurring fear that the tech giants have more influence than the builders of these projects.

Besides, if there’s really only a handful of popular examples of public good production by the public interest internet, is that really a healthy alternative to the rest of the net? Are these just crocodiles and alligators, a few visible survivors from a previous age of out-evolved dinosaurs, doomed to be ultimately outpaced by sprightlier commercial rivals?

At EFF, we don’t think so. We think there’s a thriving economy of smaller public interest internet projects, which have worked out their own ways to survive on the modern internet. We think they deserve a role and representation in the discussions governments are having about the future of the net. Going further, we’d say that the real dinosaurs are our current tech giants. The small, sprightly, and public-minded public interest internet has always been where the benefits of the internet have been concentrated. They’re the internet’s mammalian survivors, hiding out in the nooks of the net, waiting to take back control when the tech giants are history.

In our next installment, we take a look at one of the most notorious examples of early digital enclosure, its (somewhat) happier ending, and what it says about the survival skills of the public interest internet when a free database of compact discs outlasts the compact disc boom itself.

Danny O'Brien

Introducing the Public Interest Internet

1 month ago

Say the word “internet” these days, and most people will call to mind images of Mark Zuckerberg and Jeff Bezos, of Google and Twitter: sprawling, intrusive, unaccountable. This tiny handful of vast tech corporations and their distant CEOs demand our online attention and dominate the offline headlines. 

But on the real internet, one or two clicks away from that handful of conglomerates, there remains a wider, more diverse, and more generous world. Often run by volunteers, frequently without any obvious institutional affiliation, sometimes tiny, often local, but free for everyone online to use and contribute to, this internet preceded Big Tech, and inspired the earliest, most optimistic vision of its future place in society.

When Big Tech is long gone, a better future will come from the seed of this public interest internet: seeds that are being planted now, and which need everyone to nurture them. 

The word “internet” has been so effectively hijacked by its most dystopian corners that it’s grown harder to even refer to this older element of online life, let alone bring it back into the forefront of society’s consideration. In his work documenting this space and exploring its future, academic, entrepreneur, and author Ethan Zuckerman has named it our “digital public infrastructure.” Hana Schank and her colleagues at the New America think tank have revitalized discussions around what they call “public interest technology.”  In Europe, activists, academics and public sector broadcasters talk about the benefits of the internet’s “public spaces” and improving and expanding the “public stack.” Author and activist Eli Pariser has dedicated a new venture to advancing better digital spaces—what its participants describe as the “New Public”.

Not to be outdone, we at EFF have long used our own internal term: “the public interest internet.” While these names don’t all point to exactly the same phenomenon, they each capture some aspect of the original promise of the internet. Over the last two decades, that promise has largely disappeared from wider consideration. By fading from view, it has grown underappreciated, underfunded, and largely undefended. Whatever you might call it, we see it as our mission not just to act as the public interest internet’s legal counsel when it is under threat, but also to champion it when it goes unrecognized.

This blog series, we hope, will serve as a guided tour of some of the less visible parts of the modern public interest internet. None of the stories here, of organizations, collectives, and ongoing projects, have grabbed the attention of the media or congressional committees (at least, not as effectively as Big Tech and its moguls). Nonetheless, they remain just as vital a part of the digital space. They not only better represent the spirit and vision of the early internet; they also underlie much of its continuing success, a renewable resource that tech monopolies and individual users alike continue to draw from.

When Big Tech is long gone, a better future will come from the seed of this public interest internet: seeds that are being planted now, and which need everyone to nurture them until they’re strong enough to sustain our future in a more open and free society. 

But before we look into the future, let’s take a look at the past, to a time when the internet was made from nothing but the public—and because of that, governments and corporations declared that it could never prosper.

This is the introduction to our blog series on the public interest internet.

Danny O'Brien

Surveillance Self-Defense Playlist: Getting to Know Your Phone

1 month ago

We are launching a new Privacy Breakdown of Mobile Phones "playlist" on Surveillance Self-Defense, EFF's online guide to defending yourself and your friends from surveillance by using secure technology and developing careful practices. This guided tour walks through the ways your phone communicates with the world, how your phone is tracked, and how that tracking data can be analyzed. We hope to reach everyone from those who may have a smartphone for the first time, to those who have had one for years and want to know more, to savvy users who are ready to level up.

The operating systems (OS) on our phones weren’t originally built with user privacy in mind, nor are they fully optimized to keep threatening services at bay. Along with the phone’s software, different hardware components have been added over time to make the average smartphone a Swiss army knife of capabilities, many of which can be exploited to invade your privacy and threaten your digital security. This new resource attempts to map out those hardware and software components, the relationships between the two, and the threats they can create. Those threats can come from individual malicious hackers and organized groups all the way up to government-level professionals. This guide will help users understand a wide range of topics relevant to mobile privacy, including:

  • Location Tracking: Your phone can be tracked through more than just GPS; cellular data and WiFi can reveal your location as well. Find out the various ways your phone identifies your location.
  • Spying on Mobile Communications: The systems our phone calls were built on were based on a model that didn’t prioritize hiding information. That means targeted surveillance is a risk.
  • Phone Components and Sensors: Today’s modern phone can contain four or more kinds of radio transmitters and receivers, including WiFi, Bluetooth, cellular, and GPS.
  • Malware: Malicious software, or malware, can alter your phone in ways that make spying on you much easier.
  • Pros and Cons of Turning Your Phone Off: Turning your phone off can provide a simple defense against surveillance in certain cases, but the time and place your phone went dark can itself be correlated and revealing.
  • Burner Phones: Sometimes portrayed as a tool of criminals, burner phones are also often used by activists and journalists. Know the do's and don’ts of having a “burner.”
  • Phone Analysis and Seized Phones: When your phone is seized and analyzed by law enforcement, certain patterns and analysis techniques are commonly used to draw conclusions about you and your phone use.

This isn’t meant to be a comprehensive breakdown of CPU architecture in phones, but rather of the capabilities that affect your privacy more frequently, whether that is making a phone call, texting, or using navigation to get to a destination you have never been to before. We hope to give the reader a bird’s-eye view of how that rectangle in your hand works, take away the mystery behind specific privacy and security threats, and empower you with information you can use to protect yourself.

EFF is grateful for the support of the National Democratic Institute in providing funding for this security playlist. NDI is a private, nonprofit, nongovernmental organization focused on supporting democracy and human rights around the world. Learn more by visiting https://NDI.org.

Alexis Hancock

Foreign Intelligence Surveillance Court Rubber Stamps Mass Surveillance Under Section 702 - Again

1 month ago

As someone once said, “the Founders did not fight a revolution to gain the right to government agency protocols.” Well, it was not just someone: it was Chief Justice John Roberts, flatly rejecting the government’s claim that agency protocols could solve the Fourth Amendment violations created by police searches of our communications stored in the cloud and accessible through our phones.

Apparently, the Foreign Intelligence Surveillance Court (FISC) didn’t get the memo. That’s because, in a recently declassified decision from November 2020, the FISC again found that a series of overly complex but ultimately Swiss-cheese agency protocols -- which are admittedly not even being followed -- resolve the Fourth Amendment problems caused by the massive governmental seizures and searches of our communications currently occurring under FISA Section 702. The annual review by the FISC is required by law -- it’s supposed to ensure that both the policies and the practices of mass surveillance under Section 702 are sufficient. It failed on both counts.

The protocols themselves are inherently problematic. The law only requires that intelligence officials “reasonably believe” the “target” of an investigation to be a foreigner abroad -- it is immaterial to the initial collection that there is an American, with full constitutional rights, on the other side of a communication.

Justice Roberts was concerned with a single phone seized pursuant to a lawful arrest. The FISC is apparently unconcerned when it rubber-stamps mass surveillance impacting, by the government’s own admission, hundreds of thousands of nonsuspect Americans.

What’s going on here?  

From where we sit, it seems clear that the FISC continues to suffer from a massive case of national security constitutional-itis. That is the affliction (not really, we made it up) whereby ordinarily careful judges sworn to defend the Constitution effectively ignore the flagrant Fourth Amendment violations that occur when the NSA, FBI (and to a lesser extent, the CIA and NCTC) misuse the justification of national security to spy on Americans en masse. And this malady means that even when the agencies completely fail to follow the court’s previous orders, they still get a pass to keep spying.

The FISC decision is disappointing on at least two levels. First, the protocols themselves are not sufficient to protect Americans’ privacy. They allow the government to tap into the Internet backbone and seize our international (and lots of domestic) communications as they flow by -- ostensibly to see if they have been targeted. This is itself a constitutional violation, as we have long argued in our Jewel v. NSA case. We await the Ninth Circuit’s decision in Jewel on the government’s claim that this spying that everyone knows about is too secret to be submitted for real constitutional review by a public adversarial court (as opposed to the one-sided review by the rubber-stamping FISC).  

But even after that, the protocols themselves are swiss cheese when it comes to protecting Americans. At the outset, unlike traditional foreign intelligence surveillance, under Section 702, FISC judges do not authorize individualized warrants for specific targets. Rather, the role of a FISC judge under Section 702 is to approve abstract protocols that govern the Executive Branch’s mass surveillance and then review whether they have been followed.  

The protocols themselves are inherently problematic. The law only requires that intelligence officials “reasonably believe” the “target” of an investigation to be a foreigner abroad -- it is immaterial to the initial collection that there is an American, with full constitutional rights, on the other side of a conversation whose communications are both seized and searched without a warrant. It is also immaterial if the individuals targeted turn out to be U.S. persons. This was one of the many problems that ultimately led to the decommissioning of the Call Detail Records program: despite being Congress’ attempt to rein in the mass surveillance that began under Section 215 of the Patriot Act, that program still swept up communications metadata in bulk, including illegally and inadvertently collecting millions of call detail records of Americans.

Next, the protocols allow collection for any “foreign intelligence” purpose, which is a much broader scope than merely searching for terrorists. The term encompasses information that, for instance, could give the U.S. an advantage in trade negotiations. Once these communications are collected, the protocols allow the FBI to use the information for domestic criminal prosecutions if related to national security. This is what Senator Wyden and others in Congress have rightly called a “backdoor” warrantless search. And those are just a few of the problems.

While the protocols are complex and confusing, the end result is that nearly all Americans have their international communications seized initially and a huge number of them are seized and searched by the FBI, NSA, CIA and NCTC, often multiple times for various reasons, all without individual suspicion, much less a warrant.

Second, the government agencies -- especially the FBI -- apparently cannot be bothered to follow even these weak protocols. This means that in practice, we users don’t even get that minimal protection. The FISC decision reports that the FBI has never limited its searches to just those related to national security. Instead, agents query the 702 system for investigations relating to health care fraud, transnational organized crime, violent gangs, domestic terrorism, public corruption, and bribery. And that’s in just the 7 FBI field offices reviewed. This is not a new problem, as the FISC notes, although the court once again seems to think that the FBI just needs to be told again to follow the rules and to do proper training (which it has failed to do for years). The court notes that it is likely that other field offices also did searches for ordinary crimes, but that the FBI also failed to do proper oversight, so we just don’t know how widespread the practice is.

A federal court would accept no such tomfoolery... Yet the FISC is perfectly willing to sign off on the FBI’s failures and the Bureau’s flagrant disregard of its own rulings year upon year.

Next, the querying system for this sensitive information had been designed to make it hard not to search the 702-collected data: it required agents to opt out (not in) of searching the 702 data, and then timed out that opt-out after only thirty minutes. And even then, agents could simply toggle “yes” to search 702-collected data, with no secondary check prior to those searches. This happened multiple times (that we know of), allowing searches without any national security justification. The FBI also continued to improperly conduct bulk searches -- large batch queries using multiple search terms without the written justifications required by the protocols. Even the FISC calls these searches “indiscriminate,” yet it reauthorized the program.

In her excellent analysis of the decision, Marcy Wheeler lists out the agency excuses that the Court accepted:

  • It took time for them to make the changes in their systems
  • It took time to train everyone
  • Once everyone got trained they all got sent home for COVID 
  • Given mandatory training, personnel “should be aware” of the requirements, even if actual practice demonstrates they’re not
  • FBI doesn’t do that many field reviews
  • Evidence of violations is not sufficient evidence to find that the program inadequately protects privacy
  • The opt-out system for FISA material — which is very similar to one governing the phone and Internet dragnet at NSA until 2011 that also failed to do its job — failed to do its job
  • The FBI has always provided national security justifications for a series of violations involving their tracking system where an Agent didn’t originally claim one
  • Bulk queries have operated like that since November 2019
  • He’s concerned but will require more reporting

And the dog also ate their homework. While more reporting sounds nice, that’s the same thing the court ordered last time, and the time before that. Reporting of problems should lead to something actually being done to stop the problems.

At this point, it’s just embarrassing. A federal court would accept no such tomfoolery from an impoverished criminal defendant facing years in prison. Yet the FISC is perfectly willing to sign off on the FBI’s and NSA’s failures and the agencies’ flagrant disregard of its own rulings, year upon year. Not all FISC decisions are disappointing. In 2017, we were heartened that another FISC judge was so fed up that the court issued requirements that led to the end of the “about” searching of collected upstream data, and even its partial destruction. And the extra reporting requirements do give us at least a glimpse into how bad things are that we wouldn’t otherwise have.

But this time the FISC has let us all down again. It’s time for the judiciary, whether part of the FISC or not, to inoculate itself against the habit of throwing out the Fourth Amendment whenever the Executive Branch invokes national security, particularly when the constitutional violations are so flagrant, long-standing, and pervasive. The judiciary needs to recognize mass spying as unconstitutional and stop what remains of it. Americans deserve better than this charade of oversight.

Related Cases: Jewel v. NSA
Cindy Cohn

The Florida Deplatforming Law is Unconstitutional. Always has Been.

1 month 1 week ago

Last week, the Florida Legislature passed a bill prohibiting social media platforms from “knowingly deplatforming” a candidate (the Transparency in Technology Act, SB 7072), on pain of a fine of up to $250k per day, unless, I kid you not, the platform owns a sufficiently large theme park. 

Governor DeSantis is expected to sign it into law, having called for laws like this; he cited social media platforms’ de-platforming of Donald Trump as an example of the political bias of what he called “oligarchs in Silicon Valley.” The law is not just about candidates: it also bans “shadow-banning” and cancels cancel culture by prohibiting the censoring of “journalistic enterprises,” with “censorship” including things like posting “an addendum” to content, i.e., fact checks.

This law, like similar previous efforts, is mostly performative, as it almost certainly will be found unconstitutional. Indeed, the parallels with a nearly 50-year-old compelled speech precedent are uncanny. In 1974, in Miami Herald Publishing Co. v. Tornillo, the Supreme Court struck down another Florida statute that attempted to compel the publication of candidate speech.

50 Years Ago, Florida's Similar "Right of Reply" Law Was Found Unconstitutional

At the time, Florida had a dusty “right of reply” law on the books, which had not really been used, giving candidates the right to demand that any newspaper that criticized them print the candidate’s reply to the newspaper’s charges, at no cost. The Miami Herald had criticized Florida House candidate Pat Tornillo and refused to carry Tornillo’s reply. Tornillo sued.

Tornillo lost at the trial court, but found some solace on appeal to the Florida Supreme Court.  The Florida high court held that the law was constitutional, writing that the “statute enhances rather than abridges freedom of speech and press protected by the First Amendment,” much like the proponents of today’s new law argue. 

So off the case went to the US Supreme Court. Proponents of the right of reply raised the same arguments used today—that government action was needed to ensure fairness and accuracy, because “the 'marketplace of ideas' is today a monopoly controlled by the owners of the market.”  

Like today, the proponents argued new technology changed everything. As the Court acknowledged in 1974, “[i]n the past half century a communications revolution has seen the introduction of radio and television into our lives, the promise of a global community through the use of communications satellites, and the specter of a ‘wired’ nation by means of an expanding cable television network with two-way capabilities.”  Today, you might say that a wired nation with two-way communications had arrived in the global community, but you can’t say the Court didn’t consider this concern.

You might wonder why the Florida Legislature would pass a law doomed to failure. Politics, of course.

The Court also accepted that the consolidation of major media meant “the dominant features of a press that has become noncompetitive and enormously powerful and influential in its capacity to manipulate popular opinion and change the course of events,” and acknowledged the development of what the court called “advocacy journalism,” eerily similar to the arguments raised today. 

Paraphrasing the arguments made in favor of the law, the Court wrote “The abuses of bias and manipulative reportage are, likewise, said to be the result of the vast accumulations of unreviewable power in the modern media empires. In effect, it is claimed, the public has lost any ability to respond or to contribute in a meaningful way to the debate on issues,” just like today’s proponents of the Transparency in Technology Act.

The Court was not swayed, not because this was dismissed as an issue, but because government coercion could not be the answer. “However much validity may be found in these arguments, at each point the implementation of a remedy such as an enforceable right of access necessarily calls for some mechanism, either governmental or consensual. If it is governmental coercion, this at once brings about a confrontation with the express provisions of the First Amendment.” There is much to dislike about content moderation practices, but giving the government more control is not the answer.

Even if one should decry the lack of responsibility of the media, the Court recognized “press responsibility is not mandated by the Constitution and like many other virtues it cannot be legislated.”  Accordingly, Miami Herald v. Tornillo reversed the Florida Supreme Court, and held the Florida statute compelling publication of candidates' replies unconstitutional.

Since Tornillo, courts have consistently applied it as binding precedent, including applying Tornillo to social media and internet search engines, the very targets of the Transparency in Technology Act (unless they own a theme park). Indeed, the compelled speech doctrine has even been used to strike down other attempts to counter perceived censorship of conservative speakers.1 

Given the strong parallels with Tornillo, you might wonder why the Florida Legislature would pass a law doomed to failure, costing the state the time and expense of defending it in court. Politics, of course. The legislators who passed this bill probably knew it was unconstitutional, but they may have seen political value in passing the base-pleasing statute and blaming the courts when it gets struck down.

Politics is also the reason for the much-ridiculed exception for theme park owners, which is actually a problem for the law itself. As the Supreme Court explained in Florida Star v. B.J.F., carve-outs like this make a bill even more susceptible to a First Amendment challenge as under-inclusive. Theme parks are big business in Florida, and the law’s definition of social media platform would otherwise fit Comcast (which owns Universal Studios’ theme parks), Disney, and even Legoland. Performative legislation is less politically useful if it attacks a key employer and economic driver of your state. The theme park exception has also raised all sorts of amusing possibilities for the big internet companies to address this law by simply purchasing a theme park, which could easily be less expensive than compliance, even with the required minimum of 25 acres and 1 million visitors per year. Much as Section 230 Land would be high on my own must-visit list, striking the law down is the better solution.

The Control that Large Internet Companies Have on our Public Conversations Is An Important Policy Issue

The law is bad, and the legislature should feel bad for passing it, but this does not mean that the control that the large internet companies have on our public conversations isn’t an important policy issue. As we have explained to courts considering the broader issue, if a candidate for office is suspended or banned from social media during an election, the public needs to know why, and the candidate needs a process to appeal the decision. And this is not just for politicians - more often it is marginalized communities that bear the brunt of bad content moderation decisions. It is critical that the social platform companies provide transparency, accountability and meaningful due process to all impacted speakers, in the US and around the globe, and ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of all users’ rights. 

This is why EFF and a wide range of non-profit organizations in the internet space worked together to develop the Santa Clara Principles, which call upon social media to (1) publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines; (2) provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension; and (3) provide a meaningful opportunity for timely appeal of any content removal or account suspension. 

  • 1. Provisions like the Transparency in Technology Act’s ban on addendums to posts (such as fact checks or links to authoritative sources) are not covered by the compelled speech doctrine, but rather fail as prior restraints on speech. We need not spend much time on that, as the Supreme Court has roundly rejected prior restraint.
Kurt Opsahl

Facebook Oversight Board Affirms Trump Suspension -- For Now

1 month 1 week ago

Today’s decision from the Facebook Oversight Board regarding the suspension of President Trump’s account — to extend the suspension for six months and require Facebook to reevaluate in light of the platform’s stated policies — may be frustrating to those who had hoped for a definitive ruling. But it is also a careful and needed indictment of Facebook’s opaque and inconsistent moderation approach that offers several recommendations to help Facebook do better, focused especially on consistency and transparency. Consistency and transparency should be the hallmarks of all content decisions. Too often, neither hallmark is met. Perhaps most importantly, the Board affirms that it cannot and should not allow Facebook to avoid its responsibilities to its users.  We agree.

The decision is long, detailed, and worth careful review. In the meantime, here’s our top-level breakdown:

Today’s decision affirms, once again, that no amount of “oversight” can fix the underlying problem.

First, while the Oversight Board rightly refused to make special rules for politicians (rules we have previously opposed), it did endorse special rules and procedures for “influential users” and newsworthy posts. These rules recognize that some users can cause greater harm than others. On a practical level, every decision to remove a post or suspend an account is highly contextual and often requires highly specific cultural competency. But we agree that special rules for influential users or highly newsworthy content require even greater transparency and the investment of substantial resources.

Specifically, the Oversight Board explains that Facebook needs to document all of these special decisions well; clearly explain how any newsworthiness allowance applies to influential accounts; and clearly explain how it cross-checks such decisions, including its rationale, standards, processes of review, and the criteria for determining which pages to include. Facebook should also report error rates and the thematic consistency of these determinations as compared with its ordinary enforcement procedures.

More broadly, the Oversight Board also correctly notes that Facebook's penalty system is unclear and that it must better explain its strikes and penalties process, and inform users of strikes and penalties levied against them.

We wholeheartedly agree, as the Oversight Board emphasized, that “restrictions on speech are often imposed by or at the behest of powerful state actors against dissenting voices and members of political oppositions” and that  “Facebook must resist pressure from governments to silence their political opposition.” The Oversight Board urged Facebook to treat such requests with special care. We would have also required that all such requests be publicly reported.

The Oversight Board also correctly noted the need for Facebook to collect and preserve removed posts. Such posts are important for preserving the historical record, as well as for human rights reporting, investigations, and accountability.

While today’s decision reflects a notable effort to apply an international human rights framework, we continue to be concerned that an Oversight Board that is US-focused in its composition is not best positioned to help Facebook do better. But the Oversight Board did recognize the international dimension of the issues it confronts, and endorsed the Rabat Plan of Action, from the United Nations Office of the High Commissioner for Human Rights, as a framework for assessing the removal of posts that may incite hostility or violence. It specifically did not apply the First Amendment, even though the events leading to the decision were focused in the US.

Overall, these are good recommendations and we will be watching to see if Facebook takes them seriously. And we appreciate the Oversight Board’s refusal to make Facebook’s tough decisions for it. If anything, though, today’s decision affirms, once again, that no amount of “oversight” can fix the underlying problem: Content moderation is extremely difficult to get right, particularly at Facebook scale.

Corynne McSherry

Proposed New Internet Law in Mauritius Raises Serious Human Rights Concerns

1 month 1 week ago

As debate continues in the U.S. and Europe over how to regulate social media, a number of countries—such as India and Turkey—have imposed stringent rules that threaten free speech, while others, such as Indonesia, are considering them. Now, a new proposal to amend Mauritius’ Information and Communications Technologies Act (ICTA) with provisions to install a proxy server to intercept otherwise secure communications raises serious concerns about freedom of expression in the country.

Mauritius, a democratic parliamentary republic with a population just over 1.2 million, has an Internet penetration rate of roughly 68% and a high rate of social media use. The country’s Constitution guarantees the right to freedom of expression but, in recent years, advocates have observed a backslide in online freedoms.

In 2018, the government amended the ICTA, imposing heavy sentences—as high as ten years in prison—for online messages that “inconvenience” the receiver or reader. The amendment was in turn utilized to file complaints against journalists and media outlets in 2019.

In 2020, as COVID-19 hit the country, the government levied a tax on digital services operating  in the country, defined as any service supplied by “a foreign supplier over the internet or an electronic network which is reliant on the internet; or by a foreign supplier and is dependent on information technology for its supply.”

The latest proposal to amend the ICTA has raised alarm bells amongst local and international free expression advocates, as it would enable government officials who have established instances of “abuse and misuse” to block social media accounts and track down users using their IP addresses.

The amendments are reminiscent of those in India and Turkey in that they seek to regulate foreign social media, but differ in that Mauritius—a far smaller country—lacks the leverage to force foreign companies to maintain a local presence. In a consultation paper on the amendments, proponents argue:

Legal provisions prove to be relatively effective only in countries where social media platforms have regional offices. Such is not the case for Mauritius. The only practical solution in the local context would be the implementation of a regulatory and operational framework which not only provides for a legal solution to the problem of harmful and illegal online content but also provides for the necessary technical enforcement measures required to handle this issue effectively in a fair, expeditious, autonomous and independent manner.

While some of the concerns raised in the paper—such as the fact that social media companies do not sufficiently moderate content in the country’s local language—are valid, the solutions proposed are disproportionate. 

A Change.org petition calling on local and international supporters to oppose the amendments notes that “Whether human … or AI, the system that will monitor, flag and remove information shared by users will necessarily suffer from conscious or unconscious bias. These biases will either be built into the algorithm itself, or will afflict those who operate the system.” 

Most concerning, however, is that the authorities wish to install a local proxy server that impersonates social media networks, fooling devices and web browsers into sending secure information to the local server instead of to the social media networks. This would effectively create an archive of the social media activity of every user in Mauritius before the traffic is re-sent to the networks’ servers. The plan fails to mention how long that information would be archived, or how user data would be protected from breaches.
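One consequence worth spelling out: an interception proxy of this kind cannot present a social media network’s real TLS certificate, so it must answer with its own. The short, hypothetical Python sketch below shows how a user could inspect which certificate issuer their connection actually sees; an unexpected issuer, or a validation error if the proxy’s certificate authority isn’t trusted by the device, would be a sign of interception. The hostname is just an example.

    # Check which certificate issuer a TLS connection to a site presents.
    import socket
    import ssl

    def cert_issuer(hostname, port=443):
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            # Raises ssl.SSLCertVerificationError if the presented
            # certificate doesn't chain to a trusted authority --
            # itself a warning sign of interception.
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # The issuer is a sequence of relative distinguished names.
        return dict(rdn[0] for rdn in cert["issuer"])

    print(cert_issuer("www.facebook.com"))

This is also why schemes like this generally require users to install a government-issued root certificate, or must break connections outright: modern browsers are built to refuse exactly this kind of impersonation.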

Local free expression advocates are calling on the ICTA authorities to “concentrate their efforts in ethically addressing concerns made by citizens on posts that already exist and which have been deemed harmful.” Supporters are encouraged to sign the Change.org petition or submit comments to the open consultation by emailing socialmediaconsultation@icta.mu before May 5, 2021.

Jillian C. York