Schools Can’t Punish Students for Off-Campus Speech, Including Social Media Posts, EFF Tells Supreme Court

2 months 1 week ago
Online Comments Made Outside School Are Fully Protected by the First Amendment

Washington, D.C.—The Electronic Frontier Foundation (EFF) urged the Supreme Court to rule that when students post on social media or speak out online while off campus, they are protected from punishment by school officials under the First Amendment—an important free speech principle amid unprecedented, troubling monitoring and surveillance of students’ online activities outside the classroom.

EFF, joined by the Brennan Center for Justice and the Pennsylvania Center for the First Amendment, said in a brief filed today that a rule the Supreme Court established in the 1960s allowing schools to punish students for what they say on campus in some limited circumstances should not be expanded to let schools regulate what students say in their private lives outside of school, including on social media.

“Like all Americans, students have free speech protections from government censorship and policing,” said EFF Stanton Fellow Naomi Gilens. “In the 1969 case Tinker v. Des Moines, the Supreme Court carved out a narrow exception to this rule, allowing schools to regulate some kinds of speech on campus only in limited circumstances, given the unique characteristics of the school environment. Interpreting that narrow exception to let schools punish students for speech uttered outside of school would dramatically expand schools’ power to police students’ private lives.”

In B.L. v. Mahanoy Area School District, the case before the court, a high school student who failed to make the varsity cheerleading squad posted a Snapchat selfie with text that said, among other things, “fuck cheer.” She shared the post over the weekend and outside school grounds—but one of her Snapchat connections took a screen shot and shared it with the cheerleading coaches, who suspended B.L. from the J.V. squad. The student and her family sued the school.

In a victory for free speech, the U.S. Court of Appeals for the Third Circuit issued a historic decision in the case, holding that the school’s limited power to punish students for disruptive speech doesn’t apply to off-campus speech, even if that speech is shared on social media and finds its way into school via other students’ smartphones or devices.

EFF also explained that protecting students’ off-campus speech, including on social media, is critical given the central role that the Internet and social media play in young people’s lives today. Not only do students use social media to vent their daily frustrations, as the student in this case did, but students also use social media to engage in politics and advocacy, from promoting candidates during the 2020 election to advocating for action on climate change and gun violence. Expanding schools’ ability to punish students would chill students from engaging online with issues they care about—an outcome that is antithetical to the values underlying the First Amendment.

“The Supreme Court should uphold the Third Circuit ruling and guarantee that schools can’t chill children and young people from speaking out in their private lives, whether at a protest, in an op-ed, in a private conversation, or online, including on social media,” said Gilens.

For the brief:
https://www.eff.org/document/eff-amicus-brief-bl-v-mahanoy

For more on free speech:
https://www.eff.org/issues/free-speech

 

Contact: Naomi Gilens, Frank Stanton Fellow, naomi@eff.org
Karen Gullo

Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know.

2 months 1 week ago

Update, April 9, 2021: We've launched Am I FLoCed, a new site that will tell you whether your Chrome browser has been turned into a guinea pig for Federated Learning of Cohorts, or FLoC, Google’s latest targeted advertising experiment.

Today, Google launched an “origin trial” of Federated Learning of Cohorts (aka FLoC), its experimental new technology for targeting ads. A switch has silently been flipped in millions of instances of Google Chrome: those browsers will begin sorting their users into groups based on behavior, then sharing group labels with third-party trackers and advertisers around the web. A random set of users have been selected for the trial, and they can currently only opt out by disabling third-party cookies.

Although Google announced this was coming, the company has been sparse with details about the trial until now. We’ve pored over blog posts, mailing lists, draft web standards, and Chromium’s source code to figure out exactly what’s going on.

EFF has already written that FLoC is a terrible idea.  Google’s launch of this trial—without notice to the individuals who will be part of the test, much less their consent—is a concrete breach of user trust in service of a technology that should not exist.

Below we describe how this trial will work, and some of the most important technical details we’ve learned so far.

FLoC is supposed to replace cookies. In the trial, it will supplement them.

Google designed FLoC to help advertisers target ads once third-party cookies go away. During the trial, trackers will be able to collect FLoC IDs in addition to third-party cookies. 

That means all the trackers who currently monitor your behavior across a fraction of the web using cookies will now receive your FLoC cohort ID as well. The cohort ID is a direct reflection of your behavior across the web. This could supplement the behavioral profiles that many trackers already maintain.

The trial will affect up to 5% of Chrome users worldwide.

We’ve been told that the trial is currently deployed to 0.5% of Chrome users in some regions—for now, that means Australia, Brazil, Canada, India, Indonesia, Japan, Mexico, New Zealand, the Philippines, and the U.S. Users in eligible regions will be chosen completely at random, regardless of most ad and privacy settings. Only users who have turned off third-party cookies in Chrome will be opted out by default.

Furthermore, the team behind FLoC has requested that Google bump up the sample to 5% of users, so that ad tech companies can better train models using the new data. If that request is granted, tens or hundreds of millions more users will be enrolled in the trial.

Users have been enrolled in the trial automatically. There is no dedicated opt-out (yet).

As described above, a random portion of Chrome users will be enrolled in the trial without notice, much less consent. Those users will not be asked to opt in. In the current version of Chrome, users can only opt out of the trial by turning off all third-party cookies.

Future versions of Chrome will add dedicated controls for Google’s “privacy sandbox,” including FLoC. But it’s not clear when these settings will go live, and in the meantime, users wishing to turn off FLoC must turn off third-party cookies as well.

Turning off third-party cookies is not a bad idea in general. After all, cookies are at the heart of the privacy problems that Google says it wants to address. But turning them off altogether is a crude countermeasure, and it breaks many conveniences (like single sign-on) that web users rely on. Many privacy-conscious users of Chrome employ more targeted tools, including extensions like Privacy Badger, to prevent cookie-based tracking. Unfortunately, Chrome extensions cannot yet control whether a user exposes a FLoC ID.

Websites aren’t being asked to opt in, either.

FLoC calculates a label based on your browsing history. For the trial, Google will default to using every website that serves ads—which is the majority of sites on the web. Sites can opt out of being included in FLoC calculations by sending an HTTP header, but some hosting providers don’t give their customers direct control of headers. Many site owners may not be aware of the trial at all.
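
To make that concrete, here is a minimal sketch of how a site operator could send the opt-out header from a Node.js server. The Express setup is our own assumption for illustration, and the header value shown is the one Google documented for the origin trial; site owners should confirm the current syntax before relying on it.

```typescript
// Minimal sketch (assumes Node.js with Express): attach the FLoC opt-out
// header to every response so Chrome excludes visits to this site from
// cohort calculations. Verify the header syntax against Google's docs.
import express from "express";

const app = express();

app.use((_req, res, next) => {
  res.setHeader("Permissions-Policy", "interest-cohort=()");
  next();
});

app.get("/", (_req, res) => {
  res.send("This site asks to be excluded from FLoC cohort calculations.");
});

app.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
```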

This is an issue because it means that sites lose some control over how their visitors’ data is processed. Right now, a site administrator has to make a conscious decision to include code from an advertiser on their page. Sites can, at least in theory, choose to partner with advertisers based on their privacy policies. But now, information about a user’s visit to that site will be wrapped up in their FLoC ID, which will be made widely available (more on that in the next section). Even if a website has a strong privacy policy and relationships with responsible advertisers, a visit there may affect how trackers see you in other contexts.

Each user’s FLoC ID—the label that reflects their past week’s browsing history—will be available to any website or tracker who wants it.

Anyone can sign up for Chrome’s origin trial. After that, they can access FLoC IDs for users who have been chosen for the trial whenever they can run JavaScript. This includes the vast ecosystem of nameless advertisers to whom your browser connects whenever you visit most ad-serving sites. If you’re part of the trial, dozens of companies may be able to gather your FLoC ID from each site you visit.
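
As a rough illustration of what that access looks like, the sketch below queries the cohort ID through the draft document.interestCohort() API described in the FLoC proposal. That interface is experimental, absent from standard TypeScript DOM typings, and may change or disappear, so treat this as an assumption-laden sketch rather than a stable API.

```typescript
// Sketch: how a script embedded on a page might read a visitor's FLoC cohort
// ID during the origin trial, using the draft document.interestCohort() API.
async function readCohortId(): Promise<string | null> {
  const doc = document as Document & {
    interestCohort?: () => Promise<{ id: string; version: string }>;
  };
  if (typeof doc.interestCohort !== "function") {
    return null; // Browser does not expose FLoC.
  }
  try {
    const cohort = await doc.interestCohort();
    return cohort.id; // The same label is visible to every site you visit.
  } catch {
    return null; // User opted out (e.g., blocked third-party cookies) or is not in the trial.
  }
}

readCohortId().then((id) => {
  console.log(id ? `FLoC cohort: ${id}` : "No FLoC cohort available.");
});
```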

There will be over 33,000 possible cohorts.

One of the most important details that the FLoC specification leaves undefined is exactly how many cohorts there will be. Google ran a preliminary experiment with 8-bit cohort IDs, which meant there were just 256 possible groups. This limited the amount of information trackers could learn from a user’s cohort ID.

However, an examination of the latest version of Chrome reveals that the live version of FLoC uses 50-bit cohort identifiers. The cohorts are then batched together into 33,872 total cohorts, over 100 times more than in Google’s first experiment. Google has said that it will ensure “thousands” of people are grouped into each cohort, so nobody can be identified using their cohort alone. But cohort IDs will still expose lots of new information—around 15 bits—and will give fingerprinters a massive leg up.
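
The “around 15 bits” estimate follows directly from the cohort count, assuming cohorts of roughly equal size; a quick back-of-the-envelope comparison with the earlier 256-cohort experiment:

```latex
% Information carried by a cohort ID, assuming roughly equal-sized cohorts:
\[
  \log_2(256) = 8 \ \text{bits}
  \qquad\text{vs.}\qquad
  \log_2(33{,}872) \approx 15.05 \ \text{bits}
\]
```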

The trial will likely last until July.

Any tracker, advertiser, or other third party can sign up through Google’s Origin Trial portal to begin collecting FLoCs from users. The page currently indicates that the trial may last until July 13. Google has also made it clear that the exact details of the technology—including how cohorts are calculated—will be subject to change, and we could see several iterations of the FLoC grouping algorithm between now and then.

Google plans to audit FLoC for correlations with “sensitive categories.” It’s still missing the bigger picture.

Google has pledged to make sure that cohorts aren’t too tightly correlated with “sensitive categories” like race, sexuality, or medical conditions. In order to monitor this, Google plans to collect data about which sites are visited by users in each cohort. It has released a whitepaper describing its approach. 

We’re glad to see a specific proposal, but the whitepaper sidesteps the most pressing issues. The question Google should address is “Can you target people in vulnerable groups?” The whitepaper reduces it to “Can you target people who visited a specific site?” This is a dangerous oversimplification. Rather than working on the hard problem, Google has chosen to focus on an easier version that it believes it can solve. Meanwhile, it has failed to address FLoC’s worst potential harms.

During the trial, any user who has turned on “Chrome Sync” (letting Google collect their browsing history), and who has not disabled any of several default sharing settings, will now share their cohort ID attached to their browsing history with Google. 

Google will then check whether each user visited any sites that it considers part of a “sensitive category.” For example, WebMD might be labeled in the “medical” category, or PornHub in the “adult” category. If too many users in one cohort have visited a particular kind of “sensitive” site, Google will block that cohort. Any users who are part of “sensitive” cohorts will be placed into an “empty” cohort instead. Of course, trackers will still be able to see that those users are part of the “empty” cohort, revealing that they were originally classified into some kind of “sensitive” group.
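
To illustrate the blocking rule described above, here is a deliberately simplified sketch. The threshold, the per-cohort counts, and the exact statistic are our own assumptions; Google's whitepaper describes a more sophisticated statistical test rather than a bare cutoff.

```typescript
// Simplified illustration of the "sensitive cohort" blocking rule described
// above. The 10% cutoff and the counts are hypothetical; Google's actual
// audit relies on Chrome Sync data and a more nuanced statistical measure.
interface CohortStats {
  cohortId: string;
  totalUsers: number;
  usersWhoVisitedSensitiveSites: number;
}

const EMPTY_COHORT_ID = "empty";
const MAX_SENSITIVE_FRACTION = 0.1; // hypothetical threshold

function effectiveCohort(stats: CohortStats): string {
  const fraction = stats.usersWhoVisitedSensitiveSites / stats.totalUsers;
  // Blocked cohorts collapse into the "empty" cohort, which itself signals
  // to trackers that these users were flagged as "sensitive" in some way.
  return fraction > MAX_SENSITIVE_FRACTION ? EMPTY_COHORT_ID : stats.cohortId;
}

// Example: a cohort where 18% of members visited "sensitive" sites is blocked.
console.log(
  effectiveCohort({ cohortId: "14159", totalUsers: 5000, usersWhoVisitedSensitiveSites: 900 })
); // -> "empty"
```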

For the origin trial, Google is relying on its massive cache of personalized browsing data to perform the audit. In the future, Google plans to use other privacy-preserving technology to do the same thing without knowing individuals’ browsing history.

Regardless of how Google does it, this plan won’t solve the bigger issues with FLoC: discrimination and predatory targeting. The proposal rests on the assumption that people in “sensitive categories” will visit specific “sensitive” websites, and that people who aren’t in those groups will not visit those sites. But behavior correlates with demographics in unintuitive ways. It’s highly likely that certain demographics are going to visit a different subset of the web than other demographics are, and that such behavior will not be captured by Google’s “sensitive sites” framing. For example, people with depression may exhibit similar browsing behaviors, but not necessarily via something as explicit and direct as visiting “depression.org.” Meanwhile, tracking companies are well-equipped to gather traffic from millions of users, link it to data about demographics or behavior, and decode which cohorts are linked to which sensitive traits. Google’s website-based system, as proposed, has no way of stopping that.

As we said before, “Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.” Google has failed to address the harms of FLoC, or even to convince us that they can be addressed. Instead, it's running a test that will share new data about millions of unsuspecting users. This is another step in the wrong direction.

Bennett Cyphers

Crowdfunding Legal Fees Is Not a Crime

2 months 1 week ago

A piece in USA Today describes how a number of Capitol Hill rioters are utilizing online fundraising platforms to raise funds to cover legal fees, only to find their accounts shut down. This is prompting an online discussion not only about when and how tech companies should shutter online accounts for those accused of illegal activity but also about what financial services should be available to those accused of a crime.

The piece seemingly conflates the crowdfunding of legal fees with “crowdfunding hate,” an argument that needs some unpacking.

The justification for banning right-wing extremists from using online platforms to raise funds for a legal defense stems from a concern that such donations will be used for activities that promote hate, violence, or racial intolerance. And yet the determination of what constitutes such activities is left to online platforms and service providers. That is, the behavior that people are demanding these companies regulate is not, in most cases, illegal. Without guidance from the law, the companies are left to make subjective decisions about who should be allowed to use their services—decisions that are increasingly informed by public pressure.

There is a long history of corporations denying services to a wide range of actors, and we’ve documented what happens when corporations are left to decide who is or isn’t worthy of raising funds. 

In 2010, WikiLeaks suffered an extra-legal financial blockade spurred on by unofficial government pressure, though they had not been charged with any crime in the United States. In other instances, online booksellers, fetish-oriented websites, and most recently, Pornhub, have fallen prey to the moral values of individual platforms and service providers.

In fact, those most often affected by the whims of payment processors are some of the most marginalized groups and individuals. Sex workers have reported being systematically redlined by PayPal, crowdfunding sites, and even Visa and MasterCard—even when attempting to use those services for ordinary transactions unrelated to their work.

Payment companies are also notoriously overbroad in their attempts to apply restrictions related to United States international sanctions, banning individuals for so much as referencing the name of a sanctioned country like Syria, Sudan, or Iran.

The Right to a Fair Trial

In the United States, every state operates what is called an “Interest on Lawyer Trust Account” (IOLTA) program. Lawyers are required to create a separate trust account to hold their clients’ funds, which then can be used to cover the cost of legal services. This money is held separately from the attorney’s own money. In fact, a common way individuals other than the defendant contribute money to someone’s legal defense is by contributing directly to the IOLTA account. In some circumstances, these accounts are pooled and the bank’s interest payments help support legal aid to the poor and to improve the justice system. But, IOLTA accounts are not ideal: the defendant must have already secured a lawyer before using that attorney’s IOLTA account.  Moreover, it can be complex to directly transfer money to the account, and attorneys may ask that money be sent through direct transfer using a routing number or by mailing a check.  For individuals who are still seeking legal aid, or for anyone who wants to set up online donations, crowdfunding platforms have filled a vital gap—creating an online fund that can then be sent to the IOLTA account once the attorney is retained. 

While it’s understandable that, in the wake of the Capitol Hill riot, people are grasping for ways to limit the spread of right-wing extremism, the demand that companies prevent individuals from raising funds for a legal defense is deeply troubling. Our adversarial judicial system is built on the idea that people—even guilty people—should have access to a fair trial with adequate representation. In a criminal case, that ideal is often elusive, with the government having nigh unlimited resources, especially for a high profile case, while most defendants have few resources at all.  

Demanding that companies deny individuals the ability to effectively raise funds for a defense attorney tilts the playing field from the start. People would be outraged if, for instance, protesters detained during last summer’s Black Lives Matter demonstrations were de-platformed by fundraising sites. But that’s precisely what can happen when these decisions are left to corporations with one eye on the bottom line.

Make no mistake: many of the Capitol Hill rioters violated the law, and will likely—and rightfully—face legal consequences for their actions. What was so reprehensible about the Capitol Hill riot was that it attempted to undermine one of the foundational pillars of our democracy: a fair and just election. But just as democratic elections are fundamental to our society, so is a functional and fair judicial system. When we erect barriers between a defendant and the ability to pay for adequate legal representation, we jeopardize that judicial system. And while many abhor the actions of the Capitol Hill rioters, we must remember that these systems will not only be used against people whose actions we find abhorrent. Indeed, whenever we analyze the censorship decisions of tech companies, we must remember that those most likely to suffer the consequences are those already marginalized in our society.

rainey Reitman

Local Franchising, Big Cities, and Fiber Broadband

2 months 1 week ago

In 2005, the Federal Communications Commission (FCC) made a foundational decision on how broadband competition policy would work with the entry of fiber to the home. In short, the FCC concluded that competition was growing, that government policy should defer to market forces, and that the era of communications monopoly was rapidly ending. The very next year, at the request of companies like Verizon and AT&T, some states, including California, passed laws that consolidated local “franchises” into single statewide franchises, making the same assumption the FCC did.

When you look at communities in the aggregate, you would be hard-pressed to find a single large American city where you couldn’t turn a profit with fiber.

They were wrong. To explore just how wrong, EFF and the Technology Law and Policy Center have published our newest white paper on the effects of these decisions. The research digs into New York, which decided to retain power at the local level, to see what we can learn from that state as we look to the future. The big takeaway is that large cities that do not have local franchise authority are losing out because they lack the negotiating leverage needed to push private fiber to all city residents, particularly low-income residents.

What Are Franchises and Why Did They Change in 2006?

Franchises are basically negotiated agreements between broadband carriers and local communities on how access will be provisioned—essentially, a license to do business in the area. These franchises exist because Internet service providers (ISPs) need access to taxpayer-funded rights-of-way infrastructure such as roads, sewers, and other means of traveling throughout a community (and, notably, it would be impossible for ISPs to bypass that existing public infrastructure). ISPs also benefit because these agreements conveniently lay out a roadmap for deploying broadband access. The public interest goal of a franchise is to reach an agreement that is fair to the taxpayers who funded the infrastructure.

Earlier franchises were used by the cable television industry to secure a monopoly in a city in exchange for widespread deployment. The city would agree to the monopoly as long as the cable company built out to everyone. This is very similar to the original agreement AT&T secured from the federal government back in the telephone monopoly era. The cable TV monopoly status was, in turn, used to secure preferential financing to fund the construction of the coaxial cable infrastructure that we still mostly use today (although much of that infrastructure has been converted to hybrid fiber/cable, with the cable part still connected to residential dwellings). The cable industry made the argument to the banks when securing funding that because it had no competitors in a community, it would be able to pay back corporate debt quite easily. That part, at least, worked very well; cable is widespread today in cities. Congress later abolished monopolies in franchising in 1992 with the passage of the Cable Television Consumer Protection and Competition Act, which also forced negotiation between cable and television broadcasters.

In 2005, Verizon, which was set to launch its FiOS fiber network, led a lobbying effort in Washington, D.C., to rethink local franchising power. The broadband industry as a whole was hoping for a simpler process than having to negotiate city by city (as cable had). After securing a massive deregulatory gift from the FCC’s decision to classify broadband as an information service not subject to the federal competition policy of the 1990s—or, as we found out years later in the courts, to net neutrality—the broadband industry wanted new rules for fiber to the home.

Local governments without full local franchise power are extraordinarily constrained when it comes to pushing private service providers.

The industry argument was that we were no longer in the monopoly era of communications (spoiler alert: we are very much in a monopoly era for broadband access), and that competition would do the work to meet the policy goals of universal, affordable, and open Internet services. Congress came very close to agreeing to eliminate franchise authority and take it away from local communities in 2006, but the bill was stalled in the Senate by a bipartisan pair of Senators who wanted net neutrality protections. In response to their loss in Congress, Verizon and others took their argument to the states. Just months later they secured statewide franchising in California. They failed to secure the same change in New York. The industry’s failure there gives us the opportunity to compare two very large states and big cities to see how both approaches have played out for communities.

New York Shows That Local Franchising Works for Big Cities

Local governments without full local franchise power are extraordinarily constrained when it comes to pushing private service providers. With franchise power, local problems are hashed out during negotiations, when communities and companies are considering the mutual benefit of expanded private broadband services in exchange for access to the taxpayer-funded public rights of way. Without it, those negotiations never take place.

As Verizon lobbied to eliminate local franchising, New York’s state legislature studied the issue thoroughly through its state public utilities commission (PUC). The PUC’s research noted that local power promotes local competition and addresses antitrust concerns with communications infrastructure, and that competition requires special attention to promote new, smaller entrants. It also concluded that public interest regulation tailored to large incumbents, aimed at preventing discrimination, is best done at the local level. After that research, New York decided not to eliminate local franchising, and it has stayed the course despite 16 years of lobbying by the big ISPs.

The benefits of this decision are clearly illustrated in New York City, which understood that its massive population, wealthy communities, business sector, and density would allow Verizon to deliver fiber to every single home for a profit. This was signed into a franchise in 2008. When Verizon discontinued its fiber service expansion in 2010, the city reminded the company that they had an agreement. Verizon tried to argue that the law’s requirement that service “pass” a home—commonly understood as meaning connecting the home—just meant that fiber was somewhere near the house, and argued that wireless broadband was the same. The city decided to take Verizon to court to enforce the franchise in 2014. While the litigation was lengthy, in November 2020 the city secured a settlement from Verizon to build another 500,000 fiber-to-the-home connections in low-income communities. Compare that to big California cities like Oakland and Los Angeles, where studies show rampant digital redlining of fiber against low-income people, particularly in Black neighborhoods.

Big Cities Should Be 100 Percent Fibered, But They Need Their Power Back 

Carriers often want policymakers to think about broadband deployment as an isolated house-by-house effort, rather than a community-wide or regional one. They do this to justify the digital redlining of fiber that happens across the country in cities that lack the power to stop it.

Our research shows that when you look at deployment on a per-household basis, some homes aren’t profitable to connect in isolation. But when you look at communities in the aggregate, you would be hard-pressed to find a single large American city where you couldn’t turn a profit with fiber. That only comes from averaging out your costs and averaging out your profits—and laws that prohibit socioeconomic discrimination in broadband deployment force averaging rather than isolating. If states want more private fiber in their big cities, it seems clear that allowing cities to negotiate on their own behalf is one powerful option that needs to be restored, particularly in California. It won’t solve the entire problem, but it can be a piece of how we get fiber to every home and business.

For a more in-depth look at the issue, read the whitepaper.

Ernesto Falcon

Stupid Patent of the Month: Telehealth Robots Say Goodbye

2 months 2 weeks ago

Before COVID-19, people living in rural and isolated areas urgently needed to access health care services remotely; now we all do. Thanks to decades of innovation in computing and telecommunications, more essential health care services are available electronically than ever before. But there’s no guarantee they will always be as accessible and affordable as they are today. Because the Patent Office keeps granting patents on old ways of using networked computers, it keeps gambling with the public’s future access to telehealth technology.

The term “telehealth” includes any way of using electronic information and telecommunications technology to provide or administer health care and related services, like electronic recordkeeping. It’s broader than “telemedicine,” which refers only to using telecommunications to replace in-person clinical care, as videoconferencing can. Both have become increasingly essential in the wake of COVID-19. According to one recently published study, telemedicine services grew by more than 1000% in March and more than 4000% in April of 2020. Although people are going to doctors in person again, the demand for telemedicine is still expected to grow.

Though the urgent need for telehealth is relatively new, the ability to make and use it is not. Our government has been developing and deploying these technologies for more than sixty years with early projects through the U.S. Space Program, including one involving the Tohono O’odham Indian Nation that demonstrated the feasibility of providing medical care via telecommunication in the 1970s. 

With so many pioneering advances in the past, it should be challenging to get new telehealth patents today. It should be even more challenging since the Supreme Court’s 2014 decision in Alice v. CLS Bank that generic computer technology cannot make an abstract idea eligible for patent protection. Unfortunately, our latest Stupid Patent of the Month shows the Patent Office is not doing its job. Instead of granting patents on new and useful inventions, it is rubber-stamping applications on anything under the sun without regard for the requirements of patent law or needs of the public during the present health crisis.

U.S. Patent No. 10,882,190 (’190 patent) wins March 2021’s Stupid Patent of the Month for the “Protocol for a Remotely Controlled Videoconferencing Robot,” granted on December 16, 2020. The owner is Teladoc Health, a publicly-traded, multinational telehealth provider. Since launching in 2002, Teladoc has  acquired numerous smaller entities. And in the middle of 2020, Teladoc acquired InTouch Technologies, along with its massive portfolio of telehealth patents and patent applications. Teladoc has not waited for COVID-19 to abate to assert several of its newly-acquired patents against smaller rival Amwell, a telemedicine provider with about 700 employees based in Boston.

The ’190 Patent has one claim for a “robot system that communicates through a broadband network.” The system comprises multiple “robots,” but for all the science fiction fantasies that word conjures, these robots can be any kind of computing device with a camera, monitor, and broadband interface. Whatever these robots are, they’re not claimed as the invention. The claimed invention is a system where at least one robot “generates a goodbye command that terminates a communication session” with another robot and “relinquishes control.”

A network of computers that can start and stop communicating with each other is not a patent-eligible invention; it’s just a computer network. As a result, there’s nothing in the ’190 Patent claim that could qualify as an invention attributable to the applicant rather than prior computing and telecommunications advances. But as we’ve explained, the Patent Office under former Director Andrei Iancu changed the rules to make it practically impossible for examiners to reject patents claiming generic computer systems even when Supreme Court precedent requires it.

The Patent Office should have rejected the ’190 Patent application for other reasons too. The application was filed on January 13, 2020, but that was not its priority date. If it were, the Patent Office could have considered technology from 2019 when deciding whether the application claimed something truly novel. But because the ’190 Patent was filed as a “continuation” of an older application filed on December 9, 2003 (and granted as a patent in 2010), it is treated as if filed on the same day, and gets the same early priority date. That means the Patent Office could only compare the application filed in 2020 to technology that existed before 2003, and had to ignore all intervening advances in the field. Scholars have reported on the outrageous continuation system for years. As long as it persists, patent owners will be able to extend their patent monopolies beyond twenty years.

Continuation abuse aside, the Patent Office had plenty of pre-2003 prior art to consider.

For example, Columbia University had for years deployed more than 400 home telemedicine units that provided synchronous videoconferencing over standard telephone lines, secure Web-based messaging and clinical data review, and access to Web-based educational materials. And as early as 1995, there were prior art robot systems in which multiple users could take turns controlling robots remotely.

Yet there’s no indication the Patent Office considered these prior art references or any others. In fact, there’s no sign the ’190 Patent got any substantive examination at all. The Patent Office granted the ’190 Patent in less than a year without leaving a trace of its reasoning. The examiner did not even issue a non-final rejection, which has long been standard practice for the vast majority of patent applications. We have long advocated for more rigorous patent examination procedures, but we have rarely seen a record of examination as deficient as this one.

The job of the Patent Office is not to simply grant patent applications on demand, but to examine patent applications so that only those compliant with the law become granted patents. When the Patent Office fails at this task, the public’s access to technology falters. Given the pressing need for telehealth access, we need a Director who will focus on getting examiners to apply the law correctly instead of pushing them to grant as many applications as possible.

Alex Moss

Even with Changes, the Revised PACT Act Will Lead to More Online Censorship

2 months 2 weeks ago

Among the dozens of bills introduced last Congress to amend a key internet law that protects online services and internet users, the Platform Accountability and Consumer Transparency Act (PACT Act) was perhaps the only serious attempt to tackle the problem of a handful of dominant online services hosting people’s expression online.

Despite the PACT Act’s good intentions, EFF could not support the original version because it created a censorship regime by conditioning the legal protections of 47 U.S.C. § 230 (“Section 230”) on a platform’s ability to remove user-generated content that others claimed was unlawful. It also placed a number of other burdensome obligations on online services.

To their credit, the PACT Act’s authors—Sens. Brian Schatz (D-HI) and John Thune (R-SD)—listened to EFF and others’ criticism of the bill and amended the text before introducing an updated version earlier this month. The updated PACT Act, however, contains the same fundamental flaws as the original: creating a legal regime that rewards platforms for over-censoring users’ speech. Because of that, EFF remains opposed to the bill.

Notwithstanding our opposition, we agree with the PACT Act’s sponsors that internet users currently suffer from the vagaries of Facebook, Google, and Twitter’s content moderation policies. Those platforms have repeatedly failed to address harmful content on their services. But forcing all services hosting user-generated content to increase their content moderation doesn’t address Facebook, Google, and Twitter’s dominance—in fact, it only helps cement their status. This is because only well-resourced platforms will be able to meet the PACT Act’s requirements, despite the bill’s attempt to treat smaller platforms differently.

The way to address Facebook, Google, and Twitter’s dominance is to enact meaningful antitrust, competition, and interoperability reforms that reduce those services’ outsized influence on internet users’ expression.

Notice and Takedown Regimes Result in Censoring Those With Less Power

As with its earlier version, the revised PACT Act’s main change to Section 230 involves conditioning the law’s protections on whether online services remove content when they receive a judicial order finding that the content is illegal. As we’ve said before, this proposal, on its face, sounds sensible. There is likely to be little value in hosting user-generated content that a court has determined is illegal.

But the new PACT Act still fails to provide sufficient safeguards to stop takedowns from being abused by parties who are trying to remove other users’ speech that they do not like.

In fairness, it appears that Sens. Schatz and Thune heard EFF’s concerns with the previous version, as the new PACT Act requires that any takedown orders be issued by courts after both sides have litigated a case. The new language also adds additional steps for takedowns based on default judgments, a scenario in which the defendant never shows up to defend against the suit. The bill also increases the time a service would have to respond to a notice, from 24 hours to 4 days in the case of large platforms.

These marginal changes, however, fail to grapple with the free expression challenges of trying to implement a notice and takedown regime where a platform’s legal risk is bound up with the risk of being liable for a particular user’s speech.

The new PACT Act still fails to require that takedown notices be based on final court orders or adjudications that have found content to be unlawful or unprotected by the First Amendment. Courts issue preliminary orders that they sometimes later reverse. In the context of litigation about whether certain speech is legal, final orders issued by lower courts are often reversed by appellate courts. The PACT Act should have limited takedown notices to final orders in which all appeals have been exhausted. It didn’t, and is thus a recipe for taking down lots of lawful expression.

there is very little incentive for platforms to do anything other than remove user-generated content in response to a takedown notice, regardless of whether the notices are legitimate

More fundamentally, however, the PACT Act’s new version glosses over the reality that by linking Section 230’s protections to a service’s ability to quickly remove users’ speech, there is very little incentive for platforms to do anything other than remove user-generated content in response to a takedown notice, regardless of whether the notices are legitimate.

To put it another way: the PACT Act places all the legal risk on a service when it fails to comply with a takedown demand. If that happens, the service will lose Section 230’s protections and be treated as if it were the publisher or speaker of the content. The safest course for the intermediary will be to avoid that legal risk and always remove the user-generated content, even if it comes at the expense of censoring large volumes of legitimate expression.

The new PACT Act’s safeguards around default judgments are unlikely to prevent abusive takedowns. The bill gives an online service provider 10 days from receiving a notice to intervene in the underlying lawsuit and move to vacate the default judgment. If a court finds that the default judgment was “sought fraudulently,” the online service could seek reimbursements of its legal costs and attorney’s fees.

This first assumes that platforms will have the resources required to intervene in federal and state courts across the country in response to suspect takedown notices. But a great number of online services hosting user-generated speech are run by individuals, nonprofits, or small businesses. They can’t afford to pay lawyers to fight back against these notices.

The provision also assumes services will have the incentive to fight back on their users’ behalf. The reality is that it’s always easier to remove the speech than to fight back. Finally, the ability of a provider to recoup its legal costs depends on a court finding that an individual fraudulently sought the original default judgment, a very high burden that would require evidence and detailed findings by a judge.

Indeed, the PACT Act’s anti-abuse provision seems to be even less effective than the one contained in the Digital Millennium Copyright Act, which requires copyright holders to consider fair use before sending a takedown notice. We know well that the DMCA is a censorship regime rife with overbroad and abusive takedowns, even though the law has an anti-abuse provision. We would expect the PACT Act’s takedown regime to be similarly abused.

Who stands to lose here? Everyday internet users, whose expression is targeted by those with the resources to obtain judicial orders and send takedown notices.

We understand and appreciate the sponsors’ goal of trying to help internet users who are the victims of abuse, harassment, and other illegality online. But any attempts to protect those individuals should be calibrated in a way that would prevent new types of abuse, by those who seek to remove lawful expression.

More Transparency and Accountability Is Needed, But It Shouldn’t Be Legally Mandated

EFF continues to push platforms to adopt a human rights framing for their content moderation decisions. That includes providing users with adequate notice and being transparent about their moderation decisions. It is commendable that lawmakers also want platforms to be more  transparent and responsive to users, but we do not support legally mandating those practices.

Just like its earlier version, the new PACT Act compels services to disclose their content moderation rules, build a system for responding to complaints about user-generated content, and publish transparency reports about their moderation decisions. The bill would also require large platforms to operate a toll-free phone number to receive complaints about user-generated content. And as before, platforms that fail to implement these mandates would be subject to investigation and enforcement actions by the Federal Trade Commission.

As we said about the original PACT Act, mandating services to publish their policies and transparency reports on pain of enforcement by a federal law enforcement agency creates significant First Amendment concerns. It would intrude on the editorial discretion of services and compel them to speak.

The new version of the PACT Act doesn’t address those constitutional concerns. Instead, it changes how frequently a service has to publish a transparency report, from four times a year to twice a year. It also creates new exceptions for small businesses, which would not have to publish transparency reports, and for individual providers, which would be exempt from the new requirements.

As others have observed, however, the carveouts may prove illusory because the bill’s definition of a small business is vague and the cap is set far too low. And small businesses that meet the exception would still have to set up systems to respond to complaints about content on their platform, and permit appeals by users who believe the original complaint was wrong or mistaken. For a number of online services—potentially including local newspapers with comment sections—implementing these systems will be expensive and burdensome. They may simply choose not to host user-generated content at all.

Reducing Large Services’ Dominance Should Be the Focus, Rather Than Mandating That Every Service Moderate User Content Like Large Platforms

We get that, given the current dominance of a few online services that host so much of our digital speech, communities, and other expression, it’s natural to want new laws that make those services more accountable to their users for their content moderation decisions. This is particularly true because, as we have repeatedly said, content moderation at scale is impossible to do well.

Yet if we want to break away from the concentration of Facebook, Google, and Twitter hosting so much of our speech, we should avoid passing laws that assume we will be forever beholden to these services’ moderation practices. The PACT Act unfortunately makes this assumption, and it demands that every online service adopt content moderation and transparency policies resembling the major online platforms around today.

we should avoid passing laws that assume we will be forever beholden to these services’ moderation practices

Standardizing content moderation will only ensure that the platforms with the resources and ability to meet the PACT Act’s legal requirements survive. And our guess is that those platforms will look a lot more like Facebook, Google, and Twitter than like a diverse ecosystem. They aren’t likely to have different or innovative content moderation models that might serve internet users better.

Instead of trying to legally mandate certain content moderation practices, lawmakers must tackle Facebook, Google, and Twitter’s dominance and the resulting lack of competition head on by updating antitrust law, and embracing pro-competitive policies. EFF is ready and willing to support those legislative efforts.

Aaron Mackey

Dystopia Prime: Amazon Subjects Its Drivers to Biometric Surveillance

2 months 2 weeks ago

Some high-tech surveillance is so dangerous to privacy that companies must never deploy it against a person without their voluntary opt-in consent. It comes as little surprise that Amazon, the company that brought you Ring doorbell cameras and Rekognition face surveillance, has a tenuous understanding of both privacy and consent. Earlier this week, Motherboard revealed the company’s cruel “take it or leave it” demand to its 75,000 delivery drivers: submit to biometric surveillance or lose your job.

Amazon’s “Privacy Policy for Vehicle Camera Technology” states it may collect “face image and biometric information.” The company uses this information, among other things, to verify driver identity, and to provide “real-time in-vehicle alerts” about driver behaviors such as potentially distracted driving. This sensitive information collected by “safety cameras” mounted in delivery vehicle cabins is stored for as long as 30 days and available to Amazon on request. The company’s “Vehicle Technology and Biometric Consent” document states: “As a condition of delivering Amazon packages, you consent to the use of the Technology and collection of data and information from the Technology by Amazon …” Likewise, the company’s “Photos Use and Biometric Information Retention Policy” states: “Amazon … require[s] that users of the Amazon delivery application provide a photo for identification purposes. Amazon may derive from your stored photo a scan of your face geometry or similar biometric data …”

According to an Amazon contractor who spoke to Motherboard: “I had one driver who refused to sign. It’s a heart-breaking conversation when someone tells you that you’re their favorite person they have ever worked for, but Amazon just micromanages them too much.” 

According to another Amazon driver, who spoke to Thomson Reuters Foundation last month about this new surveillance program: “We are out here working all day, trying our best already. The cameras are just another way to control us.”

The new Amazon system, called Driveri, is built by a company called Netradyne. It combines always-on cameras, pointed at both the driver and the road, with real-time AI analysis of the footage. Five U.S. Senators recently sent Amazon a letter raising privacy and other concerns about Driveri, and seeking information about it.

In a recent blog post, we detailed why biometric surveillance technologies, such as face recognition, must never be deployed without informed and freely given opt-in consent. As we explained, we cannot change our biometrics, and it is extraordinarily difficult to hide them from other people. With advances in technology, it is easier every day for companies to collect our biometrics, rapidly identify us, build dossiers about us and our movements, and sell all this information to others.

Fortunately, Illinois enacted its Biometric Information Privacy Act (BIPA). This critical law requires companies to obtain opt-in consent before collecting a person’s biometrics or using them in a new way. It also establishes a deletion deadline. People whose BIPA rights are violated may enforce the law with their own private right of action. We oppose efforts, past and present, to exempt workplaces from BIPA's scope. A federal bill to extend BIPA  nationwide was introduced last year by U.S. Sens. Jeff Merkley and Bernie Sanders.

Of course, when Amazon says to its drivers, “give us your biometrics or you’re fired” – that’s not consent. That’s coercion.  

Adam Schwartz

Free as in Climbing: Rock Climber’s Open Data Project Threatened by Bogus Copyright Claims

2 months 2 weeks ago

Rock climbers have a tradition of sharing “beta”—helpful information about a route—with other climbers. Giving beta is both useful and a form of community-building within this popular sport. Given that strong tradition of sharing, we were disappointed to learn that the owners of an important community website, MountainProject.com, were abusing copyright to try to shut down another site, OpenBeta.io. The good news is that OpenBeta’s creator is not backing down—and EFF is standing with him.

Viet Nguyen, a climber and coder, created OpenBeta to bring open source software tools to the climbing community. He used Mountain Project, a website where climbers can post information about climbing routes, as a source of user-posted data about climbs, including their location, ratings, route descriptions, and the names of first ascensionists. Using this data, Nguyen created free, publicly available interfaces (APIs) that others can use to discover new insights about climbing—anything from mapping favorite crags to analyzing the relative difficulty of routes in different regions—using software of their own.

Rock climbers get a lot of practice at falling hard, taking a moment to recover, and continuing to climb. Mountain Project should take a lesson from their community: dust off, change your approach, and keep climbing.

The Mountain Project website is built on users’ contributions of information about climbs. Building on users’ contributions, Mountain Project offers search tools, “classic climbs” lists, climbing news links, and other content. But although the site runs on the contributions of its users, Mountain Project’s owners apparently want to control who can use those contributions, and how. They sent a cease-and-desist letter to Mr. Nguyen, claiming to “own[] all rights and interests in the user-generated work” posted to the site, and demanding that he stop using it in OpenBeta. They also sent a DMCA request to GitHub to take down the OpenBeta code repository.

As we explain in our response, these copyright claims are absurd. First, climbers who posted their own beta and other information to Mountain Project may be surprised to learn that the website is claiming to “own” their posts, especially since the site’s Terms of Use say just the opposite: “you own Your Content.”

As is typical for sites that host user-generated content, Mountain Project doesn’t ask its users to hand over copyright in their posts, but rather to give the site a “non-exclusive” license to use what they posted. Mountain Project’s owners are effectively usurping their users’ rights in order to threaten a community member.

And even if Mountain Project had a legal interest in the content, OpenBeta didn’t infringe on it. Facts, like the names and locations of climbing routes, can’t be copyrighted in the first place. And although copyright might apply to climbers’ own route descriptions, OpenBeta’s use is a fair use. As we explained in our letter:

The original purpose of the material was to contribute to the general knowledge of the climbing community. The OpenBeta data files do something more: Mr. Nguyen uses it to help others to learn about Machine Learning, making climbing maps, and otherwise using software to generate new insights about rock climbing.

In other words, a fair use.

Rock climbers get a lot of practice at falling hard, taking a moment to recover, and continuing to climb. Mountain Project blew it here by making legally bogus threats against OpenBeta. We hope they take a lesson from their community: dust off, change your approach, and keep climbing.

Mitch Stoltz

Facebook’s Pitch to Congress: Section 230 for Me, But not for Thee

2 months 2 weeks ago

As Mark Zuckerberg tries to sell Congress on Facebook’s preferred method of amending the federal law that serves as a key pillar of the internet, lawmakers must see it for what it really is: a self-serving and cynical effort to cement the company’s dominance.

In prepared testimony submitted to the U.S. House of Representatives Energy and Commerce Committee before a Thursday hearing, Zuckerberg proposes amending 47 U.S.C. § 230 (“Section 230”), the federal law that generally protects online services and users from liability for hosting user-generated content that others believe is unlawful.

The vague and ill-defined proposal calls for lawmakers to condition Section 230’s legal protections on whether services can show “that they have systems in place for identifying unlawful content and removing it.” According to Zuckerberg, this revised law would not create liability if a particular piece of unlawful content fell through the cracks. Instead, the law would impose a duty of care on platforms to have adequate “systems in place” with respect to how they review, moderate, and remove user-generated content.

Zuckerberg’s proposal calls for the creation of a “third party,” whatever that means, which would establish the best practices for identifying and removing user-generated content. He suggests that this entity could create different standards for smaller platforms. The proposal also asks Congress to require that online services be more transparent about their content moderation policies and more accountable to their users.

An Anti-Competitive Wedge

The proposal is an explicit plea to create a legal regime that only Facebook, and perhaps a few other dominant online services, could meet. Zuckerberg is asking Congress to change the law to ensure that Facebook never faces significant competition, and that its billions of users remain locked into its service for the foreseeable future.

It’s galling that at the same time Zuckerberg praises Section 230 for creating “the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online,” he calls on Congress to change the law to prevent any innovation or competition that could disrupt Facebook’s market position. Zuckerberg is admitting that after Facebook has benefited from Section 230, he doesn’t want any competitor to do the same. Rather than take up Facebook’s proposal, Congress should instead advance meaningful competition and antitrust reforms to curtail the platform’s dominance.

Moreover, Zuckerberg’s proposal comes just before a congressional hearing that is ostensibly about the problems Facebook has created. These problems exist precisely because of Facebook’s dominance, anti-competitive behavior, and terrible privacy and content moderation practices. So in response to Facebook’s significant failures, Zuckerberg is telling Congress that Facebook is the solution. Congress should respond: Absolutely not.

A Flawed Proposal

On the merits, Zuckerberg’s proposal — though light on specifics — is problematic for several reasons.

First, the proposal overlooks that the vast majority of online services that host user-generated content do not have the technical, legal, or human resources to create systems that could identify and remove unlawful content. As Mike Masnick at TechDirt recently wrote, the internet is made up of far more diverse and less-resourced services than Facebook. Congress must recognize that the legal rules it sets for online services will apply to all of them. Zuckerberg proposes that the required “adequate systems” be “proportionate to platform size;” but size is only one factor that might correlate to an intermediary’s ability to implement such systems. By punishing growth, a size-scaled system would also discourage the development of nonprofit intermediary models that might compete with and replace those that profit greatly off of their users’ data. What would actually be necessary is an assessment of whether each individual intermediary, based on its numerous characteristics, has provided adequate systems. This is essentially a legal negligence standard – asking the question “Has the intermediary acted reasonably?” – and such standards have historically and legally been found to be insufficiently protective of freedom of speech.

Second, Zuckerberg’s proposal seems to require affirmative pre-screening and filtering of content as an “adequate system.” As we have written, filtering requirements are inherently privacy-invasive and almost always rely on faulty, nontransparent, and unaccountable automation. And of course, they are extremely burdensome, even at a small scale.

Third, the standards under Zuckerberg’s proposal would be unworkable in practice and result in even greater online censorship. Content moderation at scale is impossible to do perfectly and nearly impossible to do well. Automated tools and human reviewers make scores of mistakes that result in the improper removal of users’ content. If services are required by law to have systems that remove users’ content, the result will be a world in which much greater volumes of user speech will be removed, as services would rather censor their users than risk losing their legal protections.

Fourth, the proposal would not even address the problems Facebook is now being called out for. Zuckerberg calls for Section 230 protections to be conditioned on having systems in place to remove “unlawful content”; but most of the examples he addresses elsewhere in his testimony are not illegal. Hate and violence, misinformation, and content that violates a group’s community standards are largely protected speech. Platforms like Facebook may and should want to actively moderate such content. But that speech is not usually “illegal,” a label that applies only to the narrow subset of speech unprotected by the First Amendment.

Fifth, Zuckerberg calls for a “third party” to define the “adequate systems” an intermediary must adopt. We saw a similar proposal recently with the original version of the EARN IT Act. We opposed a standards-setting body there because it was going to be dominated by law enforcement officials who desire to break end-to-end encryption. Although Zuckerberg does not identify the membership or composition of his proposed third party, we worry that any entity created to address online content moderation could similarly be captured by special interests who do not represent internet users.

Transparency, Yes Please

We appreciate that Zuckerberg is calling on online services to be more transparent and responsive to user concerns about content moderation. EFF has been actively involved in an effort to push these services to adopt a human rights framing for content moderation that includes adequate notice to their users and transparency about each platform’s practices. Yet we do not believe that any requirement to adopt these practices should be linked to Section 230’s protections. That’s why we’ve previously opposed legislation like the PACT Act, an initial version of which compelled transparency reporting. It’s also worth noting that Facebook lags behind its peers on issues of transparency and accountability for censoring its users’ speech, a 2019 EFF review found.

Zuckerberg’s proposal to rewrite Section 230 joins a long list of efforts to overhaul the law. As we have said, we analyze every fully formed proposal on its merits. Some of the proposed changes start from a place of good faith in trying to address legitimate harms that occur online. But Zuckerberg’s proposal isn’t made in good faith. Congress should reject it and move on to doing the real, detailed work that it has to do before it can change Section 230.

Aaron Mackey

Statement on the Re-election of Richard Stallman to the FSF Board

2 months 2 weeks ago

We at EFF are profoundly disappointed to hear of the re-election of Richard Stallman to a leadership position at the Free Software Foundation, after a series of serious accusations of misconduct led to his resignation as president and board member of the FSF in 2019. We are also disappointed that this was done despite no discernible steps taken by him to be accountable for his past actions, much less to make amends to those who have been harmed by them. Finally, we are disturbed by the secretive process of his re-election, and by how belatedly it was conveyed to FSF’s staff and supporters.

Stallman’s re-election sends a wrong and hurtful message to the free software movement, as well as to those who have left that movement because of Stallman’s previous behavior.

Free software is a vital component of an open and just technological society: its key institutions and individuals cannot place misguided feelings of loyalty above their commitment to that cause. The movement for digital freedom is larger than any one individual contributor, regardless of their role. Indeed, we hope that this moment can be an opportunity to bring in new leaders and new ideas to the free software movement.

We urge the voting members of the FSF [1] to call a special meeting to reconsider this decision, and we also call on Stallman to step down: for the benefit of the organization, the values it represents, and the diversity and long-term viability of the free software movement as a whole.

  • 1. Note: If you donate to FSF, you just become a non-voting Associate Member. Voting Members are a separate, and much smaller, category of member.
Danny O'Brien

Facebook Treats Punk Rockers Like Crazy Conspiracy Theorists, Kicks Them Offline

2 months 2 weeks ago

Facebook announced last year that it would be banning followers of QAnon, the conspiracy theorists that allege that a cabal of satanic pedophiles is plotting against former U.S. president Donald Trump. It seemed like a case of good riddance to bad rubbish.

Members of an Oakland-based punk rock band called Adrenochrome were taken completely by surprise when Facebook disabled their band page, along with all three of their personal accounts, as well as a page for a booking business run by the band’s singer, Gina Marie, and drummer Brianne.

Marie had no reason to think that Facebook’s content moderation battle with QAnon would affect her. The strange word (which refers to oxidized adrenaline) was popularized by Hunter Thompson in two books from the 1970s. Marie and her bandmates, who didn’t even know about QAnon when they named their band years ago, picked the name as a shout-out to a song by the Sisters of Mercy, a British band from the ’80s. They were as surprised as anyone that in the past few years, QAnon followers copied Hunter Thompson’s (fictional) idea that adrenochrome is an intoxicating substance and gave this obscure chemical a central place in their ideology.

The four Adrenochrome band members had nothing to do with the QAnon conspiracy theory and didn’t discuss it online, other than receiving occasional (unsolicited and unwanted) Facebook messages from QAnon followers confused about their band name.

But on Jan. 29, without warning, Facebook shut down not just the Adrenochrome band page, but the personal pages of the three band members who had Facebook accounts, including Marie, and the page for the booking business.  

“I had 2,300 friends on Facebook, a lot of people I’d met on tour,” Marie said. “Some of these people I don’t know how to reach anymore. I had wedding photos, and baby photos, that I didn’t have copies of anywhere else.”

False Positives

The QAnon conspiracy theory became bafflingly widespread. Any website host—whether it’s comments on the tiniest blog, or a big social media site—is within its rights to moderate that QAnon-related content and the users who spread it. Can Facebook really be blamed for catching a few innocent users in the net that it aimed at QAnon?

Yes, actually, it can. We know that content moderation, at scale, is impossible to do perfectly. That’s why we advocate companies following the Santa Clara Principles: a short list of best practices, that include numbers (publish them), notice (provide it to users in a meaningful way), and appeal (a fair path to human review).

Facebook didn’t give Marie and her bandmates any reason why their pages went down, leaving them to assume it was related to their band’s name. They also didn’t provide any mechanism at all for appeal. All Marie got was a notice (screenshot below) telling her that her account was disabled, and that it would be deleted permanently within 30 days. The screenshot said “if you think your account was disabled by mistake, you can submit more information via the Help Center.” But Marie wasn’t able to even log in to the Help Center to provide this information.

Ultimately, Marie reached out to EFF and Facebook restored her account on February 16, after we appealed to them directly. But then, within hours, Facebook disabled it again. On February 28, after we again asked Facebook to restore her account, it was restored.

We asked Facebook why the account went down, and they said only that “these users were impacted by a false positive for harmful conspiracy theories.” That was the first time Marie had been given any reason for losing access to her friends and photos.

That should have been the end of it, but on March 5 Marie’s account was disabled for a third time. She was sent the exact same message, with no option to appeal. Once more we intervened, and got her account back—we hope, this time, for good.

This isn’t a happy ending. First, users shouldn’t have to reach out to an advocacy group to get help challenging a basic account disabling. One hand wasn’t talking to the other, and Facebook couldn’t seem to stop this wrongful account termination.

Second, Facebook still hasn’t provided any meaningful ability to appeal—or even any real notice, something they explicitly promised to provide in our 2019 “Who Has Your Back?” report.

Facebook is the largest social network online. They have the resources to set the standard for content moderation, but they're not even doing the basics. Following the Santa Clara Principles—Numbers, Notice, and Appeal—would be a good start. 

Joe Mullin

Pasco County’s Sheriff Must End Its Targeted Child Harassment Program

2 months 3 weeks ago

In September 2020, the Tampa Bay Times revealed a destructive “data-driven” policing program run by the Pasco County, Florida Sheriff’s Office. The program is misleadingly called “Intelligence-Led Policing” (ILP), but in reality, it’s nothing more than targeted child harassment by police. Young people’s school grades and absences, minor infractions, and even instances where they are a victim of crime are used to inform a bogus rubric and point system, based on a formula that purports to “prevent future crimes”—essentially labeling youths as potential future criminals.

Below is a page from the ILP’s pseudoscientific manual. Once a juvenile is tagged with this label, police show up at their home and harass their entire family. As one former deputy described the program to reporters, the objective was to “make their lives miserable until they move or sue.”

Screen capture from the manual of the Intelligence-Led Policing program.

The fault lies not just with the Pasco County Sheriff’s Office, which built this system and uses it to hound youth. The system also functions with the help of public schools and child welfare agencies that collect data about kids for purposes of providing them with important educational and social services, and then hand this data over to the Sheriff. This is an egregious abuse of trust. After this relationship between the schools and the police was publicized, the Charles and Lynn Schusterman Family Philanthropies cut nearly in half the money it had intended to give Pasco County schools, pulling the remaining $1.7 million. The organization explained, reasonably, that the program was contrary to its values.

It’s not hard to see why an organization would pull grant funding. The program, as well as many other data-driven or predictive policing models, creates a self-fulfilling feedback loop from which it is nearly impossible for a child or their family to escape. Anyone, when put under a microscope by police, might accumulate citations, fines, and even arrests. And that’s what this program does: it takes families, often ones that are already struggling, and puts them under the aggressive eye of police who expect their lives to become even more miserable. 

Some of the stories that emerged from news reporting illustrate the harms that getting stuck in an enforcement loop can have on people. After one 15-year-old was arrested for stealing bicycles out of a garage, the algorithm continuously dispatched police to harass him and his family. Over the span of five months, police went to his home 21 times. They also showed up at his gym and his parent’s place of work. The Tampa Bay Times revealed that since 2015, the sheriff's office has made more than 12,500 similar preemptive visits to people. 

These visits often resulted in other, unrelated fines and arrests that further victimized families and added to the likelihood that they would be visited and harassed again. In one incident, the mother of a targeted teenager was issued a $2,500 fine for having chickens in the backyard. In another incident, a father was arrested because his 17-year-old was smoking a cigarette. These behaviors occur in all neighborhoods, across all economic strata—but only marginalized people, who live under near constant police scrutiny, face penalization.

The Sheriff’s Office and the school district have claimed that the program is intended to identify at-risk youth in need of state intervention or mentorship. But in at least five places in the Sheriff’s ILP manual (including the one below), the program’s actual stated purpose is to identify potential future offenders. 

Screen capture from the manual of the Intelligence-Led Policing program.

The ILP manual lists characteristics that the program considers as so-called “criminogenic risk factors”; these are the factors unscientifically believed to lead to future criminality. The characteristics include being a victim of a crime, unspecified “low intelligence,” “antisocial parents,” and being “socio-economically deprived.” The ILP manual explains the program’s purpose is to identify youth “destined to a life of crime.” 

This is absurd. No one is destined to a life of crime. Having bad grades or economic struggles does not make a person into a criminal suspect, or make them deserving of police harassment. 

In response to the program, a number of civil liberties groups have stepped up to try to stop the data sharing between the school district and the police, and end the ILP program. Color of Change launched a petition allowing people to tell the Superintendent of schools to stop sharing data with the police. The Institute for Justice also launched a lawsuit against the Sheriff's Office, asserting the First, Fourth, and Fourteenth Amendment rights of those that have been victimized by the program. We anticipate more advocacy to come against this program. 

In response to recent news about the program, Florida Republican Congressmember Matt Gaetz called on Governor DeSantis to remove the sheriff, Chris Nocco, from his post. 

Florida state senator Audrey Gibson has also introduced SB 808, which seeks to place a few limits on this kind of Intelligence-Led Policing. Most importantly, it would require notice to a person that they are listed and an opportunity to appeal that listing. The bill would also require departments that use these programs to adopt guidelines to address data processing, program goals, and the number and length of visits. Further, the bill would mandate documentation of visits, including audio and video (presumably from police body-worn cameras), and records on the demographics of visited youths. Although this bill may mitigate some of the program’s more egregious potential abuses, it’s not nearly enough to prevent all of the harms ILP generates. Predictive policing programs that rob people of their presumption of innocence, and open them up to harassment and surveillance, should be banned.

EFF has been an outspoken critic of police use of gang databases, which often have subjective and racist criteria for inclusion and which open up an individual to unfair police harassment and surveillance. Once put on the list, people can be harassed for years without any knowledge of how to get themselves removed. Such opaque conditions exist in Pasco County as well.

EFF is working with a coalition of local, statewide, and national organizations that are trying to dismantle this harmful ILP program. Attempts to predict crime and sniff out future criminals are not new; they’ve been fodder for science fiction writers, criminologists, and detectives for over a century. But then as now, no one can predict crime; they can only create targets through excessive surveillance. “Data” and “intelligence” too often are buzzwords that imply a police initiative is objective and immune from human biases. But when fallible and biased individuals, including school administrators and police, determine who is and is not a future criminal based on exam grades or supposedly “antisocial” behavior, the “intelligence” system will only serve to replicate pre-existing racial and class hierarchies.

Matthew Guariglia

Video Hearing Tuesday: EFF Tells California Lawmakers to Crack Down on License Plate Data Collection

2 months 3 weeks ago
SB 210 Would Require Data Destruction Within 24 Hours, Among Other Reforms

Sacramento – On Tuesday, March 23, at 1:30 pm PT, the Electronic Frontier Foundation (EFF) will urge California senators to crack down on location tracking by passing SB 210, a bill that would require the destruction of automatically collected license plate data within 24 hours of collection, among other robust reforms. You can watch the hearing on the California Senate website.

EFF is a co-sponsor of SB 210, which was introduced by State Sen. Scott Wiener earlier this year. The bill is aimed at combatting unbridled data collection by police departments using automated license plate readers (ALPRs) installed both in fixed locations like streetlights and on patrol vehicles. This data is uploaded with GPS and time and date information into a searchable database, which means police can search the historical travel patterns of anyone caught in the ALPRs’ wide net.

Last year, following a request from Sen. Wiener and EFF, the California State Auditor completed an investigation of four California law enforcement jurisdictions, finding that all four agencies were failing to establish policies that respect privacy and civil liberties as required by current law. EFF Director of Investigations Dave Maass, who has led a public records campaign to gather records on ALPR from more than 70 agencies statewide, will testify at Tuesday’s hearing, explaining how SB 210 can help restrict the massive data collection and protect Californians from intrusive surveillance.

WHAT:
Hearing on California SB 210

WHEN:
1:30 pm PT/4:30 pm ET
March 23

WHO:
EFF Director of Investigations Dave Maass

WHERE:
https://www.senate.ca.gov/

Contact:  Dave Maass, Director of Investigations, dm@eff.org; Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org
Rebecca Jeschke

EFF Joins Effort to Restrict Automated License Plate Readers in California

2 months 3 weeks ago

One year ago, the California State Auditor released a damning report on the use of automated license plate readers (ALPRs) by local law enforcement agencies that confirmed concerns EFF has raised for years. Police are using these camera systems to collect enormous amounts of sensitive data on Californians' travel patterns. Yet they often haven't followed the basic requirements of a 2015 state law, S.B. 34, passed to protect privacy and civil liberties from ALPRs. While the auditor only conducted a deep-dive into four jurisdictions—Los Angeles, Fresno, Sacramento County and Marin County—all were found to be noncompliant. Investigators concluded that the problem was likely widespread among the hundreds of local agencies using the technology.

This legislative session, State Sen. Scott Wiener has introduced the License Plate Privacy Act (S.B. 210), a bill that would address many of these deficiencies by strengthening the law with additional requirements and safeguards. EFF is proud to co-sponsor this legislation alongside our ally, the Media Alliance.

Police install ALPR cameras in fixed locations, such as streetlights and overpasses, to capture the license plates of passing cars. They also install them on patrol cars, allowing police to "grid" neighborhoods—driving up and down every block in order to gather information on parked vehicles. This data is uploaded along with GPS and time-date information to a searchable database. 

The result? With just a few keystrokes, police can search the historical travel patterns of a vehicle or identify vehicles that visited certain locations. Police can also add vehicles to a "Hot list," which is essentially a watch list that alerts them whenever a targeted vehicle is caught on camera. If a patrol car has an ALPR, the officer will be notified whenever they pass a vehicle on the watch list. However, by default, ALPRs collect data on everyone, regardless of whether you have a connection to a crime. 
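
A concrete illustration may help. Below is a minimal, hypothetical sketch of the kind of record an ALPR system stores and the kind of “few keystrokes” query described above; the field names and structure are our own assumptions for illustration, not any vendor’s actual schema.

```typescript
// Hypothetical shape of a single ALPR detection. Field names are invented for
// illustration and do not reflect any particular vendor's database schema.
interface PlateRead {
  plate: string;      // normalized plate text, e.g. "7ABC123"
  capturedAt: Date;   // time and date of the read
  latitude: number;   // GPS position of the camera at capture time
  longitude: number;
  cameraId: string;   // fixed pole, streetlight, or patrol-car unit
}

// Reconstructing a vehicle's travel pattern is just a filter-and-sort over the
// accumulated reads, which is why indiscriminate collection is so sensitive.
function travelHistory(reads: PlateRead[], plate: string): PlateRead[] {
  return reads
    .filter((read) => read.plate === plate)
    .sort((a, b) => a.capturedAt.getTime() - b.capturedAt.getTime());
}
```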

That means a lot of surveillance for no justifiable reason. EFF's own research found that agencies collected immense amounts of data on drivers, but only a small fraction (less than 1%) of those license plates were actually connected to an active investigation. In addition, police agencies often share access to this data with hundreds of law enforcement agencies nationwide, most of which have no need for unfettered access to Californians' data. The California State Auditor reached the same conclusions, noting that agencies could not justify why they were collecting so much data, and why they needed to hold onto it for so long (often years).

Embedded video: https://www.youtube.com/embed/3162bEHvos0 (Privacy info: this embed will serve content from youtube.com)

S.B. 34 was designed to regulate this technology by requiring agencies to have detailed privacy and usage policies. The auditor, however, found that the Los Angeles Police Department had failed to create such a policy, despite telling a legislative committee it was fully compliant. Others in the audit had policies, but those policies did not meet the legally mandated criteria. As EFF has noted, in many cases, agencies use a boilerplate template generated by the company Lexipol that is itself inadequate. 

S.B. 210 builds upon the 2015 law by adding new requirements, such as: 

  • Requires agencies to delete data after 24 hours (at a maximum) unless a license plate is on a hot list (i.e., connected to a public safety interest); a rough code sketch of this rule appears after this list.
  • Requires annual audits of searches of the ALPR database and of whether data has been deleted on schedule.
  • Requires the California Attorney General to draft a model policy that agencies should use. 
  • Prohibits public agencies from accessing databases with ALPR information that is more than 24 hours old and not on a hot list.
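
To make the core requirement concrete, here is a rough sketch, in code, of how an agency system might apply the bill’s 24-hour rule with the hot-list exception. This is only an illustration of the rule’s logic under our own assumptions, not statutory language; the record shape and function names are invented.

```typescript
// Rough, hypothetical sketch of the 24-hour retention rule with the hot-list
// exception. Invented for illustration; not statutory language.
interface StoredRead {
  plate: string;
  capturedAt: Date;
}

const RETENTION_LIMIT_MS = 24 * 60 * 60 * 1000; // 24 hours

// A read must be purged once it is older than 24 hours, unless its plate is on
// a hot list (i.e., tied to an articulated public safety interest).
function mustDelete(read: StoredRead, hotList: Set<string>, now: Date): boolean {
  const ageMs = now.getTime() - read.capturedAt.getTime();
  return ageMs > RETENTION_LIMIT_MS && !hotList.has(read.plate);
}

function purgeExpired(reads: StoredRead[], hotList: Set<string>, now: Date): StoredRead[] {
  // Keep only reads that are either fresh or tied to a hot-list plate.
  return reads.filter((read) => !mustDelete(read, hotList, now));
}
```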

"It’s troubling, to say the least, that so many California law enforcement agencies are harvesting massive amounts of ALPR data, retaining that data for no reason, and recklessly distributing it to other agencies around the country, including ICE," Sen. Wiener said. "ALPR data is truly the Wild West in California, and this legislation will bring much-needed privacy protections.”

EFF's Atlas of Surveillance has identified more than 250 California agencies using ALPRs or accessing ALPR data, and that number seems to grow each month. For more information on automated license plate readers, check out EFF's Street-Level Surveillance hub and Data Driven project.

Dave Maass

AT&T’s HBO Max Deal Was Never Free

2 months 3 weeks ago

When AT&T launched HBO Max, it emerged that usage of the service would not count against the data caps of AT&T customers, a practice known as “zero-rating.” This meant that people on limited data plans could watch as much HBO Max content as they wished without incurring overage fees. AT&T has now announced that it will stop this practice, citing California’s net neutrality law as the reason. No matter what spin the telecom giant offers, this does not mean something “free” was taken away. That deal was never free to begin with.

It should be noted that net neutrality doesn’t prevent companies from zero rating in a non-discriminatory way. If AT&T wanted to zero rate all video streaming services, it could. What net neutrality laws prevent is ISPs using their control over Internet access to advantage their own content or to charge services for special access to their customer base. In the case of HBO Max and zero rating, since AT&T owns HBO Max, it cost AT&T nothing to zero rate it. Other services had to pay for the same treatment or be disadvantaged when AT&T customers chose HBO Max to avoid overage fees.

This is why AT&T is claiming that it’s being forced to stop offering a “free” service because of California’s net neutrality rule. Rather than admit that the wireless industry knows zero rating can be used to shape traffic and user behavior, and that users should determine their own Internet experience, AT&T wants to turn this consumer victory into a defeat. But this basic consumer protection is long overdue; it took this long only because former FCC Chairman Ajit Pai decided in 2017 to abandon net neutrality and terminate investigations into AT&T’s unlawful practice, which prompted California to pass S.B. 822 in the first place.

You Already Paid AT&T to Offer the HBO Max Deal

American Internet services—mobile and to the home—are vastly more expensive than they should be. We pay more for worse service than people in many other countries, and practices like zero rating are part of the reason.

A comprehensive study by Epicenter.works showed that after zero rating was banned in the EU, consumers received cheaper mobile data over the following years. This is because an ISP that cannot drive users towards its own verticals through artificial scarcity schemes like data caps must instead appeal to customers by raising its caps and penalizing network usage less. In fact, fiber optics, the infrastructure being laid for modern wireless, has so much capacity that data caps would make no sense at all in a more competitive market.

It is also important to understand that moving data keeps getting cheaper and easier. As we move to fiber-backed infrastructure, the cost of moving data is coming down, speeds are going up exponentially, and the congestion challenges of the early days of the iPhone are a distant memory.

Yet even though moving data is cheaper, AT&T’s prices haven’t changed accordingly. Profits for the companies grow, but consumers aren’t seeing prices that match the falling cost of data. You have essentially paid the price of a real unlimited Internet plan for one with data caps, which continue to exist so that telecom companies can charge more for unlimited plans and collect overage fees. We know the problem isn’t actual capacity, since AT&T lifted data caps at the start of the COVID-19 pandemic. If data caps and related data scarcity schemes were necessary for the operation of the network, then a period of double-digit growth in usage should have forced AT&T to keep its data caps intact and enforce them to keep things running. It didn’t, because fiber-connected towers have more than enough capacity to handle growth, unlike older non-fibered cable systems, which now throttle uploads.

AT&T’s Zero Rating Favored Big Tech and Was Anticompetitive

Competition among video streaming services is fierce and should be protected and enhanced. User-generated content on services like Twitch and YouTube and premium content from Netflix, Disney+, and Amazon Prime are all competing for your attention and eyeballs. AT&T wanted to give HBO Max a leg up by making the other services more expensive via the data cap unless they paid AT&T for an exemption, so that even if you were not watching AT&T’s product, money was still flowing to AT&T. Such a structure makes it impossible for a small independent content creator to be competitive, as they lack the resources to pay for an exemption and would need to provide content compelling enough for AT&T customers to pay extra to watch.

Furthermore, as the Epicenter.works study discovered, obtaining a zero-rating exemption took substantial resources from Internet companies, making it something only the Googles, Facebooks, and similarly large Internet companies, not medium and small ones, could regularly do. AT&T doesn’t mind that, because it just means more ways to extract rents from more players on the Internet, despite AT&T being fully compensated by its users for an unfettered Internet experience.

Low-Income Advocates Fought Hard to Ban AT&T’s Zero Rating

During the debate in California, AT&T attempted to reframe its zero-rating practice as “free data” and came awfully close to convincing Sacramento to leave it alone. But advocates representing the low-income residents of California came out in strong support of the California net neutrality law’s zero-rating provisions. Studies by the Pew Research Center showed that when income is limited, consumers opt to use only mobile phones for Internet access as opposed to both wireline and wireless service. Groups like the Western Center on Law and Poverty pointed out that AT&T was giving these low-income users a lesser Internet, not the equal access enjoyed by higher-income users.

And that is the ultimate point of net neutrality: to ensure everyone has equal access to an Internet free from ISP gatekeeper decisions. When you consider that AT&T is one of the most indebted companies on planet Earth, it starts to make sense why, in the absence of federal net neutrality, AT&T sought out any and every way to nickel and dime everything that touches its network. But with California’s law starting to come online, users finally have a law that will stand against the effort to convert the Internet into cable television. Whether or not we have federal protection, the states are proving right now that they can be an effective backstop, and the work of preserving a free and open Internet will continue not just in D.C. but in the remaining 49 states.

Ernesto Falcon

EFF Members: We Want to Hear From You!

2 months 3 weeks ago

For the first time, the Electronic Frontier Foundation is reaching out to its vast community to improve its membership program and outreach.

Today current EFF members and other donors from the past year will receive an email inviting them to tell us how to better serve our supporters. Public support has powered EFF's initiatives to defend digital privacy, security, and free expression for decades, so it's fitting that EFF members will help shape our future.

As a staunch privacy advocate, EFF intentionally minimizes the amount of information that we collect about donors. Our standard is considered fundraising blasphemy, but we believe that this level of respect helps build the kind of relationships we intend to keep with EFF's members. It also means that we need your help to learn how people view EFF's positions, how they want to interact with us, and what drives our members to keep supporting EFF's work. We hope that by soliciting your feedback, EFF can do its best to remain engaging and effective.

We are asking people to complete our short survey by Friday, April 16. As one might expect from EFF, all responses are voluntary and recorded anonymously. Furthermore, this information won't be shared outside of EFF in keeping with the spirit of our privacy policy. We take your feedback seriously and look forward to hearing from you!

EFF has stood alongside tech users throughout its storied 30-year history, and we know it's only possible with the help of individuals like you. Our mission has never been more urgent as the world leans increasingly on technology to stay connected and informed. If you're a current EFF member or recent donor and did not find our invitation in your inbox, please send a note to membership@eff.org. Now keep on fighting for the future of digital freedom!

Aaron Jue

Twitter, Trump, and Tough Decisions: EU Freedom of Expression and the Digital Services Act

2 months 3 weeks ago

This blog post was co-written by Dr. Aleksandra Kuczerawy (Senior Fellow and Researcher at KU Leuven) and inspired by her publication at Verfassungsblog.

Suspension of Trump’s Social Media Accounts: Controversial, but not unprecedented

The suspension of the social media accounts of former U.S. President Donald Trump by Twitter, Facebook, Instagram, Snapchat, and others sparked a lot of controversy not only in the U.S., but also in Europe. German Chancellor Angela Merkel considered the move, which is not unprecedented, "problematic." The EU Commissioner for the internal market, Thierry Breton, found it “perplexing” that Twitter’s CEO Jack Dorsey could simply pull the plug on POTUS’s loudspeaker “without any checks and balances.” Some went a step further and proposed new rules seeking to prevent platforms from removing content that national laws deem legitimate: a recent proposal by the Polish government would ban social media companies from deleting content unless the content is illegal under Polish law. As a result, non-illegal hate speech—for example, insults directed at LGBTQ+ groups—could no longer be removed by social media platforms based on their community standards.

All these comments were articulated using the argument that without intervention by governments, freedom of expression rights would be at risk. But does the lockout from certain social media channels actually constitute an interference with or even a violation of free expression rights in Europe?

Freedom of Expression: Negative and Positive Obligations

The right to freedom of expression is embodied in the European Convention on Human Rights: everyone has the right to freedom of expression (Article 10(1) ECHR). Freedom of expression in Article 10 ECHR, interestingly, is a compound freedom. This means that Article 10 includes the right to hold and express opinions, to impart information and ideas, and to seek and receive information, even if they are not explicitly listed in the provision. Yet, this right is not absolute. Restrictions could take the form of ‘formalities, conditions, restrictions or penalties’ (para. 2), and are permissible if they comply with three conditions: they must be (1) prescribed by law, (2) introduced for the protection of one of the listed legitimate aims, and (3) necessary in a democratic society. Legitimate grounds that could justify interference include national security, territorial integrity or public safety, and the prevention of disorder or crime.

As in the U.S., the right to freedom of expression is a negative right; that is to say, states cannot place undue restrictions on expression. Accordingly, it prevents only government restrictions on speech, not action by private companies. However, in Europe the right also entails a positive obligation: states are required to protect the right from interference by others, including by private companies or individuals. Extending the scope of the ECHR to private relationships between individuals is referred to as the “horizontal effect.” According to the interpretation of the European Court of Human Rights (ECtHR), the horizontal effect is indirect, meaning that individuals can enforce human rights provisions against other individuals only indirectly, by relying on the positive obligations of the State. If the State fails to protect the right from interference by others, the ECtHR may attribute this interference to the State. The ECtHR specifically found the positive obligation present in relation to the right to freedom of expression (e.g. Dink v. Turkey). The duty to protect the right to freedom of expression involves an obligation for governments to promote this right and to provide for an environment where it can be effectively exercised without being unduly curtailed. Examples include cases of states’ failure to implement measures protecting journalists against unlawful violent attacks (Özgür Gündem v. Turkey), or failure to enact legislation resulting in refusal to broadcast by a commercial television company (Verein gegen Tierfabriken Schweiz v. Switzerland).

No Must-Carry, No Freedom of Forum

The doctrine of positive obligations and the horizontal effect of the ECHR could support the argument that rules may be necessary to prevent arbitrary decisions by platforms to remove content (or ban users). 

However, it does not support the argument that platforms have an obligation to host all the (legal) content of their users. The European Court of Human Rights (ECtHR) elucidated that Article 10 ECHR does not provide a “freedom of forum” for the exercise of the right to freedom of expression. This means that Article 10 ECHR does not guarantee any right to have one’s content broadcasted on any particular private forum. Private platforms, such as social media companies like Twitter or Facebook, therefore, cannot be forced to carry content by third parties, even if that content is not actually illegal. This makes sense: it is hard to imagine that a platform for dog owners would be forced to allow cat pictures (despite what internet cat overlords might think about that). A positive obligation by platforms to do so would lead to an interference with the freedom to conduct business under the EU Charter of Fundamental Rights and, potentially, the right to private property under the ECHR (Article 1 of Protocol 1 to the ECHR).

Viable Alternatives                       

In a case concerning a prohibition on distributing leaflets in a private shopping center (Appleby and Others v. the UK), the Court did not consider the lack of State protection a failure to comply with the positive obligation to protect Article 10 ECHR. This was because the Court considered that the lack of protection did not destroy the “essence” of the right to freedom of expression. However, the Court did not entirely exclude that “a positive obligation could arise for the State to protect the enjoyment of the Convention rights by regulating property rights.” The Court examined such a conflict in the Swedish case Khurshid Mustafa & Tarzibachi, which involved the termination of a tenancy agreement because of the tenants’ refusal to dismantle a satellite dish installed to receive television programs from their native country. To decide which right takes precedence in particular circumstances, the landlord’s property right or the tenant’s right to access information, the Court conducted a test of “viable alternatives.” This test basically asks whether the parties were able to exercise their right to freedom of expression through alternative means. While in Appleby such alternative expression opportunities existed, in Tarzibachi the existence of information alternatives functionally equivalent to a satellite dish could not be demonstrated. Noting that the applicant’s right to freedom of information was not sufficiently considered in the national proceedings, the Court concluded that Sweden had failed in its positive obligation to protect that right.

What does this mean for Trump’s ban from Twitter and Facebook? Clearly, as the then-President of the U.S., Trump had ample opportunities to communicate his message to the world, whether through a broadcaster, an official press conference, or other social media platforms. While those alternatives might not be equivalent to the most popular social media platforms in terms of impact or reach, it can hardly be argued that the essence of the right to freedom of expression was destroyed. For an ex-President, some expression opportunities might be limited, but Trump’s options still put him at an advantage compared with an average user deplatformed by Twitter or Facebook. Such bans do happen, whether for clear violations of the Terms and Conditions or for the most absurd reasons, but they rarely reach similar levels of controversy.

Hate Speech and Incitement to Violence

Article 10 ECHR protects expressions that offend, shock, or disturb. The scope for restrictions on political speech is narrow and requires strict scrutiny. However, hate speech and incitement to violence do not constitute an expression worthy of protection (see here). The ECHR does not provide a specific definition of hate speech but instead prefers a case-by-case approach. Moreover, per Article 17 ECHR, the Convention does not protect activity aimed at the destruction of any of the rights and freedoms contained in the Convention. This provision has been interpreted to exclude protection of speech that endangers free operation of democratic institutions or attempts to destroy the stability and effectiveness of a democratic system. It goes beyond the scope of this blog post to analyze if Trump’s tweets and posts actually fall within this category of expression.

National and EU Legislation Require Proactive Stance

The critical statements by EU politicians following the decision to ban Trump’s account are not exactly consistent with a general trend in Europe in recent years. For some time now, European politicians and the EU have been trying to convince online platforms to “do more” to police the content of their users. National laws such as the German NetzDG, the Austrian KoPlG, and the unconstitutional French Avia Bill all require more effective moderation of online spaces. This means more, and faster, removals. Under the threat of high fines, these laws require platforms to limit dissemination of illegal content as well as harmful content, such as disinformation. In an attempt to catch up with national legislation, the EU has been steadily introducing mechanisms encouraging online platforms to (more or less) voluntarily moderate content, for example the 2016 Code of Conduct on hate speech, the 2018 Code of Practice on Disinformation, the update to the AVMS Directive, and the proposal on Terrorist Content Regulation.

Can the EU Digital Services Act Help?

One would think that Twitter’s proactive approach, in light of these initiatives, would be appreciated. The somewhat confusing political reaction has led to questions about whether the recently proposed Digital Services Act (DSA) would address the problem of powerful platforms making arbitrary decisions about the speech they allow online.

The DSA is the most significant reform of Europe’s internet legislation, the e-Commerce Directive, that the EU has undertaken in twenty years. It aims at rebalancing the responsibilities of users, platforms and public authorities according to European values. If done right, the Digital Services Act could offer solutions to complex issues like transparency failures, privatized content moderation, and gatekeeper-dominated markets. And the EU Commission’s draft Proposal got several things right: mandatory general monitoring of users is not a policy option and liability for speech still rests with the speaker, and not with platforms that host what users post or share online. At least as a principle. The introduction of special type and size-oriented obligations for online platforms, including the very large ones, seems to be the right approach. It is also in line with the proposal for a Digital Markets Act (DMA), which presented a new standard for large platforms that act as gatekeepers in an attempt to create a fairer and more competitive market for online platforms in the EU.

It’s noteworthy that the DSA includes mechanisms to encourage online platforms to conduct voluntary monitoring and moderation of the hosted content. Article 6, in particular, introduces an EU version of Section 230’s good samaritan principle: providers of online intermediary services should not face liability solely because they carry out voluntary own-initiative investigations or other activities aimed at detecting, identifying and removing, or disabling access to, illegal content. There is a risk that such an encouragement could lead to more private censorship and over-removal of content. As explained in the preamble of the DSA, such voluntary actions can lead to awareness about illegal activity and thus trigger liability consequences (in the EU, knowledge of illegality deprives platforms of liability immunity).

At the same time, the DSA clearly states its goal to ensure more protection for fundamental rights online. Recital 22, in particular, explains that the “removal or disabling of access should be undertaken in the observance of the principle of freedom of expression.” How could the DSA ensure more protection to the right to freedom of expression, and what would it mean for banned accounts? Would it privilege certain actors?

Regulation of Process, Not of Speech

The DSA’s contribution to more effective protection of freedom of expression comes in the form of procedural safeguards. These strengthen due process, clarify notice-and-takedown procedures, improve the transparency of decision-making, and ensure redress mechanisms for removal or blocking decisions. The DSA will not prohibit Twitter from introducing its own internal rules, but it will require that the rules be clear and unambiguous and applied in a proportionate manner (Article 12). Any blocked user would also have to be informed about the reasons for the blocking and the possibilities to appeal the decision, e.g. through internal complaint-handling mechanisms, out-of-court dispute settlement, and judicial redress (Article 15).

The main goal of the DSA is thus to regulate the process and not to regulate the speech. Adding these safeguards could have an overall positive effect on the enjoyment of the right to freedom of expression. This positive effect would be achieved without introducing any must-carry rules for certain types of content (e.g. speech by heads of states) that could potentially interfere with other rights and interests at stake. The safeguards would not necessarily help Donald Trump—platforms will be still able to delete or block on the basis of their own internal rules or on the basis of a notice. But the new rules would give him access to procedural remedies.

The DSA sets out that online platforms must handle complaints submitted through their internal complaint-handling system in a timely, diligent, and objective manner. It also acknowledges that platforms make mistakes when deciding whether a user’s conduct or a piece of information is illegal or against the terms of service: following EFF’s suggestion, users who face content removal or account suspension will be given the option to demonstrate that the platform’s decision was unwarranted, in which case the online platform must reverse its decision and reinstate the content or account (Art. 17(3)).

A Public-Private Censorship Model?

There are a number of problematic issues under the DSA that should be addressed by the EU legislator. For example, the provision on notice and action mechanism (Article 14) states that properly substantiated notices automatically give rise to actual knowledge of the content in question. As host providers only benefit from limited liability for third party content when they expeditiously remove illegal content they know of, platforms will have no other choice than to follow up by content blocking actions to escape the liability threat. Even though the DSA requires notices to elaborate on “the reasons why the information in question is illegal content” (Article 14(3)), it does not mean that the stated reason will in fact always be correct. Mistakes, even in good faith, can also happen on the side of the notifying users. As a result, attaching actual knowledge to every compliant notice may become problematic. Instead of safeguarding freedom of expression, it could lead to misuse and overblocking of highly contextual content and, if not well-balanced, could turn the Digital Services Act into a censorship machine. 

There are also open questions about how platforms should assess what is proportionate when enforcing their own terms of service, how much pressure there will be from public authorities to remove content, and whether that clashes with the freedom to receive and impart information and ideas without interference by public authority. 

For example, Article 12 provides that providers of intermediary services have to include information about content restrictions and are required to act in a “diligent, objective, and proportionate manner” when enforcing their own terms and conditions. Would platforms conduct any real proportionality test, or just use the language to justify any decision they take? Moreover, how does the requirement of proportionate enforcement interact with mandatory platform measures against both the distribution of manifestly illegal content and the issuance of manifestly unfounded notices? Under Article 20, online platforms are compelled to issue warnings to users and impose time-limited suspensions in such cases.

It is the right approach to subject the freedom of contract of platform service providers to compliance with certain minimum procedural standards. However, it is wrong to push (large) platforms into an active position and make them quasi-law enforcers under the threat of liability for third party content or high fines. If platforms have to remove accounts (“shall suspend,” Article 20); have to effectively mitigate risks (“shall put in place mitigation measures,” Article 27 - notably, Article 26 refers to freedom of expression being a protected risk); and have to inform law enforcement authorities about certain types of content (“shall promptly inform,” Article 21), there is a risk that there will not be much freedom left at some platforms to “hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.” There are reasons to doubt that the Commission’s sympathy for a co-regulatory approach in the form of EU Commission “guidelines” on how to mitigate systemic risks on online platforms (Article 27(3)) will give enough orientation to platforms for when to act and when not to act. 

It will now be up to the EU Parliament and the Council to strike a fair balance between the rights anchored in the Fundamental Rights Charter, including freedom of expression.

Christoph Schmon

An Antitrust Exemption for News Media Won’t Take Us Back to the Time Before Big Tech

2 months 3 weeks ago

Something is rotten online. Facebook and Google dominate the market for online advertising, depleting the resources needed by any other company reliant on serving digital content. For news media, the confluence of an increasingly digital world with Google and Facebook’s siphoning of online advertising revenue has been catastrophic. Unfortunately, giving news media an antitrust exemption to bargain as a group with Facebook and Google will not be the time machine that brings news media back to where it was.

Last week, the U.S. House of Representatives’ Judiciary Committee held a hearing called “Reviving Competition, Part 2: Saving the Free and Diverse Press.” There was a lot going on during the hearing, much of it irrelevant to the very real problems faced by the increasingly concentrated media ecosystem, or to the death of small, local, and independent news.

Leaving aside the detours, the real subject of the hearing was the Journalism Competition and Preservation Act, which would give an exemption to publishers and broadcasters from antitrust laws, allowing them to form a unified bloc for negotiations with tech companies. The idea is that news media is struggling—present tense. The problem is that news media has struggled, past tense. Allowing this exemption will not bring back the papers that have been shut down, the journalists who have been laid off, or unwind the media mergers that have occurred in the meantime.

During the hearing, the argument was made that this was a lifeline to keep news media afloat while more substantive changes to the law were made—changes that would decrease the power of Big Tech. The other argument made was that such an exemption would revitalize local press by giving a path to profitability. It was also stated that the exemption would be time-limited and could apply only to certain smaller publishers.

But such an exemption doesn’t answer the question of where journalists are supposed to get startup funds to build new outlets to replace the ones that are gone. It does not answer the question of just who will negotiate, as so few small outlets exist now. It does not answer the question of why the hedge funds, private equity ghouls, and giant media near-monopolies should get to reap the benefits of this new exemption when they have already benefitted from Big Tech’s ad takeover, snapping up and gutting news outlets at bargain prices. It does not propose a way to stop these companies from using whatever is negotiated under this exemption as a jumping-off point for their own negotiations.

The Australia Model

Mentioned many times during this hearing was the recent news from Australia, where Google and Facebook faced off on the home turf of News Corporation, the Murdoch empire that also controls Fox News, Sky News, and leading newspapers in many countries. There are important lessons to take away from that.

First, we have to be honest about the state of media. News is important, yes. It’s a public good. Reporters are doing something they feel called to do. However, few truly small, independent media operations exist right now. And in the case of certain companies—like the ones owned by the Murdochs or the Sulzbergers—it would be a mistake to assume the ills of the industry are actually being visited upon them, or that catering to their needs will trickle down to the rest of the journalistic ecosystem.

The fundamental innovation behind Australia’s law is that it would create a direct conduit of revenue from (explicitly) Google and Facebook to media institutions, which could engage in collective bargaining to set rates for the tech companies’ use of their material. Those institutions would include local, small, and non-profit media as well as the giants, and the law includes rules meant to ensure that all news services (including those not part of the bargaining) are treated alike.

The challenges with this approach began before it even passed into law. Alphabet struck its own deal with Murdoch; Facebook shut out (clumsily) every Australian news site from its service in protest.

The Australian proposal isn’t as bad as the EU’s “link tax” plan, which explicitly tied aid for news media to an expansionist view of copyright. Trying to use copyright to fight Big Tech doesn’t hurt them (they just put up filters), but it does make it harder for others to properly cite older reporting or to build on and comment on events happening around the world. But the Australian approach struggles with similar problems: the companies can still, it seems, separate the big players from the small. And, ultimately, the real enforcement relies on Big Tech needing external news services to profit. The tech giants’ profit center is not “stealing” news media content: it’s “stealing” its advertising. That Google in Europe, and Facebook in Australia, were willing and able to simply decline to host news media demonstrates this. And the only solution in this model for that asymmetry (as the EU eventually determined) is to compel the tech giants to carry particular companies’ news, which raises broader questions about compelling speech, free expression, and regulation of the marketplace.

The bill does have some ideas that are missing in American legislation. A call for transparency around how companies choose to promote and rank news stories is one we echo. A prohibition on using the algorithm to retaliate against companies trying to avoid their services or avoid forking over revenue to tech companies is also a smart move.  But ultimately, Australia is trying to solve a problem by freezing that problem in time. Right now, media of all kinds feels dependent on Big Tech. Only one of the consequences of that dependency is an unbalanced bargaining stance between the two when it comes to media clawing back some advertising revenue from the tech giants. Australia’s law fixes that problem, at the cost of making media explicitly and statutorily dependent on Facebook and Google. It takes two monopolistic markets and ties them together in the law, with a fake “deal” revolving around an incorrect assumption that Big Tech has profited by siphoning precious news snippets from the existing media.

The United States shouldn’t look to the Australian model to solve its problems – mainly because Australia lacks the one compelling solution that the United States possesses. The Australian government identified that the crisis in news comes from monopolistic behavior, but could not take the obvious step of breaking up the monopolies – at least on the tech side – because those companies are based elsewhere. They are, of course, based in the United States, where they can, and should, be dealt with as monopolies, with their power and scope greatly reduced.

What Are We Really Trying to Do Here?

There is a fair amount of gallows humor among journalists these days. There are jokes about how everyone will either work for the BuzzPostTimes or GoogBook. There are jokes about how newsletters are just the latest iteration of the infamous “pivot to video,” which brought down so many companies (and which Facebook should absolutely not be let off the hook for encouraging). There are a few bold experiments out there, but, it should be noted, they often operate outside the realm of Facebook and Google advertising that is the target of this bill. The days when newspapers raked in money from classifieds and advertising are clearly over. This exemption doesn’t fix that. And it doesn’t even throw a lifeline where it is most needed.

Media consolidation is reaching its zenith. As is Big Tech power. Even as part of a larger package, this proposal won’t do what it is meant to. Instead, Congress should focus its attention on making the sweeping changes to antitrust law we so desperately need. Figure out how to curb these oligopolies’ power. Think beyond just breaking them up to what regulations will prevent this from happening again. Figure out how to help news media in this century, rather than trying to return them to the last one.

Katharine Trendacosta

Additional Regulations Approved for the California Consumer Privacy Act

2 months 3 weeks ago

The California Attorney General recently published new regulations that implement the California Consumer Privacy Act (CCPA), a law that takes some important steps to empower consumer choice. What stands out most in the new regulations is the explicit prohibition of deceitful user interfaces (Section 999.315(h)) when a user exercises their CCPA right to opt out of the sale of their personal information.

“Dark patterns” are defined by Harry Brignull, the user experience (UX) researcher who coined the term, as “tricks used in websites and apps that make you buy or sign up for things that you didn’t mean to.” In this context, dark patterns can be used to undermine the CCPA’s right to opt out. The new regulation prohibits companies from burdening consumers with confusing language or unnecessary steps when they exercise that right. EFF provided comments encouraging adoption of this proposed regulation.

The CCPA does not currently mandate opt-in consent, that is, a more proactive legal rule under which a business cannot sell a consumer’s personal information unless the consumer gives permission. The current CCPA rule is opt-out, and having to click through multiple screens after the fact to opt out burdens the consumer. With an opt-out rule comes the need to stop businesses from obstructing consumers who exercise that right, which is what banning dark patterns does.

The new CCPA regulations also encourage widespread adoption of a standardized privacy icon, designed by Carnegie Mellon University’s CyLab and the University of Michigan’s School of Information, to convey the opt-out process. While a universal icon could help users see their options for exercising their CCPA rights, we hope this ongoing conversation is informed by web accessibility. Confusing language, entangled and layered user interfaces, tiny lettering, and other dark pattern tactics are all part of the larger question of making information accessible and clear to the user. Readability should be considered as well, with language crafted for everyone’s understanding. For example, EFF explicitly advocated for a ban on double negatives, a common writing tactic deployed in dark patterns.
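
To make that last point concrete, here is a minimal, purely illustrative sketch comparing a double-negative, pre-checked opt-out control with a plainly worded one. The labels, field names, and logic below are invented for this example; they are not drawn from the regulation or from any real interface.

```python
# Hypothetical illustration of the "double negative" dark pattern versus a
# plain opt-out control. All labels and field names here are invented.
from dataclasses import dataclass


@dataclass
class OptOutControl:
    label: str                     # text shown next to the checkbox
    checked_by_default: bool       # state the business pre-selects
    sells_data_when_checked: bool  # what a checked box actually means


# Dark pattern: stacked negatives plus a pre-checked box. The reader has to
# untangle "do not uncheck ... do not want ... stop selling" to opt out.
dark_pattern = OptOutControl(
    label="Do not uncheck this box if you do not want us to stop selling your data",
    checked_by_default=True,
    sells_data_when_checked=True,
)

# Clear pattern: one plain statement, unchecked by default, no nested negation.
clear_pattern = OptOutControl(
    label="Sell my personal information",
    checked_by_default=False,
    sells_data_when_checked=True,
)


def default_outcome(control: OptOutControl) -> str:
    # Data is sold by default when the pre-selected state matches the
    # "selling" state of the checkbox.
    selling = control.checked_by_default == control.sells_data_when_checked
    return f'"{control.label}" -> data {"is" if selling else "is not"} sold by default'


print(default_outcome(dark_pattern))
print(default_outcome(clear_pattern))
```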

We hope to see more consumer-empowering regulation in the future, especially where it concerns consumers who don’t want their data shared or sold in any capacity or in any context online.

Alexis Hancock

EFF’s Crowd-Sourced Atlas of Surveillance Project Honored with Award for Advancing Public’s Right to Know About Police Spying

2 months 3 weeks ago
Partnering with University of Nevada, Reno Reynolds School of Journalism Students, EFF Collects and Aggregates Data about Police Surveillance

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is pleased to announce it has received the James Madison Freedom of Information Award for Electronic Access for its groundbreaking, crowd-sourced Atlas of Surveillance, the largest-ever collection of searchable data on the use of surveillance technologies by law enforcement agencies across the country.

The Atlas, launched in July, contains data on more than 7,000 surveillance programs—including facial recognition, drones, and automated license plate readers—operated by thousands of local police departments and sheriffs' offices nationwide. With a clickable U.S. map and a searchable database of cities and technologies, the Atlas sheds light on the devices and systems being purchased locally, often without residents’ knowledge or any oversight, to surveil people and neighborhoods.
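
For readers who want to work with the data directly rather than through the clickable map, the sketch below shows one way a local export of an Atlas-style dataset might be filtered. It is only an illustration: the filename and the column names (“City,” “State,” “Technology”) are assumptions for this example, not the project’s actual schema.

```python
# Minimal sketch: counting the surveillance technologies recorded for one city
# in a hypothetical CSV export. The filename and column names are assumed for
# illustration, not taken from the Atlas of Surveillance's real data files.
import csv
from collections import Counter


def technologies_in_city(path: str, city: str, state: str) -> Counter:
    """Count how many entries of each technology type a city has."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["City"].lower() == city.lower() and row["State"] == state:
                counts[row["Technology"]] += 1
    return counts


if __name__ == "__main__":
    for tech, n in technologies_in_city("atlas_export.csv", "Reno", "NV").most_common():
        print(f"{tech}: {n}")
```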

EFF shares the award, presented by the Northern California Chapter of the Society of Professional Journalists, with students and faculty members at University of Nevada, Reno Reynolds School of Journalism (RSJ). Over the course of two years, hundreds of students have researched and collected public records, news articles, and open datasets to build the Atlas of Surveillance database. The project also compiles for the first time research collected by news organizations, nonprofits, and academics, including the ACLU and the Center for the Study of the Drone at Bard College.

“Law enforcement agencies around the country have collected more and more advanced surveillance systems to gather information en masse on the public. But details about which police departments have acquired what systems had never been aggregated before into a single place,” said EFF Director of Investigations Dave Maass, who leads the project. “When the Reynolds School approached EFF about working together with as many as 150 students each semester on a project, the Atlas of Surveillance was born.”

“Thanks to Dave Maass’ leadership and Reynolds School of Journalism students’ enthusiasm, we were able to visualize the data and present the seriousness of the issue,” said Gi Yun, Director of the Center for Advanced Media Studies and Professor at RSJ. “It is our hope to be able to continue the project and provide valuable information to the public.”

Students working on the Atlas have also generated three special reports, including a comprehensive guide to surveillance along the U.S. border, an investigation into the growing trend of real-time crime centers, and, most recently, a deep dive into surveillance on university campuses. “Scholars Under Surveillance: How Campus Police Use High Tech to Spy on Students,” released in February, uncovered records showing public safety offices are acquiring a surprising number of surveillance technologies more common to metropolitan areas that experience high levels of violent crime.

The project has served as a training ground for the next generation of reporters and advocates—students learned how to gather data, file FOIA requests, search public meeting documents, and read news articles about surveillance with a skeptical eye. In turn, the Atlas of Surveillance now serves as a key resource for local and national reporters investigating law enforcement in the wake of last summer’s marches and the Black-led movement against police violence.

“Police have long used surveillance tactics to observe and undermine civil rights movements, from the 1960s to the Black Panthers to Black Lives Matter,” said Taylor Johnson, a 2021 senior at Reynolds who is building datasets of surveillance technology in Atlanta, Detroit, and Pittsburgh in collaboration with EFF and Data 4 Black Lives. “It is our role to finally right the wrongs of the past and do better. That begins with transparency, which the Atlas of Surveillance provides. Through transparency and advocacy, blocks built up by racism can begin to be dismantled.”

Reynolds students who put more than 120 hours into the project include Johnson, Madison Vialpando, Christian Romero, Matthew King, Dominique Hall, Javier Hernandez, Jessica Romo, Hailey Rodis, Olivia Ali, Dylan Kubeny, and Jayme Sileo. In addition, student volunteers Tiffany Jing and Zoe Wheatcroft were invaluable in getting this project off the ground.

“Police surveillance technology poses an incredible threat to our 4th Amendment rights, so tracking the use of this technology is vital to protecting those rights,” said Madison Vialpando, a 2020 RSJ graduate who worked on the project over three semesters and is now pursuing a master’s in cybersecurity. “Working on the Atlas of Surveillance project with the Electronic Frontier Foundation has been one of the most rewarding and profound experiences that I have ever had.”

The annual James Madison Freedom of Information Awards recognize Northern California individuals and organizations who have made significant contributions to advancing freedom of information and/or expression in the spirit of James Madison, the creative force behind the First Amendment. SPJ NorCal presents the awards near Madison’s birthday (March 16) and during National Sunshine Week.

For more on the awards: https://spjnorcal.org/2021/03/16/spj-norcal-36th-annual-james-madison-freedom-of-information-awards/

Contact: Dave Maass, Director of Investigations, dm@eff.org
Karen Gullo