Digital Rights Updates with EFFector 33.5

3 weeks 6 days ago

Want the latest news on your digital rights? Then you're in luck! Version 33, issue 5 of EFFector, our monthly-ish email newsletter, is out now! Catch up on rising issues in online security, privacy, and free expression with EFF by reading our newsletter or listening to the new audio version below.

Listen on YouTube

EFFECTOR 33.05 - Apple's Plan to "Think Different" about Encryption opens a backdoor to your private life

Make sure you never miss an issue by signing up to receive EFFector by email as soon as it's posted! Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

When It Comes to Antitrust, It’s All Connected

3 weeks 6 days ago

A knife was stuck in antitrust in the 1980s and it bled out for the next 40 years. By the 1990s, the orthodox view of antitrust went like this: horizontal monopolies are bad, but vertical monopolies are efficient. In other words, it was bad for consumers when one company was the single source for a good or service, but if a company wanted to own every step in the chain, that was fine. Good, even.

Congress is concerned with Big Tech and has a number of bills aimed at keeping those companies in check. But just focusing on Google, Apple, Facebook, Amazon, and Microsoft won’t fix the problem we find ourselves in. Monopoly is at the heart of today’s business model. For everything.

In tech startups, companies run in the red for years, seeking to flood the zone, undercut the prices of their competitors, and buy up newcomers, until they are the last ones standing. For years, one of Uber's main goals was the destruction of Lyft. A series of leaks and PR disasters kept Uber from succeeding, but it is not the only company pursuing this tactic. Think about how many food delivery apps there used to be. And now think about how many have been bought up and merged with each other.

For internet service providers (ISPs), being a local monopoly is the goal. When Frontier went bankrupt, the public filings revealed that the ISP saw its monopoly territory as a bankable asset. That’s because, as internet access becomes a necessity for everyday life, a monopoly can guarantee a profit. They can also gouge us on prices, deliver worse service for more money, and avoid upgrading their services since there is no better option for consumers to choose.

In the world of books, movies, music, and television there are vanishingly few suppliers. Just the other week, publisher Hachette bought Workman Publishing. The fewer publishers there are, the more power they have to force libraries and schools into terrible contracts regarding e-books, giving second-class access to a public good. Disney continues to buy up properties and studios. After buying 21st Century Fox, Disney had 38% of the U.S. box office share in 2019. That means that over a third of the movie market reflected a single company’s point of view.

The larger these companies get, the harder it is for anyone to compete with them. The internet promised to open up opportunities, and businesses’ defensive move was to grow too big to compete with.

That’s the horizontal view. The vertical view is equally distressing. If you want an audiobook, Amazon has locked in exclusive deals for many of the most desired titles. If you want to watch movies or TV digitally, you are likely watching it on a subscription streaming service owned by the same company that made the content. And in the case of Comcast and AT&T, you could be getting it all on a subpar, capped internet service that you pay too much for and is, again, owned by the same company that owns the streaming service and the content.

The chain is too long and the links too big. In order to actually, permanently fix the problem being caused by a lack of competition in technology, we need laws that apply to all of these facets, not simply the social media services.

We've already seen the new administration shift its approach to consider harms to ordinary people beyond just paying higher prices. Now let's see it move beyond Facebook, Google, Apple, and Amazon to include major ISPs, other abusive monopolists, and companies that wield monopoly power over narrower but important facets of the internet economy.

Katharine Trendacosta

Chicago Inspector General: Police Use of ShotSpotter Is Not Justified by Its Crime-Fighting Utility

4 weeks 1 day ago

The Chicago Office of the Inspector General (OIG) has released a highly critical report on the Chicago Police Department's use of ShotSpotter, a surveillance technology that relies on a combination of artificial intelligence and human "acoustic experts" to purportedly identify and locate gunshots based on a network of high-powered microphones located on some of the city's streets. The OIG report finds that "police responses to ShotSpotter alerts rarely produce evidence of a gun-related crime, rarely give rise to investigatory stops, and even less frequently lead to the recovery of gun crime-related evidence during an investigatory stop." This indicates that the technology is inaccurate and ineffective at fighting gun crime. This finding is based on the OIG's quantitative analysis of more than 50,000 records over a 17-month period from the Chicago Police Department (CPD) and the city's 911 dispatch center.

Even worse, the OIG report finds a pattern of CPD officers detaining and frisking civilians—a dangerous and humiliating intrusion on bodily autonomy and freedom of movement—based at least in part on “aggregate results of the ShotSpotter system.” This is police harassment of Chicago’s already over-policed Black community, and the erosion of the presumption of innocence for people who live in areas where ShotSpotter sensors are active. This finding is based on the OIG’s qualitative analysis of a random sample of officer-written investigatory stop reports (ISRs).

The scathing report comes just days after the AP reported that a 65-year-old Chicago man named Michael Williams was held for 11 months in pre-trial detention based on scant evidence produced by ShotSpotter. Williams' case was dismissed two months after his defense attorney subpoenaed ShotSpotter. This and another recent report also show how ShotSpotter company officials have changed the projected location and designation of supposed gunshots in a way that makes them more consistent with police narratives.

There are more reasons why EFF opposes police use of ShotSpotter. The technology is all too often over-deployed in majority Black and Latinx neighborhoods. Also, people in public places—for example, having a quiet conversation on a deserted street—are often entitled to a reasonable expectation of privacy, without microphones unexpectedly recording their conversations. But in at least two criminal trials, one in Massachusetts and one in California, prosecutors tried to introduce audio of voices from these high-powered microphones. In the California case, People v. Johnson, the court admitted it into evidence. In the Massachusetts case, Commonwealth v. Denison, the court did not, ruling that a recording of “oral communication” is prohibited “interception” under the Massachusetts Wiretap Act.

Most disturbingly, ShotSpotter endangers the lives and physical safety of people who live in the neighborhoods to which police are dispatched based on false reports of a gunshot. Because of the uneven deployment of ShotSpotter sensors, these residents are disproportionately Black and Latinx. An officer expecting a civilian with a gun is more likely to draw and fire their own gun, even if there was in fact no gunshot. In the words of the Chicago OIG: “there are real and potential costs associated with use of the system, including … the risk that CPD members dispatched as a result of a ShotSpotter alert may respond to incidents with little contextual information about what they will find there—raising the specter of poorly informed decision-making by responding members.”

The Chicago OIG report is also significant because it signals growing municipal skepticism of ShotSpotter technology. We hope more cities will join Charlotte, North Carolina, and San Antonio, Texas, in canceling their contracts with ShotSpotter—which is currently deployed in over 100 U.S. cities. Chicago itself has just renewed its ShotSpotter contract, which cost the city $33 million between August 20, 2018 and August 19, 2021.

According to EFF's Atlas of Surveillance, at least 100 cities in the United States use some kind of acoustic gunshot detection, including ShotSpotter.

The Technology Is Not Effective at Fighting Gun Violence

The OIG report's findings are very clear. Despite what the ShotSpotter marketing team would have you believe about their technology's effectiveness, the vast majority of ShotSpotter alerts cannot be connected with any verifiable shooting incident. According to the OIG, just 9% of ShotSpotter alerts with a reported disposition (4,556 of 41,830) indicate evidence of a gun-related criminal offense. Similarly, just 2% of all ShotSpotter alerts (1,056 of 50,176) correlate to an officer-written ISR.

Likewise, a 2021 report by the MacArthur Justice Center, quoted by the OIG, found that 86% of incidents in which CPD officers responded to a ShotSpotter alert did not result in the completion of a case report. In only 9% of CPD responses to ShotSpotter alerts is there any indication that a gun-related criminal offense occurred.

As Deputy Inspector General for Public Safety Deborah Witzburg said about this report, “It’s not about technological accuracy, it’s about operational value.” She added:

"If the Department is to continue to invest in technology which sends CPD members into potentially dangerous situations with little information—and about which there are important community concerns—it should be able to demonstrate the benefit of its use in combatting violent crime. The data we analyzed plainly doesn't do that. Meanwhile, the very presence of this technology is changing the way CPD members interact with members of Chicago's communities. We hope that this analysis will equip stakeholders to make well-informed decisions about the ongoing use of ShotSpotter technology."

The Technology Is Used to Justify Illegal Police Harassment and Erode the Presumption of Innocence 

Cross-referencing the officer-written ISRs with ShotSpotter alerts, the OIG found a pattern of police conducting stop-and-frisks of civilians based at least in part on aggregate ShotSpotter data. This means police are deciding who to stop based on their supposed proximity to large numbers of alerts. Even when there are no specific alerts the police are responding to, the concentration of previous alerts in a specific area often works its way into police justification for stopping and searching a person.

The Fourth Amendment limits police stop-and-frisks. In Terry v. Ohio (1968), the Supreme Court held that police need “reasonable suspicion” of crime to initiate a brief investigative detention of a suspect, and need reasonable suspicion of weapon possession to conduct a pat-down frisk of that suspect. One judicially-approved factor that can give rise to reasonable suspicion, in conjunction with other factors, is a suspect’s presence in a so-called “high crime area.”

In light of the OIG and MacArthur reports, which show that the overwhelming majority of ShotSpotter “alerts” do not lead to any evidence of a gun, aggregate ShotSpotter data cannot reasonably be used as evidence that an area is high in crime. Therefore, courts should hold that it violates the Fourth Amendment for police to stop or frisk a civilian based on any consideration of aggregate ShotSpotter alerts in the area.

Specific cases highlighted in the OIG report demonstrate the way that aggregate ShotSpotter data, used as a blank check for stops and searches, erodes civil liberties and the presumption of innocence. In one case, for example, police wrongly used the prevalence of ShotSpotter alerts in the area, plus a bulge in a person's hoodie pocket, to stop and pat them down, after they exercised their First Amendment right to give police the middle finger.

Cities Should Stop Using ShotSpotter

Far too often, police departments spend tens of millions of dollars on surveillance technologies that endanger civilians, disparately burden BIPOC, and invade everyone's privacy. Some departments hope to look proactive and innovative when assuaging public fears of crime. Others seek to justify the way they are already policing, by "tech washing" practices and deployments that result in racial discrimination. As with predictive policing, police departments use ShotSpotter, and its aura as a "cutting-edge" Silicon Valley company, to claim that their failed age-old tactics are actually new and innovative. All the while, no one is getting any safer.

The Chicago OIG report demonstrates that ShotSpotter “alerts” are unreliable and contribute to wrongful stop-and-frisks. It may not recommend that cities stop using ShotSpotter—but EFF certainly will, and we think that is the ultimate lesson that can be learned from this report. 

Matthew Guariglia

OnlyFans Content Creators Are the Latest Victims of Financial Censorship 

4 weeks 1 day ago

Update (8/26/21): Victory! OnlyFans has reversed course and suspended its plans to ban sexually explicit content, saying it has “secured assurances necessary” from banking partners and payout providers to enable it to continue to serve all creators. We hope that financial institutions will take note that it is unacceptable to censor constitutionally protected legal speech by threatening to shut down access to financial services. EFF continues to actively fight financial censorship.  

OnlyFans recently announced it would ban sexually explicit content, citing pressure from “banking partners and payout providers.” This is the latest example of a troubling pattern of financial intermediaries censoring constitutionally protected legal speech by shutting down accounts—or threatening to do so.  

OnlyFans is a subscription site that allows artists, performers and other content creators to monetize their creative works—and it has become a go-to platform for independent creators of adult content. The ban on sexually explicit content has been met by an outcry from many creators who have used the platform to safely earn an income in the adult industry.

This is just the latest example of censorship by financial intermediaries. Intermediaries have cut off access to financial services for independent booksellers, social networks, adult video websites, and whistleblower websites, regardless of whether those targeted were trading in First Amendment-protected speech. By cutting off these critical services, financial intermediaries force businesses to adhere to their moral and political standards.  

It is not surprising that, faced with the choice of losing access to financial services or banning explicit content, OnlyFans would choose its payment processors over its users. For many businesses, losing access to financial services seriously disrupts operations and may have existential consequences. 

As EFF has explained, access to the financial system is a necessary precondition for the operations of nearly every Internet intermediary, including content hosts and platforms. The structure of the electronic payment economy makes these payment systems a natural chokepoint for controlling online content. Indeed, in one case, a federal appeals court analogized shutting down financial services for a business to “killing a person by cutting off his oxygen supply.” In that case, Backpage.com, LLC v. Dart, the Seventh Circuit found that a sheriff had violated the First Amendment by strongly encouraging payment processors to cut off financial services to a classified advertising website.  

There has been some movement in Washington to fight financial censorship. Earlier this year, the Office of the Comptroller of the Currency finalized its Fair Access to Financial Services rule, which would have prevented banks from refusing to serve entire classes of customers they find politically or morally unsavory. But the rule was put on hold with the change of administrations in January.

Content moderation is a complex topic, and EFF has written about the implications of censorship by companies closer to the bottom of the technical stack. But content creators should not lose their financial lifelines based on the whims and moral standards of a few dominant and unaccountable financial institutions. 

Marta Belcher

ACLU Advocate Reining in Government Use of Face Surveillance, Champion of Privacy Rights Research, and Data Security Trainer Protecting Black Communities Named Recipients of EFF’s Pioneer Award

4 weeks 2 days ago
Virtual Ceremony September 16 Will Honor Kade Crockford, Pam Dixon, and Matt Mitchell

San Francisco—The Electronic Frontier Foundation (EFF) is honored to announce that Kade Crockford, Director of the Technology for Liberty Program at the ACLU of Massachusetts, Pam Dixon, executive director and founder of World Privacy Forum, and Matt Mitchell, founder of CryptoHarlem, are recipients of the 2021 Pioneer Award for their work, in the U.S. and across the globe, uncovering and challenging government and corporate surveillance on communities.

The awards will be presented at a virtual ceremony on September 16 starting at 5 pm PT. The keynote speakers this year will be science fiction authors Annalee Newitz and Charlie Jane Anders, hosts of the award-winning podcast "Our Opinions Are Correct." The ceremony will stream live and free on Twitch, YouTube, Facebook, and Twitter. Audience members are encouraged to make a suggested donation of $10. EFF is supported by small donors around the world, and you can become an official member at https://eff.org/PAC-join. To register for the ceremony: https://www.eff.org/PAC-register

Activist Kade Crockford is a leader in educating the public about and campaigning against mass electronic surveillance. At the ACLU of Massachusetts, they direct the Technology for Liberty Project, which focuses on ensuring that technology strengthens rights to free speech and expression and is not used to impede our civil liberties, especially privacy rights. Crockford focuses on how surveillance systems harm vulnerable populations targeted by law enforcement—people of color, Muslims, immigrants, and dissidents. Under Crockford's leadership, the Technology for Liberty Project has used public record requests to shine a light on how state and local law enforcement agencies use technology to surveil communities. Crockford oversaw the filing of over 400 public record requests in 2019 and 2020 seeking information about the use of facial recognition across the state, collecting over 1,400 government documents. They led successful efforts in Massachusetts to organize local support for bans on government use of face surveillance, convincing local police chiefs that the technology endangered privacy in their communities. Crockford worked with seven Massachusetts cities to enact preemptive bans against the technology and, in June 2020, working with youth immigrants' rights organizers, succeeded in getting facial recognition banned in Boston, the second largest city in the world to do so at the time. Massachusetts lawmakers have credited Crockford for shepherding efforts to pass a police reform bill that reins in how police in the state can use facial recognition. They also led a project to file public record requests with every Massachusetts District Attorney and the state Attorney General to reveal how local prosecutors were using administrative subpoenas, secretly and with no judicial review or oversight, to obtain people's cell phone and internet records. Kade has written for The Nation, The Guardian, The Boston Globe, WBUR, and many other publications, and runs the dedicated privacy website www.PrivacySOS.org.

Author and researcher Pam Dixon has championed privacy for more than two decades and is a pioneer in examining, documenting, and analyzing how data is utilized in ways that impact multiple aspects of our lives, from finances and health information to identity, among other areas. Dixon founded the World Privacy Forum in 2003, a leading public interest group researching consumer privacy and data, with a focus on documenting and analyzing how individuals' data interacts within complex data ecosystems and the consequences of those interactions. She has worked extensively on privacy and data governance in the U.S., EU, India, Africa, and Asia. Dixon worked in India for a year researching and publishing peer-reviewed research on India's Aadhaar identity system, which was cited twice in the Supreme Court of India's landmark Aadhaar decision. She works with the UN and WHO on data governance, and with the OECD in its One AI Expert Group. She has been named a global leader in digital identity, a recognition that included her work in Africa on identity ecosystems. She is co-chair of the Data for Development Workgroup at the Center for Global Development, where she is working to bring attention to inequities faced by less wealthy countries with fragile data infrastructures when dealing with data privacy standards created by, and reflecting the priorities of, wealthy countries. Dixon co-authored a report in 2021 calling for a more inclusive approach to data governance and privacy standards in low- and middle-income countries. Her ongoing work in the area of health privacy is extensive, including her work bringing medical identity theft to public attention for the first time, which led to the creation of new protections for patients. She has presented her work on privacy and complex data ecosystems to the Royal Society, and most recently to the National Academy of Sciences.

Matt Mitchell is the founder of CryptoHarlem and a tech fellow for the BUILD program at the Ford Foundation. He is recognized as a leading voice in protecting Black communities from surveillance. Under his leadership, CryptoHarlem provides workshops on digital surveillance and a space for Black people in Harlem, who are over policed and heavily surveilled, to learn about digital security, encryption, privacy, cryptology tools, and more. He is a well-known security researcher, operational security trainer, and data journalist whose work raising awareness about privacy, providing tools for digital security, and mobilizing people to turn information into action has broken new ground. His work, Mitchell says, is informed by the recognition that there's a digital version of "stop and frisk" which can be more dangerous for people of color than the physical version, and that using social media has unique risks for the Black community, which is subject to many forms of street level and online surveillance. CryptoHarlem has worked with the Movement for Black Lives to create a guide for protestors, organizers, and activists during the 2020 protests against police brutality following the murder of George Floyd. Last year he was named to the WIRED 25, a list of scientists, technologists, and artists working to make things better. In 2017 he was selected as a Vice Motherboard Human of the Year for his work protecting marginalized groups. As a technology fellow at the Ford Foundation, Mitchell develops digital security training, technical assistance offerings, and safety and security measures for the foundation's grantee partners. Mitchell has also worked as an independent digital security/countersurveillance trainer for media and humanitarian-focused private security firms. His personal work focuses on marginalized, aggressively monitored, over-policed populations in the United States. Previously, Mitchell worked as a data journalist at The New York Times and a developer at CNN, Time Inc, NewsOne/InteractiveOne/TVOne/RadioOne, AOL/Huffington Post, and Essence Magazine.

“EFF has been fighting mass surveillance since its founding 31 years ago, and we’ve seen the stakes rise as corporations, governments, and law enforcement increasingly use technology to gather personal information, pinpoint our locations, secretly track our online activities, and target marginalized communities,” said EFF Executive Director Cindy Cohn. “Our honorees are working across the globe and on the ground in local communities to defend online privacy and provide information, research, and training to empower people to defend themselves. Technology is a double-edged sword—it helps us build community, and can also be used to violate our rights to free speech and to freely associate with each other without government spying. We are honoring Kade Crockford, Pam Dixon, and Matt Mitchell for their vision and dedication to the idea that we can challenge and disrupt technology-enabled surveillance.”

Awarded every year since 1992, EFF's Pioneer Awards recognize the leaders who are extending freedom and innovation on the electronic frontier. Previous honorees have included Malkia Cyril, William Gibson, danah boyd, Aaron Swartz, and Chelsea Manning.

For past Pioneer Award Winners:
https://www.eff.org/pioneer/past-winners

To register for this event:
https://www.eff.org/PAC-register

Contact: Karen Gullo, Analyst and Senior Media Relations Specialist, karen@eff.org
Karen Gullo

New Writing and Management Role on EFF's Fundraising Team

1 month ago

Calling all writers! If you are passionate about civil liberties and technology, we have an awesome opportunity for you. We are hiring for a newly-created role of Associate Director of Institutional Support. This senior role will manage the messaging and strategy behind EFF’s foundation grants and corporate support. It’s a chance to join the fun, fearless team that introduces funders to the work EFF does. The role supervises one direct report and will ideally work from our San Francisco office. 

EFF has amazing benefits and offers a flexible work environment. We also prioritize diversity of life experience and perspective, and intentionally seek applicants from a wide range of backgrounds. 

If you’re a storyteller and strategist who loves to roll up your sleeves on grant applications and you thrive in collaborative environments, this could be the perfect role for you.

If you’re interested, please apply today! We’re asking for applicants to get their applications in by September 4, 2021. Want to learn more about the role? Send questions to rainey@eff.org.

Click here to apply, and please help spread the word by sharing this role on social media. 

rainey Reitman

EFF Joins Global Coalition Asking Apple CEO Tim Cook to Stop Phone-Scanning

1 month ago

EFF has joined the Center for Democracy and Technology (CDT) and more than 90 other organizations to send a letter urging Apple CEO Tim Cook to stop the company’s plans to weaken privacy and security on Apple’s iPhones and other products.

JOIN THE NATIONWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES

“Though these capabilities are intended to protect children and to reduce the spread of child sexual abuse material (CSAM), we are concerned that they will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children,” the letter states. 

As we've explained in Deeplinks blog posts, Apple's planned phone-scanning system opens the door to broader abuses. It decreases privacy for all iCloud photo users, and the parental notification system is a shift away from strong end-to-end encryption. It will tempt liberal democratic regimes to increase surveillance, and it will likely bring even greater pressure from regimes that already have online censorship ensconced in law.

We’re proud to join with organizations around the world in opposing this change, including CDT, ACLU, PEN America, Access Now, Privacy International, Derechos Digitales, and many others. If you haven’t already, please sign EFF’s petition opposing Apple’s phone surveillance. 

JOIN THE NATIONWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES


Joe Mullin

Illinois Bought Invasive Phone Location Data From Banned Broker Safegraph

1 month ago

The Illinois Department of Transportation (IDOT) purchased access to precise geolocation data about over 40% of the state's population from Safegraph, the controversial data broker recently banned from Google's app store. The details of this transaction are described in publicly available documents obtained by EFF.

In an agreement signed in January 2019, IDOT paid $49,500 for access to two years’ worth of raw location data. The dataset consisted of over 50 million “pings” per day from over 5 million monthly-active users. Each data point contained precise latitude and longitude, a timestamp, a device type, and a so-called “anonymized” device identifier.

Excerpt from agreement describing data provided by Safegraph to IDOT

Taken together, these data points can easily be used to trace the precise movements of millions of identifiable people. Although Safegraph claimed its device identifiers were “anonymized,” in practice, location data traces are trivially easy to link to real-world identities.
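To illustrate why so-called "anonymized" identifiers offer little protection, here is a minimal sketch in Python of the kind of analysis a buyer of such data could run: find where each device identifier most often appears overnight, which is usually its owner's home. The field names and CSV layout below are hypothetical stand-ins for a raw ping feed like the one described above, not Safegraph's actual schema.

```python
# Minimal sketch: inferring a likely home location from "anonymized" pings.
# Assumes a CSV with columns device_id, latitude, longitude, timestamp_utc
# (hypothetical field names; real broker schemas vary). Times are treated as
# UTC for simplicity; a real analysis would convert to local time.
import csv
from collections import Counter, defaultdict
from datetime import datetime, timezone

def likely_home_locations(path):
    """Return each device's most common nighttime location, rounded to ~100 m."""
    nighttime = set(range(22, 24)) | set(range(0, 6))  # 10 pm to 6 am
    counts = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromtimestamp(int(row["timestamp_utc"]), tz=timezone.utc)
            if ts.hour in nighttime:
                # Rounding to 3 decimal places clusters pings within roughly 100 m.
                cell = (round(float(row["latitude"]), 3),
                        round(float(row["longitude"]), 3))
                counts[row["device_id"]][cell] += 1
    return {device: cells.most_common(1)[0][0] for device, cells in counts.items()}

# A single "home cell" per pseudonymous device, joined against property or voter
# records, is typically enough to put a name to the identifier.
# for device_id, home in likely_home_locations("pings.csv").items():
#     print(device_id, home)
```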

In a response to a public records request, IDOT said that it did not store or process the data directly; instead, it hired contracting firm Resource Systems Group, Inc (RSG) to analyze the data on its behalf. The contracts with RSG and Safegraph are part of a larger effort by IDOT to create a “statewide travel demand model.” IDOT intends to use this model to analyze trends in travel across the state and project future growth.

An RSG slide summarizes the volume of data acquired from Safegraph

As smartphones have proliferated, governments around the country increasingly rely on granular location data derived from mobile apps. Federal law enforcement, military, and immigration agencies have garnered headlines for purchasing bulk phone app location data from companies like X-Mode and Venntel. But many other kinds of government agencies also patronize location data brokers, including the CDC, the Federal Highway Administration, and dozens of state and local transportation authorities. 

Safegraph discloses that it acquires location data from smartphone apps, other data brokers, and government agencies, but not which ones. Since it’s extremely difficult to determine which mobile applications transmit data to particular data brokers (and often impossible to know which data brokers sell data to each other), it is highly likely that the vast majority of users whom Safegraph tracks are unaware of their inclusion in its dataset.

“It is a lot of data”

IDOT filed an initial Procurement Justification seeking raw location data from smartphone apps in 2018. In the request, IDOT laid out the characteristics of the dataset it intended to buy. The agency specifically requested "disaggregate[d] (device-specific)" data from within Illinois and a "50 mile buffer of the state." It wanted more than 1.3 million monthly active users, or at least 10% of the state's population, with an average of 125 location pings per day from each user. IDOT also requested that the GPS pings be accurate to within 10 meters on average.

Safegraph's dataset generally exceeded IDOT's requirements. IDOT wanted to monitor at least 10% of the state's population, and Safegraph offered 42%. Also, while IDOT only requested one month's worth of data for $50,000, Safegraph offered two years of data for the same price: one year of historical data, plus one year of new data "updated at a regular cadence." As a result, IDOT received precise location traces for more than 5 million people, for two years, for less than a penny per person. On the other hand, Safegraph was only able to provide an average of 56 pings per day, less than the requested 125. But as the company assured the agency, that still represented over 50 million data points per day—to quote the agreement, "It is a lot of data."

Excerpt from the January 2019 agreement explaining Safegraph’s dataset

Who is Safegraph?

Safegraph is led by Auren Hoffman, a veteran of the data broker industry. In 2006, he founded Rapleaf, a controversial company that aimed to quantify the reputation of users on platforms like eBay by linking their online and offline activity into a single profile. Over time, Rapleaf evolved into a more traditional data broker. It was later acquired by TowerData, a company that sold behavioral and demographic data tied to email addresses. In 2012, Hoffman left to run Rapleaf spinoff LiveRamp, an "identity resolution" and marketing data company that was bought by data broker titan Acxiom in 2014. In 2016, Hoffman departed Acxiom to found Safegraph.

Early on, Safegraph sold bulk access to raw geolocation data through its “Movement Panel” product. It collected data via third-party code embedded directly in apps, as well as from the “bidstream.” Gathering bidstream data is a controversial practice that involves harvesting personal information from billions of “bid requests” broadcast by ad networks during real-time bidding.

In 2019, Safegraph spun off a sister brand, Veraset. Since then, Safegraph has tried to present a marginally more privacy-conscious image on its own website: the company’s “products” page mainly lists services that aggregate data about places, not individual devices. Safegraph says it acquires much of its location data from Veraset, thus delegating the distasteful task of actually collecting the data to its smaller sibling. (The exact nature of the relationship between Safegraph and Veraset is unclear.) 

Meanwhile, Veraset appears to have inherited the main portion of Safegraph's raw data-selling business, including the "Movement Data" product that IDOT purchased. Veraset sells bulk, precise location data about individual devices to governments, hedge funds, real-estate investors, advertisers, other data brokers, and more. On the data broker clearinghouse Datarade, Veraset boasts that it has "the largest, deepest, and most broadly available movement dataset" for the United States. It also offers samples of precise GPS traces tied to advertising IDs. Neither Safegraph nor Veraset discloses the sources of its data beyond vague categories like "mobile applications" and "data compilers."

One of many IDOT data relationships

IDOT’s purchase from Safegraph was part of a larger project by the agency to model individuals’ transportation patterns. IDOT also worked with HERE Data LLC, another location data broker, and Replica, the company spun off of Google’s Sidewalk Labs. According to IDOT, HERE acquires location data primarily from vehicle navigation services. HERE is owned by a consortium of automakers including BMW, Volkswagen, and Mercedes, and gathers data from connected vehicles under those brands. Replica has been cagey about its data sources, but reports using “mobile location data” as well as “private” sources for real estate and credit transactions. 

As noted above, IDOT did not process the data directly. Instead, it shared the raw data with RSG, which was tasked with deriving useful insights for the transportation agency. A memo from RSG to IDOT, dated June 19, 2018, specifically requested that IDOT purchase bulk location data gathered from smartphone apps for RSG to analyze. RSG is a prolific consultant in transportation planning. Its website claims it has worked with “most” major transportation agencies in the U.S. and lists the Federal Highway Administration, the U.S. Department of Transportation, the NY Metropolitan Transportation Authority, the Florida Department of Transportation, and many others as clients.

A Toxic Pipeline

It is no comfort that IDOT did not acquire or process the raw data itself. Its payment to Safegraph normalizes and props up the dangerous market for phone app location data—harvested from millions of Illinois residents who never seriously considered that this sensitive data about them was being collected, aggregated, and shared.

This particular brand of data-sharing is a growing trend around the country. Data brokers vacuum up granular location data from users' phones with no accountability, and state and local governments help them monetize it. In some cases, agencies mandate that tech companies share traffic data, as in the case of ride-sharing. Over the last decade, this toxic pipeline has aligned government interests with data brokers', making it less likely that those same governments will pass laws that crack down on the corporate exploitation of personal data.

Federal laws (like the Fourth Amendment) and state laws (like California’s Electronic Communications Privacy Act) prevent governments from seizing sensitive personal information from personal devices or companies without a warrant. But many government agencies claim that no laws restrict them from purchasing that same data on the open market. We disagree: laws that protect our data privacy from government surveillance have no such “bill me later” exception from the warrant requirement. We expect courts will reject this governmental overreach (unless police evade judicial review by means of evidence laundering). In the meantime, we support legislation to ban such purchases, including the Fourth Amendment Is Not For Sale Act. We also urge app stores to kick out apps that harvest users’ location data—just as Google kicked out Safegraph.

When data flows from a broker to a government transportation agency, this greatly increases the likelihood of further data flow to law enforcement or immigration agencies. This sort of precise, identifiable location data needs far stronger protections at every level—whether in the hands of governments or private entities. But at the moment, third-party aggregators can and do sell their data to government agencies with near-zero accountability. 

IDOT and SafeGraph might argue that the agency is just obtaining traffic patterns. But the data used for these traffic patterns sheds light on all sorts of private activity—from attendance at a protest and trips to hospitals or churches to where you eat lunch and with whom. Even if it’s done for supposedly innocuous ends, the acquisition of large quantities of granular location data about people is too dangerous.

Agencies tempted to use big data about real people should acquire the minimum information necessary to accomplish their goals. Governments must demand detailed information on the provenance of any personal data that they handle, and refuse to do business with companies like Safegraph that buy, sell, or aggregate sensitive phone app location data from users who have not provided real consent to its collection. The interlocking industries of ad tech and data brokers are responsible for rampant privacy harms, and civic governments must not “green wash” these harms in the name of energy efficiency or transportation planning. As a society, we need safeguards in place to ensure that partnerships between tech and government do not cost us more than we gain.

Bennett Cyphers

How LGBTQ+ Content is Censored Under the Guise of "Sexually Explicit"

1 month ago

The latest news from Apple—that the company will open up a backdoor in its efforts to combat child sexual abuse imagery (CSAM)—has us rightly concerned about the privacy impacts of such a decision.

JOIN THE NATIONWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES

As always, some groups will be subject to potentially more harm than others. One of the features of Apple’s new plan is designed to provide notifications to minor iPhone users who are enrolled in a Family Plan when they either receive or attempt to send a photo via iMessage that Apple’s machine learning classifier defines as “sexually explicit.” If the minor child is under 13 years of age and chooses to send or receive the content, their parent will be notified and the image saved to the parental controls section of their phone for the parent to view later. Children between 13-17 will also receive a warning, but the parent will not be notified.

While this feature is intended to protect children from abuse, Apple doesn’t seem to have considered the ways in which it could enable abuse. This new feature assumes that parents are benevolent protectors, but for many children, that isn't the case: parents can also be the abuser, or may have more traditional or restrictive ideas of acceptable exploration than their children. While it's understandable to want to protect children from abuse, using machine learning classifiers to decide what is or is not sexual in nature may very well result in children being shamed or discouraged from seeking out information about their sexuality.

As Apple’s product FAQ explains, the feature will use on-device machine learning to determine which content is sexually explicit—machine learning that is proprietary and not open to public or even civil society review.

The trouble with this is that there’s a long history of non-sexual content—and particularly, LGBTQ+ content—being classified by machine learning algorithms (as well as human moderators) as “sexually explicit.” As Kendra Albert and Afsaneh Rigot pointed out in a recent piece for Wired, "Attempts to limit sexually explicit speech tend to (accidentally or on purpose) harm LGBTQ people more."

From filtering software company Netsweeper to Google News, Tumblr, YouTube and PayPal, tech companies don’t have a good track record when it comes to differentiating between pornography and art, educational, or community-oriented content. A recent paper from scholar Ari Ezra Waldman demonstrates this, arguing that "content moderation for 'sexual activity' is an assemblage of social forces that resembles oppressive anti-vice campaigns from the middle of the last century in which 'disorderly conduct', 'vagrancy', 'lewdness', and other vague morality statutes were disproportionately enforced against queer behavior in public."

On top of that, Apple itself has a history of over-defining "obscenity." Apple TV has limited content for being too "adult," and its App Store has placed prohibitions on sexual content—as well as on gay hookup and dating apps in certain markets, such as China, Saudi Arabia, the United Arab Emirates, and Turkey.

Thus far, Apple says that their new feature is limited to “sexually explicit” content, but as these examples show, that’s a broad area that—without clear parameters—can easily catch important content in the net.

Right now, Apple’s intention is to roll out this feature only in the U.S.—which is good, at least, because different countries and cultures have highly different beliefs around what is and is not sexually explicit. 

But even in the U.S., no company is going to satisfy everyone when it comes to defining, via an algorithm, what photos are sexually explicit. Are breast cancer awareness images sexually explicit? Facebook has said so in the past. Are shirtless photos of trans men who’ve had top surgery sexually explicit? Instagram isn’t sure. Is a photo documenting sexual or physical violence or abuse sexually explicit? In some cases like these, the answers aren’t clear, and Apple wading into the debate, and tattling on children who may share or receive the images, will likely only produce more frustration, and more confusion.

JOIN THE NATIONWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES


Jillian C. York

Jewel v. NSA: Americans (Still) Deserve Their Day in Court

1 month ago

With little explanation, the Ninth Circuit today affirmed the district court's decision dismissing our landmark challenge to the U.S. government's mass communications surveillance, Jewel v. NSA. Needless to say, we are extremely disappointed. Today's decision renders government mass surveillance programs essentially unreviewable by U.S. courts, since no individual will be able to prove, with the certainty the Ninth Circuit required, that they in particular were spied upon. This hurdle is insurmountable, especially when such programs are shrouded in secrecy and the procedures for confronting that secrecy are disregarded by the courts.

Though we filed our landmark Jewel v. NSA case in 2008, no court has yet ruled on the merits – whether the mass spying on the Internet and phone communications of millions of Americans violates U.S. constitutional and statutory law. Instead, despite the enormous amount of direct and circumstantial evidence showing our clients' communications were swept up by the NSA dragnet surveillance, along with those of millions of other Americans, the trial and appeals courts still found that the plaintiffs lacked legal "standing" to challenge the practices.

As we said in our brief to the Ninth Circuit, this dismissal “hands the keys to the courthouse to the Executive, making it impossible to bring any litigation challenging the legality of such surveillance without the Executive’s permission.  It blinds the courts to what the Executive has admitted: the NSA has engaged in mass surveillance of domestic communications carried by the nation’s leading telecommunications companies, and this surveillance touches the communications and records of millions of innocent Americans.”

This fight has been long and hard. But we remain determined to ensure that the network we all increasingly rely on in our daily lives—for communicating with our families, working, participating in community and political activities, shopping, and browsing—is not also an instrument subjecting all of our actions to NSA mass surveillance. We are evaluating the options for moving the case forward so that Americans can indeed have their day in court.

Related Cases: Jewel v. NSA
David Greene

Speak Out Against Apple’s Mass Surveillance Plans

1 month ago

Mass surveillance is not an acceptable crime-fighting strategy, no matter how well-intentioned the spying. If you’re upset about Apple’s recent announcement that the next version of iOS will install surveillance software in every iPhone, we need you to speak out about it.

SIGN THE PETITION

Tell Apple: Don't Scan Our Phones

Last year, EFF supporters spoke out and stopped the EARN IT bill, a government scheme that could have enabled the scanning of every message online. We need to harness that same energy to let Apple know that its plan to enable the scanning of photos on every iPhone is unacceptable. 

Apple plans to install two scanning systems on all of its phones. One system will scan photos uploaded to iCloud and compare them to a database of child abuse images maintained by various entities, including the National Center for Missing and Exploited Children (NCMEC), a quasi-governmental agency created by Congress to help law enforcement investigate crimes against children. The other system, which operates when parents opt into it, will examine iMessages sent by minors and run them through an algorithm that looks for any type of "sexually explicit" material. If an explicit image is detected, the phone will warn the user and, depending on the user's age, may also notify the user's parent.

These combined systems are a danger to our privacy and security. The iPhone scanning harms privacy for all iCloud photo users, continuously scanning user photos to compare them to a secret government-created database of child abuse images. The parental notification scanner uses on-device machine learning to scan messages, then informs a third party, which breaks the promise of end-to-end encryption.  
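One reason this architecture is so dangerous is that hash matching is content neutral: the code that checks a photo against a list of hashes has no idea what that list represents, so the same machinery serves any database a government or company supplies. The sketch below is a deliberately simplified illustration of that pattern in Python, using an ordinary cryptographic hash rather than Apple's proprietary perceptual NeuralHash and none of its cryptographic blinding; it is not Apple's implementation.

```python
# Greatly simplified illustration of hash-list scanning. This is NOT Apple's
# system: Apple uses a perceptual "NeuralHash" plus cryptographic blinding,
# but the structural point is the same -- the scanner only knows the list.
import hashlib
from typing import Iterable, List, Set

def file_digest(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_photos(photo_paths: Iterable[str], blocked_hashes: Set[str]) -> List[str]:
    """Flag any photo whose digest appears in the supplied hash list.

    Whoever controls `blocked_hashes` controls what gets flagged; nothing in
    this code limits the list to any particular category of content.
    """
    return [path for path in photo_paths if file_digest(path) in blocked_hashes]

# Hypothetical usage:
# matches = scan_photos(["IMG_0001.jpg", "IMG_0002.jpg"], {"<hex digest>"})
```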

Apple’s surveillance plans don’t account for abusive parents, much less authoritarian governments that will push to expand it. Don’t let Apple betray its users.

SIGN THE PETITION

Tell Apple: Don't Scan Our Phones


Joe Mullin

Facebook’s Attack on Research is Everyone's Problem

1 month 1 week ago

Facebook recently banned from its platform the accounts of several New York University (NYU) researchers who run Ad Observer, an accountability project that tracks paid disinformation. This has major implications: not just for transparency, but for user autonomy and the fight for interoperable software.

Ad Observer is a free/open source browser extension used to collect Facebook ads for independent scrutiny. Facebook has long opposed the project, but its latest decision to attack Laura Edelson and her team is a powerful new blow to transparency. Worse, Facebook has spun this bullying as defending user privacy. This "privacywashing" is a dangerous practice that muddies the waters about where real privacy threats come from. On top of that, the company has been gilding such excuses with legally indefensible claims about the enforceability of its terms of service.

Taken as a whole, Facebook’s sordid war on Ad Observer and accountability is a perfect illustration of how the company warps the narrative around user rights. Facebook is framing the conflict as one between transparency and privacy, implying that a user’s choice to share information about their own experience on the platform is an unacceptable security risk. This is disingenuous and wrong. 

This story is a parable about the need for data autonomy, protection, and transparency—and how Competitive Compatibility (AKA “comcom” or “adversarial interoperability”) should play a role in securing them.

What is Ad Observer?

Facebook’s ad-targeting tools are the heart of its business, yet for users on the platform they are shrouded in secrecy. Facebook collects information on users from a vast and growing array of sources, then categorizes each user with hundreds or thousands of tags based on their perceived interests or lifestyle. The company then sells the ability to use these categories to reach users through micro-targeted ads. User categories can be weirdly specific, cover sensitive interests, and be used in discriminatory ways, yet according to a 2019 Pew survey 74% of users weren’t even aware these categories exist.

To unveil how political ads use this system, ProPublica launched its Political Ad Collector project in 2017. Anyone could participate by installing a browser extension called "Ad Observer," which copies (or "scrapes") the ads they see along with the information provided under each ad's "Why am I seeing this ad?" link. The tool then submits this information to the researchers behind the project, which as of last year has been run by NYU Engineering's Cybersecurity for Democracy.

The extension never included any personally identifying information—simply data about how advertisers target users. In aggregate, however, the information shared by thousands of Ad Observer users revealed how advertisers use the platform’s surveillance-based ad targeting tools. 
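To make the "in aggregate" point concrete, here is a minimal sketch in Python of the kind of analysis such crowd-sourced reports enable. The record fields are hypothetical stand-ins for what volunteers' extensions submit (the ad's sponsor and the stated targeting criteria); nothing in them identifies the person who saw the ad.

```python
# Minimal sketch of aggregating crowd-sourced ad reports (hypothetical fields).
# Each report describes an ad and why it was shown -- never who saw it.
from collections import Counter, defaultdict

def targeting_by_advertiser(reports):
    """Count how often each advertiser relies on each targeting criterion."""
    usage = defaultdict(Counter)
    for report in reports:
        for criterion in report["targeting_criteria"]:
            usage[report["advertiser"]][criterion] += 1
    return usage

# Example with made-up reports submitted by different volunteers:
reports = [
    {"advertiser": "Example PAC", "targeting_criteria": ["age 55+", "interest: firearms"]},
    {"advertiser": "Example PAC", "targeting_criteria": ["age 55+", "rural ZIP codes"]},
    {"advertiser": "Acme Energy", "targeting_criteria": ["interest: climate policy"]},
]
for advertiser, criteria in targeting_by_advertiser(reports).items():
    print(advertiser, criteria.most_common(3))
```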

This improved transparency is important to better understand how misinformation spreads online, and Facebook’s own practices for addressing it. While Facebook claims it “do[es]n’t allow misinformation in [its] ads”, it has been hesitant to block false political ads, and it continues to provide tools that enable fringe interests to shape public debate and scam users. For example, two groups were found to be funding the majority of antivaccine ads on the platform in 2019. More recently, the U.S. Surgeon General spoke out on the platform’s role in misinformation during the COVID-19 pandemic—and just this week Facebook stopped a Russian advertising agency from using the platform to spread misinformation about COVID-19 vaccines. Everyone from oil and gas companies to political campaigns has used Facebook to push their own twisted narratives and erode public discourse.

Revealing the secrets behind this surveillance-based ecosystem to public scrutiny is the first step in reclaiming our public discourse. Content moderation at scale is notoriously difficult, and it’s unsurprising that Facebook has failed again and again. But given the right tools, researchers, journalists, and members of the public can monitor ads themselves to shed light on misinformation campaigns. Just in the past year Ad Observer has yielded important insights, including how political campaigns and major corporations buy the right to propagate misinformation on the platform.

Facebook does maintain its own "Ad Library" and research portal. The former has been unreliable, is difficult to use, and offers no information about targeting based on user categories; the latter comes swathed in secrecy and requires researchers to allow Facebook to suppress their findings. Facebook's attacks on the NYU research team speak volumes about the company's real "privacy" priority: defending the secrecy of its paying customers—the shadowy operators pouring millions into paid disinformation campaigns.

This isn’t the first time Facebook has attempted to crush the Ad Observer project. In January 2019, Facebook made critical changes to the way its website works, temporarily preventing Ad Observer and other tools from gathering data about how ads are targeted. Then, on the eve of the hotly contested 2020 U.S. national elections, Facebook sent a dire legal threat to the NYU researchers, demanding the project cease operation and delete all collected data. Facebook took the position that any data collection through “automated means” (like web scraping) is against the site's terms of service. But hidden behind the jargon is the simple truth that “scraping” is no different than a user copying and pasting. Automation here is just a matter of convenience, with no unique or additional information being revealed. Any data collected by a browser plugin is already, rightfully, available to the user of the browser. The only potential issue with plugins “scraping” data is if it happens without a user’s consent, which has never been the case with Ad Observer. 

Another issue EFF emphasized at the time is that Facebook has a history of dubious legal claims that such violations of service terms are violations of the Computer Fraud and Abuse Act (CFAA). That is, if you copy and paste content from any of the company’s services in an automated way (without its blessing), Facebook thinks you are committing a federal crime. If this outrageous interpretation of the law were to hold, it would have a debilitating impact on the efforts of journalists, researchers, archivists, and everyday users. Fortunately, a recent U.S. Supreme Court decision dealt a blow to this interpretation of the CFAA.  

Last time around, Facebook's attack on Ad Observer generated enough public backlash that it seemed Facebook was going to do the sensible thing and back down from its fight with the researchers. Last week, however, it turned out that this was not the case.

Facebook’s Bogus Justifications 

Facebook’s Product Management Director, Mike Clark, published a blog post defending the company’s decision to ban the NYU researchers from the platform. Clark’s message mirrored the rationale offered back in October by then-Advertising Integrity Chair Rob Leathern (who has since left for Google). These company spokespeople have made misleading claims about the privacy risk that Ad Observer posed, and then used these smears to accuse the NYU team of violating Facebook users’ privacy. The only thing that was being “violated” was Facebook’s secrecy, which allowed it to make claims about fighting paid disinformation without subjecting them to public scrutiny. 

Secrecy is not privacy. A secret is something no one else knows. Privacy is when you get to decide who knows information about you. Since Ad Observer users made an informed choice to share the information about the ads Facebook showed them, the project is perfectly compatible with privacy. In fact, the project exemplifies how to do selective data sharing for public interest reasons in a way that respects user consent.

It’s clear that Ad Observer poses no privacy risks to its users. Information about the extension is available in an FAQ and privacy policy, both of which accurately and comprehensively describe how the tool worked. Mozilla thoroughly reviewed the extension’s open source code independently before recommending it to users. That’s something Facebook itself could have done, if it was genuinely worried about what information the plugin was gathering.

In Clark's post defending Facebook's war on accountability, he claimed that the company had no choice but to shut down Ad Observer, thanks to a "consent decree" with the Federal Trade Commission (FTC). This order, imposed after the Cambridge Analytica scandal, requires the company to strictly monitor third-party apps on the platform. This excuse was obviously not true, as a casual reading of the consent decree makes clear. If there was any doubt, it was erased when the FTC's acting director of the Bureau of Consumer Protection, Sam Levine, published an open letter to Mark Zuckerberg calling this invocation of the consent decree "misleading," adding that nothing in the FTC's order bars Facebook from permitting good-faith research. Levine added, "[W]e hope that the company is not invoking privacy – much less the FTC consent order – as a pretext to advance other aims." This shamed Facebook into a humiliating climbdown in which it admitted that the consent decree did not force it to disable the researchers' accounts.

Facebook’s anti-Ad Observer spin relies on both overt and implicit tactics of deception. It’s not just false claims about FTC orders—there’s also subtler work, like publishing a blog post about the affair entitled “Research Cannot Be the Justification for Compromising People’s Privacy,” which invoked the infamous Cambridge Analytica scandal of 2018. This seeks to muddy any distinction between the actions of a sleazy for-profit disinformation outfit and those of a scrappy band of academic transparency researchers.

Let’s be clear: Cambridge Analytica is nothing like Ad Observer. Cambridge Analytica did its dirty work by deceiving users, tricking them into using a “personality quiz” app that siphoned away both their personal data and that of their Facebook “friends,” using a feature provided by the Facebook API. This information was packaged and sold to political campaigns as a devastating, AI-powered, Big Data mind-control ray, and saw extensive use in the 2016 US presidential election. Cambridge Analytica gathered this data and attempted to weaponize it by using Facebook's own developer tools (tools that were already known to leak data), without meaningful user consent and with no public scrutiny. The slimy practices of the Cambridge Analytica firm bear absolutely no resemblance to the efforts of the NYU researchers, who have prioritized consent and transparency in all aspects of their project.

An Innovation-Killing Pretext

Facebook has shown that it can’t be trusted to present the facts about Ad Observer in good faith. The company has conflated Cambridge Analytica’s deceptive tactics with NYU’s public interest research; it’s conflated violating its terms of service with violating federal cybersecurity law; and it’s conflated the privacy of its users with secrecy for its paying advertisers. 

Mark Zuckerberg has claimed he supports an “information fiduciary” relationship with users. This is the idea that companies should be obligated to protect the user information they collect. That would be great, but not all fiduciaries are equal. A sound information fiduciary system would safeguard users’ true control over how they share this information in the first place. For Facebook to be a true information fiduciary, it would have to protect users from unnecessary data collection by first parties like Facebook itself. Instead, Facebook says it has a duty to protect user data from the users themselves.

Even some Facebookers are disappointed with their company’s secrecy and anti-accountability measures. According to a New York Times report, there’s a raging internal debate about transparency after Facebook dismantled the team responsible for its content-tracking tool CrowdTangle. According to interviewees, there’s a sizable internal faction at Facebook that sees the value of sharing how the platform operates (warts and all), and a cadre of senior execs who want to bury this information. (Facebook disputes this.) Combine this with Facebook’s attack on public research, and you get a picture of a company that wants to burnish its reputation by hiding its sins, under the guise of privacy, from the billions of people who rely on it, instead of owning those mistakes and making amends for them.

Facebook’s reputation-laundering spills out into its relationship with app developers. The company routinely uses privacy-washing as a pretext to kill external projects, a fate so common for software that depends on a big platform that it has its own name: “getting Sherlocked.” Last year EFF weighed in on another case where Facebook abused the CFAA to demand that the “Friendly browser” cease operation. Friendly allows users to control the appearance of Facebook while they use it, and doesn’t collect any user data or make use of Facebook’s API. Nevertheless, the company sent dire legal threats to its developers, which EFF countered in a letter that demolished the company’s legal claims. This pattern played out again recently with the open source Instagram app Barinsta, which received a cease and desist notice from the company.

When developers go against Facebook, the company uses all of its leverage as a platform to respond full tilt. Facebook doesn’t just kill your competing project: it deplatforms you, burdens you with legal threats, and bricks any of your hardware that requires a Facebook login.

What to Do

Facebook is facing a vast amount of public backlash (again!). Several U.S. senators sent Zuckerberg a letter asking him to clarify the company’s actions. Over 200 academics signed a letter in solidarity with Laura Edelson and the other banned researchers. One simple remedy is clearly necessary: Facebook must reinstate all of the accounts of the NYU research team. Management should also listen to the workers at Facebook calling for greater transparency, and cease all CFAA legal threats not just against researchers, but against anyone accessing their own information in an automated way.

This Ad Observer saga provides even more evidence that users cannot trust Facebook to act as an impartial and publicly accountable platform on its own. That’s why we need tools to take that choice out of Facebook’s hands. Ad Observer is a prime example of competitive compatibility—grassroots interoperability without permission. To prevent further misuse of the CFAA to shut down interoperability, courts and legislators must make it clear that anti-hacking laws don’t apply to competitive compatibility. Furthermore, platforms as big as Facebook should be obligated to loosen their grip on user information, and open up automated access to basic, useful data that users and competitors need. Legislation like the ACCESS Act would do just that, which is why we need to make sure it delivers.

We need the ability to alter Facebook to suit our needs, even when Facebook’s management and shareholders try to stand in the way.

Rory Mir

Party Like It’s 1979: The OG Antitrust Is Back, Baby!

1 month 1 week ago

President Biden’s July 9 Executive Order on Promoting Competition in the American Economy is a highly technical, 72-part, fine-grained memo on how to address the ways market concentration harms our lives as workers, citizens, consumers, and beyond. 

To a casual reader, this may seem like a dry bit of industrial policy, but woven into the new order is a revolutionary idea that has rocked the antitrust world to its very foundations.

The Paradox of Antitrust

US antitrust law has three pillars: the Sherman Act (1890), the Clayton Act (1914), and the FTC Act (1914). Beyond their legal text, these laws have a rich context, including the transcripts of the debates that the bills’ sponsors participated in, explaining why the bills were written. They arose as a response to the industrial conglomerates of the Gilded Age, and their “robber baron” leaders, whose control over huge segments of the economy gave them a frightening amount of power.

Despite this clarity of intent, the True Purpose of Antitrust has been hotly contested in US history. For much of that history, including the seminal breakup of John D. Rockefeller’s Standard Oil in 1911, the ruling antitrust theory was “harmful dominance.” That’s the idea that companies that dominate an industry are potentially dangerous merely because they are dominant. With dominance comes the ability to impose corporate will on workers, suppliers, other industries, people who live near factories, even politicians and regulators.

The election of Ronald Reagan in 1980 saw the rise of a new antitrust theory, based on “consumer welfare.” Consumer welfare advocates argue that monopolies can be efficient, able to deliver better products at lower prices to consumers, and therefore the government does us all a disservice when it indiscriminately takes on monopolies. 

Consumer welfare’s standard-bearer was Judge Robert Bork, who served as Solicitor General in the Nixon administration. Bork was part of the conservative Chicago School of economics, and wrote a seminal work called “The Antitrust Paradox.”

The Antitrust Paradox went beyond arguing that consumer welfare was a better way to do antitrust than harmful dominance. In his book, Bork offers a kind of secret history of American antitrust, arguing that consumer welfare had always been the intention of America’s antitrust laws, and that we’d all been misled by the text of these laws, the debates surrounding their passage, and other obvious ways of interpreting Congress’s intent. 

Bork argued the true goal of antitrust was protecting us as consumers—not as citizens, or workers, or human beings. As consumers, we want better goods and lower prices. So long as a company used its market power to make better products at lower prices, Bork’s theories insisted that the government should butt out.

This is the theory that prevailed for the ensuing 40 years. It spread from economic circles to the government to the judiciary. It got a tailwind thanks to a well-funded campaign that included a hugely successful series of summer seminars attended by 40 percent of federal judges, whose rulings were measurably impacted by the program.

Morning in America

Everyone likes lower prices and better products, but all of us also have interests beyond narrow consumer issues. We live our days as parents, spouses, friends—not just as shoppers. We are workers, or small business owners. We care about our environment and about justice and equity. We want a say in how our world works.

Competition matters, but not just because it can make prices lower or products better. Competition matters because it lets us exercise self-determination. Market concentration means that choices about our culture, our built environment, our workplaces, and our climate are gathered into ever-fewer hands. Businesses with billions of users and dollars get to make unilateral decisions about our lives. The larger a business looms in our life, the more ways it can hurt us.

The idea that our governments need to regulate companies beyond the narrow confines of “consumer welfare” never died, and now, 40 years on, it’s coming roaring back.

The FTC’s new chair, Lina Khan, burst upon the antitrust scene in 2017, when, as a Yale Law student, she published “Amazon’s Antitrust Paradox,” a devastating rebuke to Bork’s Antitrust Paradox, demonstrating how a focus on consumer welfare fails to deliver, even on its own terms. Khan is now one of the nation’s leading antitrust enforcers, along with fellow “consumer welfare” skeptics like Jonathan Kanter (nominated to lead the Department of Justice Antitrust Division) and Tim Wu (the White House’s special assistant to the president for technology and competition policy).

Bombshells in the Fine Print

The Biden antitrust order is full of fine detail; it’s clear that the president’s advisors dug deep into competition issues with public interest groups across a wide variety of subjects. We love to nerd out on esoteric points of competition law as much as the next person, and we like a lot of what this memo says about tech and competition, but even more exciting is the big picture stuff.

When the memo charges the FTC with policing corporate concentration to prevent abuses to “consumer autonomy and consumer privacy,” that’s not just a reassurance that this administration is paying attention to some of our top priorities. It’s a bombshell, because it links antitrust to concerns beyond ensuring that prices stay low. 

Decades of consumer welfarism turned the electronic frontier into a monoculture dominated by “a group of five websites, each consisting of screenshots of text from the other four.” This isn’t the internet we signed up for. That’s finally changing.

We get it, this is esoteric, technical stuff. But if there’s one thing we’ve learned in 30 years of fighting for a better digital future, it’s that all the important stuff starts out as dull, technical esoterica. From DRM to digital privacy, bossware to broadband, our issues too often rise to the level of broad concern only once they’ve grown so harmful that everyone has to pay attention to them.

We are living through a profound shift in the framework that determines what kinds of companies are allowed to exist and what they’re allowed to do. It’s a shift for the better. We know nothing is assured. The future won’t fix itself. But this is an opportunity, and we’re delighted to seize it.

Cory Doctorow

A New Bill Would Protect Indie Video Game Developers and App Developers

1 month 1 week ago

Congress’s recent efforts on antitrust and competition in the tech space have been focused on today’s biggest tech companies, not on setting policy for the sector as a whole. Although Google, Apple, Facebook, and Amazon (and perhaps Microsoft) are the largest companies and therefore the ones generating the bulk of problems, they are not the only tech companies who may be abusing their dominance in a market. Focusing on only those companies threatens to make any gains in competition policy temporary, as happened with the telecom industry. But new legislation introduced by Senators Blumenthal, Blackburn, and Klobuchar takes a broader view, proposing industry-wide changes to app markets that will improve the landscape for independent developers and their customers.

The Open App Markets Act sets out a platform competition policy that embodies a few basic ideas: the owner of an app store should not be allowed to control the prices that app developers can set on other platforms, or to prevent independent developers from communicating with their customers about discounts and other incentives. App store owners should not be able to require developers to use the store owner’s own in-app payment systems. And app store owners who also control the operating system they run on won't be allowed to restrict customers from using alternative app stores.

Importantly, the bill would cover app stores with 50 million or more US users, which includes not just the Apple and Google app stores but also the largest online game stores.

The high-profile case of Epic Games v. Apple has drawn attention to practices such as Apple’s 30% commission on app sales and in-app purchases and its gag rule against advertising lower prices outside of the App Store, but Apple is not alone here. Valve, the owner of the Steam platform for PC gaming, has been accused of similar practices in an ongoing antitrust lawsuit brought by Wolfire Games and a group of Steam users.

Valve Leverages Its Dominance in PC Gaming Against Users and Independent Developers

The video game market has a vibrant independent developer space, but challenges abound for these smaller developers. To succeed as a game developer, you have to make something new and interesting to gamers. An innovative approach to a classic genre, or a hybrid of game types, can yield great new games. As a result, there is much less pressure toward mergers and acquisitions in this market than in other areas of the technology sector, because gamers move on to the next new game rather than remaining tethered to older ones. Start selling bad games, and customers move on to a new company.

But innovation in games depends on some core factors. Developers need a way to reach as wide a base of customers as possible, and they need the profits from the products they produce in order to keep producing more. When a platform has effectively captured the audience, it can control a developer’s profits in ways that hinder future development while keeping its commission above what a competitive market would bear. That is basically the issue with Valve’s Steam today: Valve takes 30% of all revenues generated from sales on its platform while also serving a supermajority of PC gaming customers.
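To make the effect of that commission concrete, here is a back-of-the-envelope sketch with purely illustrative numbers (not Valve’s, Wolfire’s, or any developer’s actual figures):

```typescript
// Back-of-the-envelope comparison of what a developer keeps per copy under a
// 30% store commission versus selling directly at a lower price.
// All numbers are illustrative only.

function developerRevenuePerCopy(listPrice: number, commissionRate: number): number {
  return listPrice * (1 - commissionRate);
}

const onSteam = developerRevenuePerCopy(29.99, 0.3);  // ~$21.00 to the developer
const direct = developerRevenuePerCopy(24.99, 0.05);  // ~$23.74 after ~5% payment fees

console.log(`Per copy on the store:  $${onSteam.toFixed(2)}`);
console.log(`Per copy sold directly: $${direct.toFixed(2)}`);
// Even at a noticeably lower sticker price, the developer can keep more per
// copy off-platform -- exactly the kind of price competition that platform
// rules against off-platform discounts are designed to prevent.
```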

When Wolfire attempted to sell its own games at a lower price off of Steam’s platform, Valve told the developer it would lose access to the Steam market, effectively telling independent developers on Steam that they are not allowed to offer lower prices elsewhere. But losing access to the core audience on Steam would effectively mean losing the business, and thus Valve is able to use its market power to dictate how games are sold, and at what price. This is a classic monopoly problem, and one the Open App Markets Act addresses.

The Open App Markets Act Prohibits Platforms from Leveraging Their Dominance Against Independent Developers

A rarity in DC, the Open App Markets Act is only 5 pages long and sets forth easy-to-understand rules designed to promote independent developers. The legislation prohibits covered app stores from controlling independent developers’ ability to communicate with their audiences on the platform about business offers such as a discount or a new means of purchasing a game. It does not prevent a platform from charging a commission for sales or dictate what rates it can charge, leaving that to the competitive process. So Valve’s Steam can enjoy the revenues it collects from developers who use its platform, but it can’t control their ability to sell games through other channels on whatever terms they want.

This matters a great deal if an independent developer makes it big. Think of games that skyrocket to the top, like Valheim’s meteoric rise to 5 million customers while still in early access, or PlayerUnknown’s Battlegrounds (aka PUBG), which sold 13 million copies, also during early access. The early access phase is critical for developers who need a revenue infusion to refine and improve their games before full release. Steam gets its payday from those early sales, and the developers benefit from the platform’s audience size. But once a developer has built its own customer base and can stand on its own, Valve should have no power to control its pricing. Under the Act, should a developer decide to offer their products at a much lower price than found on Steam, Valve would be prohibited from stopping them, and prices for games would come down without depending on the next Steam sale.

The Act enforces this new competition policy by empowering the Federal Trade Commission and state attorneys general to bring enforcement lawsuits, and most importantly, also giving independent developers a right to sue a platform for injuries caused by a violation of the new law. The combination of these enforcement mechanisms means a platform would be on notice to avoid conduct that interferes with the business decisions of the developers. It could go a long way toward solving the problems we’re seeing today in both the Apple and Valve stores and many others.

This broadly applicable bill would benefit independent developers because it changes the behavior of every platform with an audience large enough to attract them. More importantly, with this competition policy in place, platforms would have to compete on the audiences they can offer developers, rather than controlling those developers’ business decisions in order to preserve commission revenues.

Ernesto Falcon

Why Data-Sharing Mandates Are the Wrong Way To Regulate Tech

1 month 1 week ago

The tech companies behind the so-called “sharing economy” have drawn the ire of brick-and-mortar businesses and local governments across the country.

For example, take-out apps such as GrubHub and UberEats have grown into a hundred-billion-dollar industry over the past decade, and received a further boost as many sit-down restaurants converted to only take-out during the pandemic. Small businesses are upset, in part, that these companies are collecting and monetizing data about their customers.

Likewise, ride-sharing services have decimated the highly-regulated taxi industry, replacing it with a larger, more nebulous fleet of personal vehicles carrying passengers around major cities. This makes them harder to regulate and plan around than traditional taxis. Alarmed municipal transportation agencies feel that they do not have the tools they need to monitor and manage ride-sharing.

A common thread runs through these emerging industries: massive volumes of sensitive personal data. Yelp, Grubhub, Uber, Lyft, and many more new companies have inserted themselves in between customers and older, smaller businesses, or have replaced those businesses entirely. The new generation of tech companies collect more data about their users than traditional businesses ever did. A restaurant might know its regular customers, or keep track of its best-selling dishes, but Grubhub can track each user’s searches, devices, and meals at restaurants across the city. Likewise, while traditional taxi services may have logged trip times, origins, and destinations, Uber and Lyft can link each trip to a user’s real-world identity and track supply and demand in real time.

This data is attractive for several reasons. It can be monetized through targeted ads or sold directly to data brokers, and it gives larger companies a competitive advantage over their smaller, less information-hungry peers. It allows tech companies to observe market trends, informing decisions about pricing, worker pay, and whom to buy out next. Sharing-economy corporations have every incentive to collect as much data as possible, and few legal restrictions on doing so. As a result, our interactions with everyday services like restaurants are tracked more closely than ever before.

Legislators want to force tech companies to share data

Several bills around the country, including in California and New York City, propose a “solution”: force the tech companies to share some of the data they collect. But these bills are misguided. While they might give small businesses short-term boons, they won’t address the larger systems that have led to corporate concentration in the tech sector. They will further encourage the commoditization of our data as a tool for businesses to battle each other, with user privacy caught in the crossfire.

Normalizing new, indiscriminate data sharing is a problem. Instead, regulators should be thinking of ways to protect consumers by limiting data collection, retention, use, and sharing. Creating new mandates to share data simply puts it in the hands of more businesses. This opens up more ways for government seizure of that data and more targets for hackers. 

We’ve sung the praises of interoperability policy in the past, so how is this different? After all, if Facebook should have to share data with its competitors under something like the ACCESS Act, why shouldn’t UberEats have to share data with restaurants? The difference is who’s in control. Good interoperability policy should put the user front and center: data sharing must only happen with a user’s opt-in consent, and only for purposes that directly benefit the user.

Forcing DoorDash to share information with restaurants, or Uber to share data with cities, doesn’t serve users in any way. And these bills don’t require a user’s opt-in consent for the processing of their data. Instead, these policies would make it so that sharing data with one company means that data will automatically end up in the hands of several downstream parties. Since the United States lacks basic consumer privacy laws, recipients of this data will be free to sell it, otherwise monetize it, or share it with law enforcement or immigration officials. This further erodes what little agency users currently have. 

Regulation should aim to protect user rights

The collection and use of personal data by tech companies is a real problem. And big companies wield their data troves as weapons to beat back competitors. But we should address those problems directly: first, with strong privacy laws governing how businesses process our data; and second, with better antitrust enforcement that puts a stop to harmful conglomeration and anticompetitive behavior.

It’s also okay for regulators to monitor and manage ride-sharing and other services that impact the public by requiring reasonable amounts of aggregated and deidentified data. Uber and Lyft have a well-documented history of deliberately misleading local authorities in order to skirt laws. However, any data-sharing requirements must be limited in scope and must minimize the risks to individual users and their data. For example, rules should carefully consider how much information is actually necessary to achieve specific governmental goals. Often, such information need not be highly granular. And whether the government or a private company holds the information, reidentification and misuse are always real concerns, whether by city transportation agencies, law enforcement, ICE, or any other third party that purchases or steals the data.

Despite what aspiring government contractors may say, agencies should not collect huge amounts of individualized data up front, then figure out what to do with it later. The way to fix bad actors in tech is not to increase non-consensual data sharing—nor to have governments mimic bad actors in tech.

Bennett Cyphers

It’s Time for Google to Resist Geofence Warrants and to Stand Up for Its Affected Users

1 month 1 week ago

EFF would like to thank former intern Haley Amster for drafting this post, and former legal fellow Nathan Sobel for his assistance in editing it.

The Fourth Amendment requires authorities to target search warrants at particular places or things—like a home, a bank deposit box, or a cell phone—and only when there is reason to believe that evidence of a crime will be found there. The Constitution’s drafters put in place these essential limits on government power after suffering under British searches called “general warrants” that gave authorities unlimited discretion to search nearly everyone and everything for evidence of a crime.

Yet today, Google is facilitating the digital equivalent of those colonial-era general warrants. Through the use of geofence warrants (also known as reverse location warrants), federal and state law enforcement officers are routinely requesting that Google search users’ accounts to determine who was in a certain geographic area at a particular time—and then to track individuals outside of that initially specified area and time period.

These warrants are anathema to the Fourth Amendment’s core guarantee largely because, by design, they sweep up people wholly unconnected to the crime under investigation.
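To make the mechanics concrete, here is a simplified sketch of the kind of reverse-location query a geofence warrant describes. The record format is hypothetical, not Google’s actual internal schema; what matters is that the query starts from a place and a time window rather than from a suspect, so it returns every device that happened to pass through.

```typescript
// Simplified illustration of a reverse-location ("geofence") query:
// instead of starting from a suspect, it starts from a box on a map and a
// time window and returns every device that was inside it.
// The record format is hypothetical, not Google's actual schema.

interface LocationRecord {
  deviceId: string; // pseudonymous ID that can later be "unmasked"
  lat: number;
  lng: number;
  timestamp: Date;
}

interface Geofence {
  minLat: number;
  maxLat: number;
  minLng: number;
  maxLng: number;
  start: Date;
  end: Date;
}

function devicesInGeofence(records: LocationRecord[], fence: Geofence): Set<string> {
  const hits = new Set<string>();
  for (const r of records) {
    const inArea =
      r.lat >= fence.minLat && r.lat <= fence.maxLat &&
      r.lng >= fence.minLng && r.lng <= fence.maxLng;
    const inWindow =
      r.timestamp.getTime() >= fence.start.getTime() &&
      r.timestamp.getTime() <= fence.end.getTime();
    if (inArea && inWindow) {
      // Anyone who walked, drove, or cycled through the box is swept in,
      // whether or not they have any connection to the crime.
      hits.add(r.deviceId);
    }
  }
  return hits;
}
```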

For example, in 2020 Florida police obtained a geofence warrant in a burglary investigation that led them to suspect a man who frequently rode his bicycle in the area. Google collected the man’s location history when he used an app on his smartphone to track his rides, a scenario that ultimately led police to suspect him of the crime even though he was innocent.

Google is the linchpin in this unconstitutional scheme. Authorities send Google geofence warrants precisely because Google’s devices, operating system, apps, and other products allow it to collect data from millions of users and to catalog these users’ locations, movements, associations, and other private details of their lives.

Although Google has sometimes pushed back in court on the breadth of some of these warrants, it has largely acquiesced to law enforcement demands—and the number of geofence warrants law enforcement sends to the company has dramatically increased in recent years. This stands in contrast to documented instances of other companies resisting law enforcement requests for user data on Fourth Amendment grounds.

It’s past time for Google to stand up for its users’ privacy and to resist these unlawful warrants. A growing coalition of civil rights and other organizations, led by the Surveillance Technology Oversight Project, has previously called on Google to do so. We join that coalition’s call for change and further demand that Google:

  • Resist complying with geofence warrants
  • Be much more transparent about the geofence warrants it receives
  • Provide all affected users with notice, and
  • Give users meaningful choice and control over their private data

As explained below, these are the minimum steps Google must take to show that it is committed to its users’ privacy and the Fourth Amendment’s protections against general warrants.

First: Refuse to Comply with Geofence Warrants

EFF calls on Google to stop complying with the geofence warrants it receives. As it stands now, Google appears to have set up an internal system that streamlines, systematizes, and encourages law enforcement’s use of geofence warrants. Google’s practice of complying with geofence warrants despite their unconstitutionality is inconsistent with its stated promise to protect the privacy of its users by “keeping your information safe, treating it responsibly, and putting you in control.” As recently as October, Google’s parent company’s CEO, Sundar Pichai, said that “[p]rivacy is one of the most important areas we invest in as a company,” and in the past, Google has even gone to court to protect its users’ sensitive data from overreaching government legal process. However, Google’s compliance with geofence warrants is incongruent with these platitudes and the company’s past actions.

To live up to its promises, Google should commit to either refusing to comply with these unlawful warrants or to challenging them in court. By refusing to comply, Google would put the burden on law enforcement to demonstrate the legality of its warrant in court. Other companies, and even Google itself, have done this in the past. Google should not defer to law enforcement’s contention that geofence warrants are constitutional, especially given law enforcement’s well-documented history of trying novel surveillance and legal theories that courts later rule to be unconstitutional. And to the extent Google has refused to comply with geofence warrants, it should say so publicly.

Google’s ongoing cooperation is all the more unacceptable given that other companies that collect similar location data from their users, including Microsoft and Garmin, have publicly stated that they would not comply with geofence warrants.

Second: Be Meaningfully Transparent

Even if Google were to stop complying with geofence warrants today, it still must be much more transparent about geofence warrants it has received in the past. Google must break out information and provide further details about geofence warrants in its biannual Transparency Reports.

Google’s Transparency Reports currently document, among other things, the types and volume of law enforcement requests for user data the company receives, but they do not, as of now, break out information about geofence warrants or provide further details about them. With no detailed reporting from Google about the geofence warrants it has received, the public is left to learn about them via leaks to reporters or by combing through court filings.

Here are a few specific ways Google can be more transparent: 

Immediate Transparency Reforms


Google should disclose the following information about all geofence warrants it has received over the last five years and commit to continue doing so moving forward:

  • The number of geofence warrants Google has received to date, broken out in 6-month increments.
  • The percentage of requests with which it has complied.
  • How many device IDs Google has disclosed per warrant.
  • The duration and geographic area that each geofence warrant covered.

Google should also resist nondisclosure orders and, if they are imposed, litigate to ensure that the government has made the appropriate showing required by law. If Google is subject to such an order, or the related docket is sealed (prohibiting the company from disclosing the fact that it has received some geofence warrants or from providing other details), Google should move to end those orders and to unseal those dockets so it can make details about them public as early as allowed by law.

Long-term Transparency Reforms


Google should also support and seek to provide basic details about the court cases and docket numbers for the orders authorizing each geofence warrant, as well as docket numbers for any related criminal prosecutions Google is aware of as a result of those warrants. At minimum, Google should disclose details on the agencies seeking geofence warrants, broken down by federal agency, state-level agency, and local law enforcement.

Third: Give All Affected Users Notice

Google must start telling its users when their information is caught up in a geofence warrant—even if that information is de-identified. This notice to affected users should state explicitly what information Google produced, in what format, which agency requested it, which court authorized the warrant, and whether Google provided identifying information. Notice to users here is critical: if people aren’t aware of how they are being affected by these warrants, there can’t be meaningful public debate about them.

To the extent the law requires Google to delay notice or not disclose the existence of the warrant, Google should challenge such restrictions so as to only comply with valid ones, and it should provide users with notice as soon as possible.

It does not appear that Google gives notice to every user whose data is requested by law enforcement. Some affected users have said that Google notified them that law enforcement accessed their accounts via a geofence warrant. But in some of the cases EFF has followed, it appears that Google has not always notified the affected users whom it identified in response to these warrants, with no public explanation from Google. Google’s policies state that it gives notice to users before disclosing information, but more clarity is warranted here. Google should publicly state whether this policy applies to all users whose information is subject to geofence warrants, or only to those whom it identifies to law enforcement.

Fourth: Minimize Data Collection and Give Users Meaningful Choice

Many people do not know, much less understand, how and when Google collects and stores location data. Google must do a better job of explaining its policies and practices to users; it must not process user data absent opt-in consent; and it must minimize the amount of data it collects, delete retained data that users no longer need, and give users the ability to easily delete their data.

Well before law enforcement ever comes calling, Google must first ensure it does not collect its users’ location data before obtaining meaningful consent from them. This consent should establish a fair way for users to opt into data collection, as click-through agreements which apply to dozens of services, data types, or uses at once are insufficient. As one judge in a case involving Facebook put it, the logic that merely clicking “I agree” indicates true consent requires everyone “to pretend” that users read every word of these policies “before clicking their acceptance, even though we all know that virtually none of them did.”

Google should also explain exactly what location data it collects from users, when that collection occurs, what purpose it is used for, and how long Google retains that data. This should be clear and understandable, not buried in dense privacy policies or terms of service.

Google should also only be collecting, retaining, and using its customers’ location data for a specific purpose, such as to provide directions on Google Maps or to measure road traffic congestion. Data must not be collected or used for a different purpose, such as for targeted advertising, unless users separately opt in to such use. Beyond notice and consent, Google must minimize its processing of user data, that is, only process user data as reasonably necessary to give users what they asked for. For example, user data should be deleted when it is no longer needed for the specific purpose for which it was initially collected, unless the user specifically requests that the data be saved.
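As a rough illustration of what purpose limitation and retention could look like in practice, here is a minimal sketch; the purposes and retention windows are hypothetical examples, not Google’s actual policies.

```typescript
// Minimal sketch of purpose limitation and retention: every location record
// carries the purpose it was collected for, and records are deleted once that
// purpose's retention window has passed -- unless the user explicitly asked to
// keep them. Purposes and windows here are hypothetical.

type Purpose = "navigation" | "traffic_measurement" | "user_requested_history";

const RETENTION_DAYS: Record<Purpose, number> = {
  navigation: 1,                    // directions only need today's data
  traffic_measurement: 7,           // aggregate stats shouldn't need more
  user_requested_history: Infinity, // kept only because the user asked
};

interface LocationPoint {
  purpose: Purpose;
  collectedAt: Date;
}

function pruneExpired(points: LocationPoint[], now: Date = new Date()): LocationPoint[] {
  return points.filter((p) => {
    const maxAgeDays = RETENTION_DAYS[p.purpose];
    const ageDays = (now.getTime() - p.collectedAt.getTime()) / (1000 * 60 * 60 * 24);
    return ageDays <= maxAgeDays;
  });
}
```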

Although Google allows users to manually delete their location data and to set automated deletion schedules, Google should confirm that these tools are not illusory. Recent enforcement actions by state attorneys general allege that users cannot fully delete their data, much less fully opt out of having their location data collected at all.

*  *  *

Google holds a tremendous amount of power over law enforcement’s ability to use geofence warrants. Instead of keeping quiet about them and waiting for defendants in criminal cases to challenge them in court, Google needs to stand up for its users when it comes to revealing their sensitive data to law enforcement.

Aaron Mackey

If You Build It, They Will Come: Apple Has Opened the Backdoor to Increased Surveillance and Censorship Around the World

1 month 1 week ago

Apple’s new program for scanning images sent on iMessage steps back from the company’s prior support for the privacy and security of encrypted messages. The program, initially limited to the United States, narrows the understanding of end-to-end encryption to allow for client-side scanning. While Apple aims at the scourge of child exploitation and abuse, the company has created an infrastructure that is all too easy to redirect toward greater surveillance and censorship. The program will undermine Apple’s longstanding defense that it cannot comply with broader government demands.

For years, countries around the world have asked for access to and control over encrypted messages, asking technology companies to “nerd harder” when faced with the pushback that access to messages in the clear was incompatible with strong encryption. The Apple child safety message scanning program is currently being rolled out only in the United States. 

JOIN THE NATIONWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES

The United States has not been shy about seeking access to encrypted communications, pressuring companies to make it easier to obtain data with warrants and to voluntarily turn over data. However, the U.S. would face serious constitutional issues if it wanted to pass a law that required warrantless screening and reporting of content. Even if conducted by a private party, a search ordered by the government is subject to the Fourth Amendment’s protections. Any “warrant” issued for suspicionless mass surveillance would be an unconstitutional general warrant. As the Ninth Circuit Court of Appeals has explained, "Search warrants . . . are fundamentally offensive to the underlying principles of the Fourth Amendment when they are so bountiful and expansive in their language that they constitute a virtual, all-encompassing dragnet[.]" With this new program, Apple has failed to hold a strong policy line against U.S. laws undermining encryption, but there remains a constitutional backstop to some of the worst excesses. That U.S. constitutional protection, however, may not be replicated in every country.

Apple is a global company, with phones and computers in use all over the world, and with that comes pressure from many governments. Apple has promised it will refuse government “demands to build and deploy government-mandated changes that degrade the privacy of users.” It is good that Apple says it will not, but this is not nearly as strong a protection as saying it cannot, which could not honestly be said about any system of this type. Moreover, if it implements this change, Apple will need to not just fight for privacy, but win in legislatures and courts around the world. To keep its promise, Apple will have to resist the pressure to expand the iMessage scanning program to new countries, to scan for new types of content, and to report beyond parent-child relationships.

It is no surprise that authoritarian countries demand companies provide access and control to encrypted messages, often the last best hope for dissidents to organize and communicate. For example, Citizen Lab’s research shows that—right now—China’s unencrypted WeChat service already surveils images and files shared by users, and uses them to train censorship algorithms. “When a message is sent from one WeChat user to another, it passes through a server managed by Tencent (WeChat’s parent company) that detects if the message includes blacklisted keywords before a message is sent to the recipient.” As the Stanford Internet Observatory’s Riana Pfefferkorn explains, this type of technology is a roadmap showing “how a client-side scanning system originally built only for CSAM [Child Sexual Abuse Material] could and would be suborned for censorship and political persecution.” As Apple has found, China, with the world’s biggest market, can be hard to refuse. Other countries are not shy about applying extreme pressure on companies, including arresting local employees of the tech companies. 

But potent pressure to access encrypted data often comes from democratic countries that strive to uphold the rule of law, at least at first. If companies fail to hold the line in such countries, the changes made to undermine encryption can easily be replicated by countries with weaker democratic institutions and poor human rights records—often using similar legal language, but with different ideas about public order and state security, as well as what constitutes impermissible content, from obscenity to indecency to political speech. This is very dangerous. Those countries will nevertheless contend that they are no different. They are sovereign nations, and will see their public-order needs as equally urgent. They will contend that if Apple is providing access to any nation-state under that state’s local laws, Apple must also provide access to other countries, at least under the same terms.

'Five Eyes' Countries Will Seek to Scan Messages 

For example, the Five Eyes—an alliance of the intelligence services of Canada, New Zealand, Australia, the United Kingdom, and the United States—warned in 2018 that they would “pursue technological, enforcement, legislative or other measures to achieve lawful access solutions” if the companies didn’t voluntarily provide access to encrypted messages. More recently, the Five Eyes have pivoted from terrorism to the prevention of CSAM as the justification, but the demand for unencrypted access remains the same, and the Five Eyes are unlikely to be satisfied without changes to assist terrorism and criminal investigations too.

The United Kingdom’s Investigatory Powers Act (IPA), following through on the Five Eyes’ threat, allows the Secretary of State to issue “technical capability notices,” which oblige telecommunications operators to maintain the technical ability of “providing assistance in giving effect to an interception warrant, equipment interference warrant, or a warrant or authorisation for obtaining communications data.” As the UK Parliament considered the IPA, we warned that a “company could be compelled to distribute an update in order to facilitate the execution of an equipment interference warrant, and ordered to refrain from notifying their customers.”

Under the IPA, the Secretary of State must consider “the technical feasibility of complying with the notice.” But the infrastructure needed to roll out Apple’s proposed changes makes it harder to argue that additional surveillance is not technically feasible. With Apple’s new program, we worry that the UK might try to compel an update that would expand the current functionality of the iMessage scanning program, with different algorithmic targets and wider reporting. As the iMessage “communication safety” feature is entirely Apple’s own invention, Apple can all too easily change its own criteria for what will be flagged for reporting. Apple may receive an order to adopt its hash matching program for iCloud Photos into the iMessage pre-screening. Likewise, the criteria for which accounts will apply this scanning, and where positive hits get reported, are wholly within Apple’s control.

Australia followed suit with its Assistance and Access Act, which likewise allows for requirements to provide technical assistance and capabilities, with the disturbing potential to undermine encryption. While the Act contains some safeguards, a coalition of civil society organizations, tech companies, and trade associations, including EFF and—wait for it—Apple, explained that they were insufficient. 

Indeed, in Apple’s own submission to the Australian government, Apple warned “the government may seek to compel providers to install or test software or equipment, facilitate access to customer equipment, turn over source code, remove forms of electronic protection, modify characteristics of a service, or substitute a service, among other things.” If only Apple would remember that these very techniques could also be used in an attempt to mandate or change the scope of Apple’s scanning program. 

While Canada has yet to adopt an explicit requirement for plain text access, the Canadian government is actively pursuing filtering obligations for various online platforms, which raise the spectre of a more aggressive set of obligations targeting private messaging applications. 

Censorship Regimes Are In Place And Ready to Go

For the Five Eyes, the ask is mostly for surveillance capabilities, but India and Indonesia are already down the slippery slope to content censorship. The Indian government’s new Intermediary Guidelines and Digital Media Ethics Code (“2021 Rules”), which came into effect earlier this year, directly imposes dangerous requirements for platforms to pre-screen content. Rule 4(4) compels content filtering, requiring that providers “endeavor to deploy technology-based measures,” including automated tools or other mechanisms, to “proactively identify information” that has been forbidden under the Rules.

India’s defense of the 2021 Rules, written in response to criticism from three UN Special Rapporteurs, highlighted the very real dangers to children while skipping over the much broader mandate of the scanning and censorship rules. The 2021 Rules impose proactive and automatic enforcement of their content takedown provisions, requiring the proactive blocking of material previously held to be forbidden under Indian law. Those laws broadly include provisions protecting “the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality.” This is no hypothetical slippery slope—it’s not hard to see how this language could be dangerous to freedom of expression and political dissent. Indeed, India’s track record under its Unlawful Activities Prevention Act, which has reportedly been used to arrest academics, writers, and poets for leading rallies and posting political messages on social media, highlights this danger.

It would be no surprise if India claimed that Apple’s scanning program was a great start towards compliance, with a few more tweaks needed to address the 2021 Rules’ wider mandate. Apple has promised to protest any expansion, and could argue in court, as WhatsApp and others have, that the 2021 Rules should be struck down, or that Apple does not fit the definition of a social media intermediary regulated under these Rules. But the Indian rules illustrate both the governmental desire and the legal backing for pre-screening encrypted content, and Apple’s changes make it all the easier to slip into this dystopia.

This is, unfortunately, an ever-growing trend. Indonesia, too, has adopted Ministerial Regulation MR5 to require service providers (including “instant messaging” providers) to “ensure” that their system “does not contain any prohibited [information]; and [...] does not facilitate the dissemination of prohibited [information]”. MR5 defines prohibited information as anything that violates any provision of Indonesia’s laws and regulations, or creates “community anxiety” or “disturbance in public order.” MR5 also imposes disproportionate sanctions, including a general blocking of systems for those who fail to ensure there is no prohibited content and information in their systems. Indonesia may also see the iMessage scanning functionality as a tool for compliance with Regulation MR5, and pressure Apple to adopt a broader and more invasive version in their country.

Pressure Will Grow

The pressure to expand Apple’s program to more countries and more types of content will only continue. In the fall of 2020, a series of leaked European Commission documents foreshadowed an anti-encryption law being proposed to the European Parliament, perhaps as soon as this year. Fortunately, there is a backstop in the EU. Under Article 15 of the e-Commerce Directive (2000/31/EC), EU Member States are not allowed to impose a general obligation on providers to monitor the information that users transmit or store. Indeed, the Court of Justice of the European Union (CJEU) has stated explicitly that intermediaries may not be obliged to monitor their services in a general manner in order to detect and prevent illegal activity by their users; such an obligation would be incompatible with fairness and proportionality. Despite this, in a leaked internal document published by Politico, the European Commission committed itself to an action plan for mandatory detection of CSAM by relevant online service providers (expected in December 2021) that points to client-side scanning as the solution, one that could potentially apply to secure private messaging apps, seizing upon the notion that it preserves the protection of end-to-end encryption.

For governmental policymakers who have been urging companies to nerd harder, wordsmithing harder is just as good. Access to unencrypted communications is the end goal, and if that can be achieved in a way that arguably leaves a more narrowly defined form of end-to-end encryption in place, all the better for them.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, the adoption of the iCloud Photos hash matching into iMessage, or a tweak of the configuration flags to scan not just children’s, but anyone’s accounts. Apple has a fully built system just waiting for external pressure to make the necessary changes. China and doubtless other countries already have hashes and content classifiers to identify messages impermissible under their laws, even if those messages are protected by international human rights law. The abuse cases are easy to imagine: governments that outlaw homosexuality might require a classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand a classifier able to spot popular satirical images or protest flyers.
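To illustrate why such a system is only as narrow as its configuration, here is a deliberately generic sketch of a client-side matching pipeline. This is not Apple’s actual design; the function names, flags, and hash stand-in below are hypothetical. It only shows that once a scanning loop exists on the device, the hash list, the classifier, the account scope, and the reporting destination are all just parameters someone else can demand be changed.

```typescript
// Generic illustration (not Apple's actual system): a client-side scanner is
// a matching loop driven by configuration the vendor controls. Widen the hash
// list, swap the classifier, or flip the account-scope flag, and the same
// machinery scans for different content on different people.

interface ScanConfig {
  blockedHashes: Set<string>; // supplied and updated by the vendor
  scanAllAccounts: boolean;   // e.g. children-only vs. everyone
  reportEndpoint: string;     // where positive matches are sent
}

interface Account {
  id: string;
  isChildAccount: boolean;
}

// Toy stand-in for a real perceptual hash or ML classifier.
function perceptualHash(image: Uint8Array): string {
  return Array.from(image.slice(0, 8)).join("-");
}

function scanOutgoingImage(account: Account, image: Uint8Array, config: ScanConfig): void {
  // A single flag decides whose messages get scanned.
  if (!config.scanAllAccounts && !account.isChildAccount) return;

  const hash = perceptualHash(image);
  if (config.blockedHashes.has(hash)) {
    // In a deployed system, this is where a report leaves the device.
    void fetch(config.reportEndpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ account: account.id, hash }),
    });
  }
}
```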

Now that Apple has built it, they will come. With good intentions, Apple has paved the road to mandated security weakness around the world, enabling and reinforcing the arguments that, should the intentions be good enough, scanning through your personal life and private communications is acceptable. We urge Apple to reconsider and return to the mantra Apple so memorably emblazoned on a billboard at 2019’s CES conference in Las Vegas: What happens on your iPhone, stays on your iPhone.

JOIN THE NATIONWIDE PROTEST

TELL APPLE: DON'T SCAN OUR PHONES


Kurt Opsahl

O (No!) Canada: Fast-Moving Proposal Creates Filtering, Blocking and Reporting Rules—and Speech Police to Enforce Them

1 month 1 week ago

Policymakers around the world are contemplating a wide variety of proposals to address “harmful” online expression. Many of these proposals are dangerously misguided and will inevitably result in the censorship of all kinds of lawful and valuable expression. And one of the most dangerous proposals may be adopted in Canada. How bad is it? As Stanford’s Daphne Keller observes, “It's like a list of the worst ideas around the world.” She’s right.

These ideas include:

  • broad “harmful content” categories that explicitly include speech that is legal but potentially upsetting or hurtful
  • a hair-trigger 24-hour takedown requirement (far too short for reasonable consideration of context and nuance)
  • an effective filtering requirement (the proposal says service providers must take reasonable measures which “may include” filters, but, in practice, compliance will require them)
  • penalties of up to 3 percent of the providers' gross revenues or up to 10 million dollars, whichever is higher
  • mandatory reporting of potentially harmful content (and the users who post it) to law enforcement and national security agencies
  • website blocking (platforms deemed to have violated some of the proposal’s requirements too often might be blocked completely by Canadian ISPs)
  • onerous data-retention obligations

All of this is terrible, but perhaps the most terrifying aspect of the proposal is that it would create a new internet speech czar with broad powers to ensure compliance, and continuously redefine what compliance means.

These powers include the right to enter and inspect any place (other than a home):

“in which they believe on reasonable grounds there is any document, information or any other thing, including computer algorithms and software, relevant to the purpose of verifying compliance and preventing non-compliance . . . and examine the document, information or thing or remove it for examination or reproduction”; to hold hearings in response to public complaints; and to “do any act or thing . . . necessary to ensure compliance.”

But don’t worry—ISPs can avoid having their doors kicked in by coordinating with the speech police, who will give them "advice" on their content moderation practices. Follow that advice and you may be safe. Ignore it and be prepared to forfeit your computers and millions of dollars.

The potential harms here are vast, and they'll only grow because so much of the regulation is left open. For example, platforms will likely be forced to rely on automated filters to assess and discover "harmful" content on their platforms, and users caught up in these sweeps could end up on file with the local cops—or with Canada’s national security agencies, thanks to the proposed reporting obligations.

Private communications are nominally excluded, but that is cold comfort—the Canadian government may decide, as contemplated by other countries, that encrypted chat groups of various sizes are not ‘private.’ If so, end-to-end encryption will be under further threat, with platforms pressured to undermine the security and integrity of their services in order to fulfill their filtering obligations. And regulators will likely demand that Apple expand its controversial new image assessment tool to address the broad "harmful content" categories covered by the proposal.

In the United States and elsewhere, we have seen how rules like this hurt marginalized groups, both online and offline. Faced with expansive and vague moderation obligations, little time for analysis, and major legal consequences if they guess wrong, companies inevitably overcensor—and users pay the price.

For example, a U.S. law intended to penalize sites that hosted speech related to child sexual abuse and trafficking led large and small internet platforms to censor broad swaths of speech with adult content. The consequences of this censorship have been devastating for marginalized communities and groups that serve them, especially organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom. For example, the law prevented sex workers from organizing and utilizing tools that have kept them safe. Taking away online forums, client-screening capabilities, "bad date" lists, and other intra-community safety tips means putting more workers on the street, at higher risk, which leads to increased violence and trafficking. The impact was particularly harmful for trans women of color, who are disproportionately affected by this violence.

Indeed, even “voluntary” content moderation rules are dangerous. For example, policies against hate speech have shut down online conversations about racism and harassment of people of color. Ambiguous “community standards” have prevented Black Lives Matter activists from showing the world the racist messages they receive. Rules against depictions of violence have removed reports about the Syrian war and accounts of human rights abuses of Myanmar's Rohingya. These voices, and the voices of aboriginal women in Australia, Dakota pipeline protestors and many others, are being erased online. Their stories and images of mass arrests, military attacks, racism, and genocide are being flagged for takedown.

The powerless struggle to be heard in the first place; platform censorship ensures they won’t be able to take full advantage of online spaces either.

Professor Michael Geist, who has been doing crucial work covering this and other bad internet proposals coming out of Canada, notes that the government has shown little interest in hearing what Canadians think of the plans. Nonetheless, the government says it is taking comments. We hope Canadians will flood the government with responses.

But it's not just Canadians who need to worry about this. Dangerous proposals in one country have a way of inspiring other nations' policymakers to follow suit—especially if those bad ideas come from widely respected democratic countries like Canada.

Indeed, it seems the people who drafted this policy looked to other countries for inspiration—but ignored the criticism those policies have received from human rights defenders, the UN, and a wide range of civil society groups. For example, the content monitoring obligations echo proposals in India and the UK that have been widely criticized by civil society, not to mention three UN Rapporteurs. The Canadian proposal seeks to import the worst aspects of Germany’s Network Enforcement Act ("NetzDG"), which deputizes private companies to police the internet on a rushed timeline that precludes any hope of balanced legal analysis, leading to takedowns of innocuous posts and satirical content. NetzDG has been heavily criticized in Germany and abroad, and experts say it conflicts with the EU’s central internet regulation, the E-Commerce Directive. Canada's proposal also bears a striking similarity to France's "hate speech" law, which was struck down as unconstitutional.

These regulations, like Canada’s proposal, depart significantly from the more sensible, if still imperfect, approach being contemplated in the European Union’s Digital Services Act (DSA). The DSA sets limits on content removal and allows users to challenge censorship decisions. Although it contains some worrying elements that could result in over-blocking of content, the DSA doesn’t follow in the footsteps of other disastrous European internet legislation that has endangered freedom of expression by forcing platforms to monitor and censor what users say or upload online.

Canada also appears to have lost sight of its trade obligations. In 2018, Canada, the United States, and Mexico finalized the USMCA agreement, an updated version of NAFTA. Article 19.17 of the USMCA prohibits treating platforms as the originators of content when determining liability for information harms. But this proposal does precisely that—in multiple ways, a platform’s legal risk depends on whether it properly identifies and removes harmful content it had no part in creating.

Ironically, perhaps, the proposal would also further entrench the power of U.S. tech giants over social media, because they are the only ones who can afford to comply with these complex and draconian obligations.

Finally, the regulatory scheme would depart from settled human rights norms. Article 19 of the International Covenant on Civil and Political Rights allows states to limit freedom of expression only in select circumstances, provided the limitation passes a three-step test: it must be prescribed by law, pursue a legitimate aim, and be necessary and proportionate. Limitations must also be interpreted and applied narrowly.

Canada’s proposal falls far short of meeting these criteria. The UN Special Rapporteur on free expression has called upon companies to recognize human rights law as the authoritative global standard for freedom of expression on their platforms. It’s profoundly disappointing to see Canada force companies to violate human rights law instead.

This law is dangerous to internet speech, privacy, security, and competition. We hope our friends in the Great White North agree, and raise their voices to send it to the scrap heap of bad internet ideas from around the globe.

Corynne McSherry

What to Do When Schools Use Canvas or Blackboard Logs to Allege Cheating

1 month 1 week ago

Over the past few months, students from all over the country have reached out to EFF and other advocacy organizations because their schools—including teachers and administrators—have made flimsy claims about cheating based on digital logs from online learning platforms that don’t hold up to scrutiny. Such claims were made against over a dozen students at the Dartmouth Geisel School of Medicine, which EFF and the Foundation for Individual Rights in Education (FIRE) criticized for being a misuse, and misunderstanding, of the online learning platform technology. Dartmouth ended that investigation and dismissed all allegations after a media firestorm. If your school is making similar accusations against students, here’s what we recommend.

Students Deserve the Evidence Against Them

Online learning platforms provide a variety of digital logs to teachers and administrators, but those same logs are not always made available to the accused students. This is unfair. True due process for cheating allegations requires that students see the evidence against them, whether that’s videos from proctoring tools, or logs from test-taking or learning management platforms like Canvas or Blackboard.

It can be difficult to know what logs to ask for, because different online learning platforms call this data by different names. In the case of Canvas, there may be multiple types of logs, depending on whether a student used the platform to take a test or access course materials while studying for it. 

Bottom line: students should be given copies of any logs that are being cited as evidence of cheating, as well as any logs that may be exculpatory. It’s all too easy for schools to cherry-pick logs that only indicate possible misconduct. With course material access logs, for example, schools often share (if they share at all) only the logs indicating that a student’s device accessed material relevant to the subject of the test, while dismissing logs that show access to less relevant materials, thereby hiding evidence that the access resulted from an automated link between the device and the platform rather than deliberate activity. Any allegation should start with the student being shown everything that the administration has access to—and we’re calling on learning platforms like Canvas and Blackboard to give students direct access, too.

[Image: a sample log from Blackboard]

Digital Logs Are Unreliable Evidence of Cheating

It’s important for both students and school officials to understand why digital logs are unreliable evidence of cheating. Course material access logs, for example, can only show that a page, document, or file was accessed by a device—not necessarily why or by whom (if anyone). Much like a cell phone pinging a tower, logs may show files being pinged by a device within short time periods, suggesting a non-deliberate process, as was the case with the access logs we saw from Dartmouth medical students. It can be impossible to know from the logs alone whether a student intentionally accessed any of the files, or whether the pings were caused by delayed loading or the automatic refresh processes that are commonplace in most websites and online services.
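
To make that mechanism concrete, here is a minimal, purely hypothetical sketch of the kind of background polling many web applications perform. Nothing in it is Canvas or Blackboard code; the file paths and refresh interval are invented for illustration. The point is that a browser tab left open can keep re-requesting course files on its own, and each of those requests can show up in a server-side access log looking exactly like a deliberate click.

```typescript
// Hypothetical illustration only -- not Canvas or Blackboard code.
// Many web apps keep an open page "fresh" by periodically re-requesting
// resources in the background, for example to update "last modified" badges.
const courseFiles = [
  "/courses/101/files/lecture-12.pdf", // invented paths, for illustration
  "/courses/101/files/study-notes.docx",
];

// Re-check each file every 60 seconds for as long as the tab stays open.
setInterval(() => {
  for (const path of courseFiles) {
    // Every request below can be recorded in the server's access log as
    // "this account's device accessed this file at this time," whether or
    // not a human being was at the keyboard.
    fetch(path, { method: "HEAD" }).catch(() => {
      // Network errors are ignored here; the attempt may still be logged.
    });
  }
}, 60_000);
```

A student who falls asleep with that tab open would generate hours of such entries, which is one reason a burst of log lines cannot, on its own, show intent.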

Canvas, for its part, has stated multiple times that both test-taking logs and course material access logs are unreliable. According to the company, test-taking logs, which purport to show student activity during a Canvas-administered test, “are not intended to validate academic integrity or identify cheating for a quiz.” Similarly, logs that purport to show student access to class documents uploaded to Canvas are not accurate either; as the company explains: “This data is meant to be used for rollups and analysis in the aggregate, not in isolation for auditing or other high-stakes analysis involving examining single users or small samples.”

Blackboard has so far made no public statements on the accuracy of its logs, but when contacted, the company said it is working on a public disclaimer to avoid misconceptions about the accuracy and use of this type of data. The company was clear that logs should not be used to allege cheating: “Blackboard does not recommend using this data alone to detect student misconduct, and further, when an inquiry is made by a client related to this type of investigation, Blackboard consistently advises on the possible inaccurate conclusions that can be drawn from the use of such data.” Both Canvas and Blackboard should be more transparent with their users about the accuracy of their logs. For now, it's imperative that educators and administrators understand the unreliability of these logs, which both companies have admitted, albeit not as openly as we would like.

Collaboration Between Students Can Be Key

If one student is being charged with cheating based on digital logs, it’s likely others are as well, so don’t be afraid of rallying fellow students. At Dartmouth medical school, collective activism helped individual students push back against the cheating allegations, ultimately forcing the administration to withdraw them. Dartmouth students accused of cheating worked together to uncover flaws in the investigation, then contacted advocacy organizations and the press, and held on-campus protests. 

Sympathetic teachers and administrators may also be valuable resources when it comes to pointing out unreliable evidence and due process problems. It may also be helpful to reach out to a technologist where possible, given the technical expertise required to examine digital data. Even a school computer club may be able to offer assistance. 

Surveillance Is Not the Solution

If a school is unable to use digital logs to prove cheating, the administration may consider adding even more invasive measures, like proctoring tools. But mandating more surveillance of students is not the answer. Schools should use technology to serve students, rather than using it as a tool to discipline them.

Disciplinary technologies that start by assuming guilt, rather than promoting trust, create a dangerous environment for students. Many schools now monitor online activity, like social media posts. They track what websites students visit. They require students to use technology on their laptops that collects and shares private data with third-party companies, while other schools have implemented flawed facial recognition technology. And many, many schools have on-campus cameras, more and more of which feed directly to police. 

But these technologies are often dangerously biased, and profoundly ineffective. They rob students of the space to experiment and learn without being monitored at every turn. And they teach young people to expect and allow surveillance, particularly when a power imbalance makes it difficult to fight back, whether that monitoring is by a school, an employer, a romantic partner, or the government. This problem is not just a slippery slope—it’s a cliff, and we must not push an entire generation off of it. Privacy is a human right, and schools should be foundational in a young person’s understanding of what it means to live in a society that respects and protects human rights.

EFF’s Statement on the Use of E-Learning Platform Logs in Misconduct Allegations

If necessary, you may wish to forward to your teachers or administrators this blog post on the problems with using digital logs as evidence of academic misconduct. If course material access logs, specifically, are being cited against you, you may forward EFF’s statement below. While we cannot assist every student individually, we hope this will help guide schools away from improperly using digital logs as evidence of cheating:

As a nonprofit dedicated to defending digital privacy, free speech, and innovation, including in the classroom, we have determined through independent research and investigation that there are several scenarios in which course material access logs from e-learning platforms can be generated without any student interaction, for example due to delayed loading on a device or the automatic refreshing of webpages. Instructure, the company behind the e-learning platform Canvas, has publicly stated that its logs (both course material access logs and test-taking logs) are not accurate and should not be used for academic misconduct investigations. The New York Times, in its own investigation into Canvas access logs, found this to be true as well. Blackboard, too, has stated that inaccurate conclusions can be drawn from the use of its logs. Any administrator or teacher who interprets digital logs as evidence that a student was cheating may very well be turning false positives into accusations of academic misconduct.

Educators who seek out technical evidence of cheating, whether from logs, proctoring apps, or other computer-generated sources, must also seek out technical expertise, follow due process, and offer students concrete routes of appeal. We urge universities to protect the due process rights of all students facing misconduct charges by ensuring basic procedural safeguards are in place to guarantee fairness. These include, among other things, access to the full suite of evidence—including evidence that might tend to exculpate the student—and sufficient technical guidance for factfinders to interpret the evidence marshaled against the student. Students should also have time to meaningfully prepare for any hearing. These safeguards are necessary to ensure a just and trustworthy outcome is reached.

Jason Kelley

The Company Behind Online Learning Platform Canvas Should Commit to Transparency, Due Process for Students

1 month 1 week ago

Canvas is an online learning platform created by the Utah-based education technology company Instructure. In the past year, the platform has also been turned into a disciplinary technology, as more and more schools have come to rely on Canvas to drive allegations of cheating—despite student protests and technical advice. So far the company has shied away from the controversy. But it’s time for Instructure to publicly and unequivocally tell schools: Canvas does not provide reliable evidence of academic misconduct.

Schools use Canvas in two ways, both of which result in digital logs being generated by the software. First, schools can use Canvas to administer tests, and the platform provides logs of the test-taking activity. Second, schools can use Canvas to host learning materials such as course lectures and notes, and the platform provides logs of when specific course material was accessed by a student’s device. Neither of these logs is accurate for disciplinary use, and Canvas knows this.
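
As a rough, purely hypothetical sketch (the field names below are invented and are not Instructure's actual schema), the two kinds of logs boil down to records along these lines. Notice what neither record can contain: whether a human being initiated the activity.

```typescript
// Invented record shapes for illustration; not Instructure's actual data model.
interface QuizEventRecord {
  timestamp: string; // when the event was recorded, which can lag the action itself
  userId: string;    // the logged-in account, not necessarily the person at the keyboard
  event: "session_started" | "question_answered" | "page_blurred" | "session_stopped";
}

interface CourseAccessRecord {
  timestamp: string; // when the request reached the server
  userId: string;    // the logged-in account
  resource: string;  // e.g. a lecture PDF or a page of notes
  // Note what is absent: nothing records whether a person initiated the request,
  // whether the content was ever rendered on screen, or whether the request came
  // from a background refresh, a prefetch, or a second device left logged in.
  // That gap is why records like these cannot establish intent.
}
```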

Since January, the Canvas instructor guide has explicitly stated: “Quiz logs should not be used to validate academic integrity or identify occurrences of cheating.” In February, an employee of Instructure commented in a community forum that “weirdness in Canvas Quiz Logs may appear because of various end-user [student] activities or because Canvas prioritizes saving student's quiz data ahead of logging events. Also, there is a known issue with logging of ‘multiple answer’ questions” (emphasis original). The employee concluded that “unfortunately, I can’t definitively predict what happened on the users’ end in that particular case.” 

And as we have previously written, along with the New York Times, course material access logs also do not accurately reflect student activity—they could indicate either that a student was actively engaging with the course material, or simply that a student’s device remained logged in to the website while the student was not accessing the material at all. Canvas’ API documentation states that access logs should not be used for “high-stakes analysis” of student behavior.

Despite the admitted and inherently unreliable nature of Canvas logs, and an outcry by accused students and digital rights organizations, schools continue to rely on Canvas logs to determine cheating—and Instructure continues to act as if nothing is wrong. Meanwhile, students’ educational careers are being harmed by these flimsy accusations.

Instructure Must Right This Wrong

Last year, the administration of James Madison University lowered the grades of students who had been flagged as “inactive” during an exam according to Canvas test-taking logs. Students there spoke out to criticize the validity of the logs. Earlier this year, over a dozen medical students at Dartmouth’s Geisel medical school were accused of cheating after a dragnet investigation of their Canvas course material access logs. Dartmouth’s administration eventually retracted the allegations, but not before students spent months fighting the allegations, while fearing what they could mean for their futures. And in the past few months, EFF has heard from other students around the country who have been accused of cheating by their schools, based solely or primarily on Canvas logs.

Cheating accusations can result in lowered grades, a black mark on student transcripts, suspension, and even expulsion. Despite the serious consequences students face, they often have very limited recourse. Disturbingly, Canvas provides logs to administrators and teachers, but accused students have been unable to see those same logs, either via the platform itself or from school officials. 

Students deserve better. Schools should accept that Canvas logs cannot replace concrete, dispositive evidence of cheating. If you are a student who has been affected by the misuse of Canvas logs, we’ve written a guide for educating your administrators and teachers on their inaccuracy.

Instructure, for its part, must do better. Admitting to the unreliability of Canvas logs on obscure webpages is not enough. We reached out privately to Instructure, with no response. Now we are publicly calling on the company to issue a clear, public announcement that Canvas logs are unreliable and should not be used to fuel cheating accusations. The company should also allow students to access the same logs their schools are increasingly using to accuse them of academic misconduct—which is important because, when viewed in their entirety, Canvas logs often don’t reveal activity consistent with cheating.

Instructure has a responsibility to prevent schools from misusing its products. Taking action now would show the company’s commitment to the integrity of the academic process, and would give students a chance to face their accusers on the same footing, rather than resigning themselves to an unjust and opaque process.

Bill Budington