Activists Sue San Francisco for Wide-Ranging Surveillance of Black-Led Protests Against Police Violence

1 month 3 weeks ago
Violating San Francisco’s Surveillance Technology Ordinance, SFPD Secretly Used Camera Network to Spy on People Protesting Police Killing of George Floyd

San Francisco—Local activists sued San Francisco today over the city police department’s illegal use of a network of more than 400 non-city surveillance cameras to spy on them and thousands of others who protested as part of the Black-led movement against police violence.

The Electronic Frontier Foundation (EFF) and the ACLU of Northern California represent Hope Williams, Nathan Sheard, and Nestor Reyes, Black and Latinx activists who participated in and organized numerous protests that crisscrossed San Francisco following the police killing of George Floyd.

During the first week of mass demonstrations in late May and early June, the San Francisco Police Department (SFPD), in defiance of a city ordinance, tapped into a sprawling camera network run by a business district to conduct live mass surveillance without first going through a legally required public process and obtaining permission from the San Francisco Board of Supervisors.

“San Francisco police have a long and troubling history of targeting Black organizers going back to the 1960s,” said EFF Staff Attorney Saira Hussain. “This new surveillance of Black Lives Matter protesters is exactly the kind of harm that the San Francisco supervisors were trying to prevent when they passed a critical surveillance technology ordinance last year. And still, with all eyes watching, SFPD brazenly decided to break the law.”

“In a democracy, people should be able to freely protest without fearing that police are spying and lying in wait,” said Matt Cagle, Technology and Civil Liberties Attorney at the ACLU of Northern California. “Illegal, dragnet surveillance of protests is completely at odds with the First Amendment and should never be allowed. That the SFPD flouted the law to spy on activists protesting the abuse and killing of Black people by the police is simply indefensible.”

“Along with thousands of people in San Francisco, I took to the streets to protest police violence and racism and affirm that Black lives matter,” said Hope Williams, the lead plaintiff in this lawsuit and a protest organizer. “It is an affront to our movement for equity and justice that the SFPD responded by secretly spying on us. We have the right to organize, speak out, and march without fear of police surveillance.”

Records obtained and released by EFF in July show SFPD received a real-time remote link to more than 400 surveillance cameras. The vast camera network is operated by the Union Square Business Improvement District (USBID), a non-city entity. These networked cameras are high definition, allow remote zoom and focus, and are linked to a software system that can automatically analyze content, including distinguishing between a car and a person passing within the frame.

The lawsuit calls on a court to order San Francisco to enforce the Surveillance Technology Ordinance and bring the SFPD back under the law. San Francisco’s Surveillance Technology Ordinance was enacted in 2019 following a near-unanimous vote of the Board of Supervisors.

The plaintiffs, all of whom participated in protests against police violence and racism in May and June of 2020, are:

  • Hope Williams, a Black San Francisco activist. Williams organized and participated in several protests against police violence in San Francisco in May and June 2020.
  • Nathan Sheard, a Black San Francisco activist and community organizer at EFF. In his personal capacity, Sheard attended one protest and helped connect protestors with legal support in May and June 2020.
  • Nestor Reyes, a Latinx activist, native San Franciscan, and community healer. Reyes organized and participated in several protests against police violence in San Francisco in May and June 2020.

For the complaint:
https://www.eff.org/document/williams-v-san-francisco-complaint

Link to video statement of attorneys and client:
https://youtu.be/8gYd9oZzdHg

Case pages:
EFF case page
ACLU case page

For more on police spying tech:

Contact: Saira Hussain, Staff Attorney, saira@eff.org; press@aclunc.org
Karen Gullo

Announcing Global Privacy Control in Privacy Badger

1 month 3 weeks ago

Today, we’re announcing that the upcoming release of Privacy Badger will support the Global Privacy Control, or GPC, by default.

GPC is a new specification that allows users to tell companies they'd like to opt out of having their data shared or sold. By default, Privacy Badger will send the GPC signal to every company you interact with, alongside the Do Not Track (DNT) signal. Like DNT, GPC is transmitted through an HTTP header and a new JavaScript property, so every server your browser talks to and every script it runs will know that you intend to opt out of having your data shared or sold. Compared with ad industry-supported opt-out mechanisms, GPC is simple, easy to deploy, and works well with existing privacy tools.
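For the curious, here is a minimal sketch of what the signal looks like on the wire, assuming Node.js and TypeScript; the `Sec-GPC` header name comes from the GPC proposal, while the toy server itself is purely illustrative:

```typescript
// Minimal sketch: detecting the GPC opt-out signal on the server side.
// Per the GPC proposal, participating browsers send a `Sec-GPC: 1`
// request header (and expose `navigator.globalPrivacyControl` to
// client-side scripts). This toy server only reports whether the
// signal arrived; a compliant site would disable its data-sharing
// or data-sale pipeline when the signal is set.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Node lowercases incoming header names.
  const optedOut = req.headers["sec-gpc"] === "1";

  res.setHeader("Content-Type", "text/plain");
  res.end(
    optedOut
      ? "GPC received: treating this request as an opt-out of data sale."
      : "No GPC signal present."
  );
});

server.listen(8080);
```

Privacy Badger sets the signal for you; the sketch above is only meant to show what a receiving server sees.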

DNT vs. GPC

Do Not Track is an older proposed web standard, meant to tell companies that you don't want to be tracked in any way. (Learn more about what we mean by "tracking" here). Privacy Badger was built around DNT, and will continue to send a DNT signal along with every request your browser makes. Privacy Badger gives third-party companies a chance to comply with DNT by adopting EFF’s DNT policy, and blocks those that look like they're tracking you anyway.

If DNT already expresses your intent to opt out of tracking, why do we need GPC? When DNT was developed, many websites simply ignored users’ requests not to be tracked. That's why Privacy Badger has to act as an enforcer: trackers that don't want to comply with your wishes get blocked. Today, users in many jurisdictions, including California, Nevada, and the European Economic Area, have the legal right to opt out of some kinds of tracking. That's where GPC comes in.

GPC is an experimental new protocol for communicating opt-out requests that align with privacy laws. For example, the California Consumer Privacy Act gives California residents the right to opt out of having their data sold. By sending the GPC signal, Privacy Badger is telling companies that you would like to exercise your rights. And while Privacy Badger only enforces DNT compliance against third-party domains, GPC applies to everyone—the first-party sites you visit, and any third-party trackers they might invite in.

GPC is a new proposal, and it hasn't been standardized yet, so many sites will not respect it right away. Eventually, we hope GPC will represent a legally-binding request to all companies in places with applicable privacy laws.

To stop tracking, first ask, then act

The CCPA and other laws are not perfect, and many of our users continue to live in places without strong legal protections. That’s why Privacy Badger continues to use both approaches to privacy. It asks websites to respect your privacy, using GPC as an official request under applicable laws and DNT to express what our users actually want (to opt out of all tracking). It then blocks known trackers that refuse to comply with DNT from loading at all.

Starting with this release, Privacy Badger will begin setting the GPC signal by default. Users can opt out of sending this signal, along with DNT, in their Privacy Badger settings. In addition, users can disable Privacy Badger on individual first-party sites in order to stop sending the GPC signal to those sites.

Bennett Cyphers

House Antitrust Report Is a Bold Prescription for Curing Big Tech’s Ills

1 month 3 weeks ago

The long-awaited report [PDF] by the House Judiciary Committee staff[1] on Big Tech’s monopoly power hits all the right notes—and just a few wrong ones. Following a year of hearings and research, the staff of the Subcommittee on Antitrust found that Facebook, Google, Amazon, and Apple all have either significant market power or outright monopoly power in a variety of markets. Many of the report’s recommendations echo calls EFF has also made, proof of just how obviously effective, needed, and common-sense they are.

The power of Big Tech weakens innovation, erodes privacy, and undermines “political and economic liberties,” says the report. We’re pleased to see the report go beyond U.S. antitrust law’s narrow focus on consumer prices. The report also recommends many of the interventions that EFF has championed: new requirements for interoperability, a tougher standard for approving mergers and acquisitions, and stepped-up DOJ and Federal Trade Commission enforcement of the antitrust laws.

Interoperability as a Remedy

EFF has long pointed out the value of interoperability as a monopoly-killer and innovation promoter that harnesses the skills of diverse and widely distributed entrepreneurs, without the central planning of governments or giant tech firms. That’s why we’re pleased that one of the committee’s recommendations is to require the Big Tech firms to allow their services to interoperate with competitors, breaking the power of network effects. Policies to promote compatible products are “an important complement, not substitute, to vigorous antitrust enforcement,” says the report.

Privacy Focus

The report explains how “the persistent collection and misuse of consumer data” by Big Tech firms is a sign of their monopoly power and gets worse the closer to monopoly a company gets. For example, major privacy scandals and data breaches haven’t caused many people to stop using Facebook, which is evidence of the social network’s monopoly power. People continue to use Facebook not because they trust it, but because it’s where their friends and family are. The lack of competition has allowed the Big Tech firms to create a “race to the bottom” on privacy. Importantly, the report connects this to antitrust analysis: while most Facebook and Google services are free to the consumer, poor privacy protection means a lower-quality product that customers wouldn’t accept if they had alternatives. That’s a harm that antitrust laws can address.

Get Tougher on Mergers

The report also recommends raising the bar on approval of mergers and acquisitions by the dominant tech platforms through a burden-shift: before acquiring nascent tech firms, the Big Tech companies would need to prove that the acquisition would not increase monopoly power or substantially decrease competition. This shift is another measure that EFF has long supported, because it makes sense: merging firms always have the most information about impacts on consumers and competition, information that enforcers often have difficulty obtaining, especially with limited budgets for litigation.

For News Media, It’s Goliath vs. Goliath

The report stumbles when it makes recommendations about preserving news media in the face of declining advertising revenues. The report recommends offering news media companies an exemption from the antitrust laws, allowing them to join as a bloc to negotiate some form of payments from Big Tech news aggregators. The problem with this is that U.S. news media is itself a highly concentrated industry. Exempting another set of giant firms from antitrust scrutiny will tend to shift money from one set of monopolists to another, without providing more media diversity for consumers. The report recognizes that such exemptions are “disfavored” and may run contrary to the goals of antitrust law.

Overall, the Judiciary Committee report is a strong, evidence-based prescription for fixing antitrust law to help address the problems of Big Tech. We hope the conversation continues, with good changes to the law and increased enforcement yet to come.

[1] While much of the effort that led to the report was bipartisan, the final report was issued by the majority (Democratic) committee staff.

Mitch Stoltz

Orders from the Top: The EU’s Timetable for Dismantling End-to-End Encryption

1 month 3 weeks ago

The last few months have seen a steady stream of proposals, encouraged by the advocacy of the FBI and Department of Justice, to provide “lawful access” to end-to-end encrypted services in the United States. Now lobbying has moved from the U.S., where Congress has been largely paralyzed by the nation’s polarization problems, to the European Union—where advocates for anti-encryption laws hope to have a smoother ride. A series of leaked documents from the EU’s highest institutions show a blueprint for how they intend to make that happen, with the apparent intention of presenting anti-encryption law to the European Parliament within the next year.

The public signs of this shift in the EU—which until now has been largely supportive toward privacy-protecting technologies like end-to-end encryption—began in June with a speech by Ylva Johansson, the EU’s Commissioner for Home Affairs.

Speaking at a webinar on “Preventing and combating child sexual abuse [and] exploitation”, Johansson called for a “technical solution” to what she described as the “problem” of encryption, and announced that her office had initiated “a special group of experts from academia, government, civil society and business to find ways of detecting and reporting encrypted child sexual abuse material.”

The resulting report was subsequently leaked to Politico. It includes a laundry list of tortuous ways to attempt the impossible: allowing government access to encrypted data without breaking encryption.

At the top of that precarious stack was, as with similar proposals in the United States, client-side scanning. We’ve explained previously why client-side scanning is a backdoor by any other name. Unalterable computer code that runs on your own device, comparing the contents of your messages in real time to an unauditable ban-list, stands directly opposed to the privacy assurances that the term “end-to-end encryption” is understood to convey. It’s the same approach used by China to keep track of political conversations on services like WeChat, and has no place in a tool that claims to keep conversations private.

It’s also a drastically invasive step for any government that wishes to mandate it. For the first time outside authoritarian regimes, Europe would be declaring which Internet communication programs are lawful, and which are not. While the proposals are the best that academics faced with squaring a circle could come up with, they may still be too aggressive to succeed politically as enforceable regulation—even if tied, as Johansson ensured they were in a subsequent Commission communication, to the fight against child abuse.

But while it would require a concerted political push, the EU’s higher powers are gearing up for such a battle. In late September, Statewatch published a note, now being circulated by the current German Presidency of the EU, called “Security through encryption and security despite encryption”, encouraging the EU’s member states to agree on a new EU position on encryption in the final weeks of 2020.

While conceding that “the weakening of encryption by any means (including backdoors) is not a desirable option”, the Presidency’s note also positively quoted an EU Counter-Terrorism Coordinator (CTC) paper from May (obtained and made available by German digital rights news site NetzPolitik.org), which calls for a “front-door”: a “legal framework that would allow lawful access to encrypted data for law enforcement without dictating technical solutions for providers and technology companies”.

The CTC highlighted what would be needed in order to legislate this framework:

The EU and its Member States should seek to be increasingly present in the public debate on encryption, in order to inform the public narrative on encryption by sharing the law enforcement and judicial perspective…

This avoids a one-sided debate mainly driven by the private sector and other nongovernmental voices. This may involve engaging with relevant advocacy groups, including victims associations that can relate to government efforts in that area. Engagement with the [European Parliament] will also be key to prepare the ground for possible legislation.

A speech by Commissioner Johansson tying the defeat of secure messaging to protecting children; a paper spelling out “technical solutions” to attempt to fracture the currently unified (or “one-sided”) opposition; and, presumably in the very near future, once the EU has published its new position on encryption, a concerted attempt to lobby members of the European Parliament for this new legal framework: these all fit the Counter-Terrorism Coordinator’s original plans.

We are in the first stages of a long anti-encryption march by the upper echelons of the EU, headed directly toward Europeans’ digital front-doors. It’s the same direction as the United Kingdom, Australia, and the United States have been moving for some time. If Europe wants to keep its status as a jurisdiction that treasures privacy, it will need to fight for it.

Danny O'Brien

California Community Leaders Call on Governor to Help Get State Broadband Up to Speed

1 month 3 weeks ago
Gov. Gavin Newsom Should Convene a Special Session to Pass Universal Access Law Before End of Year

Sacramento - More than 60 California community leaders—including public officials, businesses, education advocates, and civil rights groups—have joined the Electronic Frontier Foundation (EFF) and Common Sense Media to call on California Governor Gavin Newsom to convene a special legislative session to pass universal broadband access legislation this year.

The COVID-19 pandemic has accentuated California's longstanding broadband access crisis. More than 2 million Californians lack access to high-speed broadband today. As KQED recently reported, that includes some 1.2 million students across the state who lack adequate Internet access to do their work.

“Children should not be forced to do homework in fast food restaurant parking lots in the middle of a pandemic, and workers should not be forced to struggle with decades-old Internet infrastructure or literally no broadband access at all,” said Electronic Frontier Foundation’s Senior Legislative Counsel Ernesto Falcon.

The people of California need help, and the state should move forward now to begin the work needed to finally close the digital divide. Newsom himself has identified this as a pressing issue, recently signing an executive order to establish a state goal of 100 Mbps download speeds for all Californians, a standard that meets the demands of today’s Internet use.

“Governor Newsom’s Executive Order sets us on the right course of connecting everyone to high-speed access. Now we need his help in pushing the legislature to deliver the money and changes needed in law to deliver on his promise,” said Falcon. “If he heeds our call to invoke a special session on broadband access, this coalition stands ready to push the legislature to follow his leadership.”

For the letter:
https://www.eff.org/document/letter-gov-newsom-special-session-broadband

For more information:
https://www.eff.org/deeplinks/2020/09/join-more-fifty-california-groups-calling-urgent-action-broadband-access

Rebecca Jeschke

Supreme Court Hearing in Oracle v. Google: Will the High Court Fix the Federal Circuit's Mess?

1 month 3 weeks ago

On Wednesday the U.S. Supreme Court will hear oral arguments in the long-running case of Oracle v. Google. We’ll be following closely, and looking for signs that the Court will reverse the Federal Circuit’s dangerous decisions in this ground-breaking litigation. And then we’ll be waiting and hoping the Court will issue an opinion explaining that giving copyright protection to Application Programming Interfaces (APIs) is a bad idea or, if that protection exists, that reimplementing them is a lawful fair use.

To summarize the last nine years: Oracle claims a copyright on the Java APIs and asserts that Google infringed that copyright by using certain Java APIs in the Android OS. When it created the Android OS, Google wrote its own implementation of Java. But in order to allow developers to write their own programs for Android, Google used certain specifications of the Java APIs. Since APIs are, generally speaking, specifications that let programs talk to each other, declaring them copyrightable would strike at the heart of innovation and collaboration in technology.
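To make that distinction concrete, here is a hypothetical sketch in TypeScript (rather than Java, and with invented names loosely echoing the `Math.max` method at issue in the case) of the difference between an API's specification, which callers depend on, and the independent code that implements it:

```typescript
// The API: a specification of names, parameters, and return types
// that calling programs rely on. The interface and names here are
// invented for illustration.
interface MathApi {
  max(a: number, b: number): number;
}

// One party's implementing code...
const original: MathApi = {
  max: (a, b) => (a >= b ? a : b),
};

// ...and an independent reimplementation of the same specification.
const reimplementation: MathApi = {
  max: (a, b) => (b > a ? b : a),
};

// A program written against the specification works with either
// implementation unchanged, which is why reimplementing an API is
// how compatible, competing products get built.
function larger(api: MathApi): number {
  return api.max(2, 3);
}

console.log(larger(original), larger(reimplementation)); // 3 3
```

The copyright question in the case concerns the specification layer (the interface), not the implementing code, which Google wrote itself.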

EFF has filed numerous amicus briefs supporting Google and, more importantly, the pro-innovation stance it is taking in this case. As we’ve explained before, the two Federal Circuit opinions are a disaster for innovation in computer software. The court's first decision, which held that APIs are entitled to copyright protection, ran contrary to the views of most other courts and the long-held expectations of computer scientists. Indeed, excluding APIs from copyright protection was essential to the development of modern computers and the Internet.

Then the second decision made things worse. The Federal Circuit's first opinion had at least held that a jury should decide whether Google’s use of the Java APIs was fair, and in fact a jury did just that. But Oracle appealed again, and in 2018 the same three Federal Circuit judges reversed the jury's verdict and held that Google had not engaged in fair use as a matter of law. You might think that, having gone to the trouble of sending this case to trial at enormous expense to the parties and the court system, the Federal Circuit would respect the jury’s decision. It did not. In the court’s view, the jury’s finding was simply advisory.

That ruling created enormous legal uncertainty for any software developer thinking about reimplementing pre-existing APIs. If the first Federal Circuit opinion means that APIs are copyrightable, and the second opinion means that a jury isn’t allowed to decide that using a competitor’s APIs is a fair use, then there are few, if any, ways that a second competitor can enter a market with an API-compatible product. 

Much of the argument Wednesday may be focused on a more procedural question: whether the Federal Circuit overstepped when it substituted its judgment for that of the jury on the second round. The Supreme Court asked for additional briefing on the standard of review, i.e., whether the Federal Circuit should have simply considered whether the jury’s conclusion was reasonable and, if it was, allowed the verdict to stand.

But we are hoping the final ruling takes a bolder step, and clarifies, once and for all, that the APIs at issue simply weren’t copyrightable in the first place. A ruling for Google on fair use grounds would set a good precedent for the next developer of API-compatible software to argue that their use is also fair. But those arguments take time, money, lawyers, and, thanks to the outrageous penalties associated with copyright infringement, come with a substantial risk. And beyond all those knowable costs, wedging a layer of copyright permissions culture into API compatibility comes with serious unknowable costs: how many developers will abandon ideas for competitive software because the legal risks are too great? Huge corporations like Google can take those chances. Small startups – and their investors – will not.

The Federal Circuit created a dangerous precedent that will only discourage competition and innovation just when we need it most. The Supreme Court can and should fix this mess.

Related Cases: Oracle v. Google
Corynne McSherry

Come Back with a Warrant for my Virtual House

1 month 3 weeks ago

Virtual Reality and Augmented Reality in your home can involve the creation of an intimate portrait of your private life. VR/AR headsets can capture audio and video of the inside of your house, telemetry about your movements, and depth data and images that can build a highly accurate geometric representation of your home, mapping exactly where that mug sits on your coffee table, all generated by a simultaneous localization and mapping (SLAM) system. As Facebook’s Reality Labs explains, their “high-accuracy depth capture system, which uses dots projected into the scene in infrared, serves to capture the exact shape of big objects like tables and chairs and also smaller ones, like the remote control on the couch.” VR/AR providers can create “Replica re-creations of the real spaces that even a careful observer might think are real,” which is both the promise of and the privacy problem with this technology.

If the government wants to get that information, it needs to bring a warrant. 

Nearly twenty years ago, the Supreme Court examined another technology that would allow law enforcement to look through your walls into the sanctity of your private space—thermal imaging. In Kyllo v. United States, the Court held that a thermal scan, even from a public place outside the house, to monitor the heat emanating from your home was a Fourth Amendment search, and required a warrant. This was an important case, building upon earlier cases like United States v. Karo, which found a search when the remote activation of a beeper showed a can of ether was inside a home.

More critically, Kyllo established the principle that when new technologies[1] “explore details of the home that would previously have been unknowable without physical intrusion, the surveillance is a ‘search’ and is presumptively unreasonable without a warrant.” A VR/AR setup at home can provide a wealth of information—“details of the home”—that was previously unknowable without the police coming in through the door.

This is important, not just to stop people from seeing the dirty dishes in your sink, or the politically provocative books on your bookshelf.  The protection of your home from government intrusion is essential to preserve your right to be left alone, and to have autonomy in your thoughts and expression without the fear of Big Brother breathing down your neck. While you can choose to share your home with friends, family or the public, the ability to make that choice is a fundamental freedom essential to human rights.

Of course, a service provider may require sharing this information before providing certain services. You might want to invite your family to a Covid-safe housewarming, their avatars appearing in an exact replica of your new home, sharing the joy of seeing your new space. To get the full experience and fulfill the promise of the new technology, the details of your house—your furnishings, the art on your walls, the books on your shelf—may need to be shared with a service provider to be enjoyed by your friends. At the same time, that data creates a tempting target for law enforcement wanting to look inside your house.

Of course, the ideal would be for strong encryption and security measures to protect that information, such that only the intended visitors to your virtual house could wander the space, and the government would be unable to obtain the unencrypted information from a third party. But we also need to recognize that governments will continue to press for unencrypted access to private spaces. Even where encryption is strong between end points, governments may, like the United Kingdom, ask for the ability to insert an invisible ghost to attend the committee of correspondence meeting you hold in your virtual dining room.

While it is clear that monitoring the real-time audio in your virtual home requires a wiretap order, the government may argue that it can still observe a virtual home in real time. Not so. Carpenter v. United States provides the constitutional basis to keep the government at bay when the technology is not enough. Two years ago, in a landmark decision, the Supreme Court established that accessing historical records containing the physical locations of cellphones required a search warrant, even though they were held by a third party. Carpenter cast needed doubt on the third-party doctrine, which allows access to third-party-held records without a warrant, noting that “few could have imagined a society in which a phone goes wherever its owner goes, conveying to the wireless carrier not just dialed digits, but a detailed and comprehensive record of the person’s movements.”

Likewise, when the third-party doctrine was created in 1979, few could have imagined a society in which VR/AR systems can map, in glorious three-dimensional detail, the interior of one’s home and one’s personal behavior and movements, conveying to the VR/AR service provider a detailed and comprehensive record of the goings-on in a person’s house. Carpenter and Kyllo stand strongly for requiring a warrant for any information created by your VR/AR devices that shows the interior of your private spaces, regardless of whether that information is held by a service provider.

In California, where many VR/AR service providers are based, CalECPA generally requires a warrant or wiretap order before the government may obtain this sensitive data from service providers, with a narrow exception for subpoenas, where “access to the information via the subpoena is not otherwise prohibited by state or federal law.”  Under Kyllo and Carpenter, warrantless access to your home through VR/AR technology is prohibited by the ultimate federal law, the Constitution.

We need to be able to take advantage of the awesomeness of this new technology, where you can have a fully formed virtual space—and invite your friends to join you from afar—without creating a dystopian future where the government can teleport into a photo-realistic version of your house, able to search all the nooks and crannies measured and recorded by the tech, without a warrant. 

Carpenter led to a sea change in the law, and it has since been cited in hundreds of criminal and civil cases across the country challenging the third-party doctrine for surveillance sources like real-time location tracking, 24/7 video cameras, and automatic license plate readers. Still, the development of the doctrine will take time. No court has yet ruled on a warrant for a virtual search of your house. For now, it is up to the service providers to give a pledge, backed by a quarrel of steely-eyed privacy lawyers, that if the government comes to knock on your VR door, they will say “Come back with a warrant.”



  • 1. Kyllo used the phrase “device that is not in general public use,” which sets up an unfortunate and unnecessary test that could erode our privacy as new technologies become more widespread. Right now, the technology to surreptitiously view the interior of a SLAM-mapped home is not in general use, and even when VR and AR are ubiquitous, courts have recognized that technologies to surveil cell phones are not “in general public use,” even though the cell phones themselves are.
Kurt Opsahl

Judge Upends Vallejo’s Use of a Stingray

1 month 3 weeks ago

Cops in Vallejo have put their controversial cell-phone surveillance tool back in the box, after a judge released a tentative ruling (which the judge might or might not later finalize or amend) that they'd acquired it in violation of state law. The case was brought by Oakland Privacy, the EFF Pioneer Award-winning organization and Electronic Frontiers Alliance member. It alleges that the city of Vallejo, California, may not use its cellular surveillance tool (often called a cell-site simulator or stingray) because the police failed to get explicit approval from the city council, following input from residents, of an adequate privacy policy governing its use. According to the tentative ruling (again, it is not final), police must acquire from the Vallejo City Council a “resolution or ordinance authorizing a specific usage and privacy policy regarding that technology and meeting the requirements” of the state statute.

The City Council assembled via teleconference in spring 2020, amidst a statewide pandemic-related shelter-in-place order, to vote on the purchase of this controversial piece of surveillance equipment. It did so without adequately obtaining input from the public.

What’s worse, the city council approved the purchase in violation of a state law (S.B. 741) regulating the acquisition of such technology. The California legislature passed the law in 2015, with EFF's backing, to ensure democratic control over whether police may obtain cell-site simulators. The law prohibits local government agencies from acquiring cell-site simulators without the local governing body approving a specific privacy and usage policy that “is consistent with respect for an individual’s privacy and civil liberties.” This policy needs to be available to the public, published online, and voted on during an open hearing. But the Vallejo city council did not consider and approve such a policy when it purported to approve purchase of the technology.

After the judge’s tentative ruling, the Vallejo City Council announced it would be putting a privacy and use policy for this already-purchased machine on the docket for public discussion on October 27. As Oakland Privacy writes on their blog, “This meeting will provide an opportunity for Vallejo residents to read, review and comment upon the policy prior to adoption by the City.” We urge members of the public to turn out and voice their concerns about Vallejo police obtaining expensive new surveillance technology that can intrude on privacy, chill free speech, and disparately burden people of color.

A cell-site simulator pretends to be a cell tower in order to locate cellular devices that connect to it. Cell phones in an area connect to the device rather than the actual tower, allowing police to see unique identifiers that could be used to identify or track people. Police most commonly use cell-site simulators to locate a known phone in an unknown location (for example, to find a person wanted on a warrant), or to identify the unknown phones in a known location (for example, to learn the identity of protesters at a rally). After borrowing such a device from another agency, the Vallejo Police Department argued it needed its own, and proposed spending $766,000 on a cell-site simulator device from KeyW Corporation, along with a vehicle in which police would install it.

Police claim that cell-site simulators are a valuable tool in fighting terrorism and crime, but the truth is police often use them to target low-level infractions. For example, Maryland police deployed a stingray to collect information on the customers of a pizza shop in an attempt to find the thief who absconded with around $50 worth of chicken wings and subs. Worse, there are serious concerns that police use stingrays to identify people who exercise their First Amendment right to attend political demonstrations and protests.

Oakland Privacy’s lawsuit is an important test of CCOPS (Community Control Over Police Surveillance) ordinances, which cities around the country are adopting in order to ensure democratic control over what technology their police departments are able to acquire and use. 

We applaud this tentative ruling as a sign that protective CCOPS ordinances can prevent police from acquiring and using invasive technology without any oversight or accountability. And we applaud Oakland Privacy for bringing this case. 

Matthew Guariglia

Urgent: EARN IT Act Introduced in House of Representatives

1 month 3 weeks ago

The dangerous EARN IT Act passed the Senate Judiciary Committee last month, and now it’s been introduced in the House of Representatives.

Take Action

Tell Congress to Reject the EARN IT Act

We need your help to stop this anti-speech, anti-security bill. Email your elected officials in both chambers of Congress today and ask them to publicly oppose the EARN IT Act.

The EARN IT Act would allow all 50 state legislatures, as well as U.S. territories and Washington, D.C., to pass laws that would regulate the Internet. By breaking Section 230 of the Communications Decency Act, the EARN IT bill would allow small website owners to be sued or prosecuted under state laws, as long as the prosecution or lawsuit is somehow related to crimes against children.

We know how websites will react to this. Once they face prosecution or lawsuits based on other people’s speech, they’ll monitor their users, and censor or shut down discussion forums.

The bill also creates an advisory commission on Internet “best practices” that will be dominated by Attorney General William Barr and law enforcement agencies. Barr’s view on Internet “best practices” is well known—he wants to break encryption and let police read every message sent online.

Public outcry has already forced amendments into the EARN IT Act that purport to defend encryption—but they’re full of loopholes. That window dressing doesn’t fix the bill’s many flaws. 

Take Action

Tell Congress to Reject the EARN IT Act

Joe Mullin

The Online Content Policy Modernization Act Is an Unconstitutional Mess

1 month 3 weeks ago

EFF is standing with a huge coalition of organizations to urge Congress to oppose the Online Content Policy Modernization Act (OCPMA, S. 4632). Introduced by Sen. Lindsey Graham (R-SC), the OCPMA is yet another of this year’s flood of misguided attacks on Internet speech (read the bill [PDF]). The bill would make it harder for online platforms to take common-sense moderation measures like removing spam or correcting disinformation, including disinformation about the upcoming election. But it doesn’t stop there: the bill would also upend longstanding balances in copyright law, subjecting ordinary Internet users to up to $30,000 in fines for everyday activities like sharing photos and writing online, without even the benefit of a judge and jury.

The OCPMA combines two previous bills. The first—the Online Freedom and Viewpoint Diversity Act (S. 4534)—undermines Section 230, the most important law protecting free speech online. Section 230 enshrines the common-sense principle that if you say something unlawful online, you should be the one held responsible, not the website or platform where you said it. Section 230 also makes it clear that platforms have liability protections for the decisions they make to moderate or remove online speech: platforms are free to decide their own moderation policies however they see fit. The OCPMA would flip that second protection on its head, shielding only platforms that agree to confine their moderation policies to a narrowly tailored set of rules. As EFF and a coalition of legal experts explained to the Senate Judiciary Committee:

This narrowing would create a strong disincentive for companies to take action against a whole host of disinformation, including inaccurate information about where and how to vote, content that aims to intimidate or discourage people from casting a ballot, or misleading information about the integrity of our election systems. S.4632 would also create a new risk of liability for services that “editorialize” alongside user-generated content. In other words, sites that direct users to voter-registration pages, that label false information with fact-checks, or that provide accurate information about mail-in voting, would face lawsuits over the user-generated content they were intending to correct.

It's easy to see the motivations behind the Section 230 provisions in this bill, but they simply don’t hold up to scrutiny. This bill is based on the flawed premise that social media platforms’ moderation practices are rampant with bias against conservative views; while a popular meme in some right-wing circles, this view doesn’t hold water. There are serious problems with platforms’ moderation practices, but the problem isn’t the liberal silencing the conservative; the problem is the powerful silencing the powerless. Besides, it’s absurd to suggest that the situation would somehow be improved by putting such severe limits on how platforms moderate; the Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more freeform.

The government forcing platforms to adopt a specific approach to moderation is not just a bad idea; in fact, it’s unconstitutional. As EFF explained in its own letter to the Judiciary Committee:

The First Amendment prohibits Congress from directly interfering with intermediaries’ decisions regarding what user-generated content they host and how they moderate that content. The OCPM Act seeks to coerce the same result by punishing services that exercise their rights. This is an unconstitutional condition. The government cannot condition Section 230’s immunity on interfering with intermediaries’ First Amendment rights.

Sen. Graham has also used the OCPMA as his vehicle to bring back the CASE Act, a 2019 bill that would have created a new tribunal for hearing “small” ($30,000!) copyright disputes, putting everyday Internet users at risk of losing everything simply for sharing copyrighted images or text online. This tribunal would exist within the Copyright Office, not the judicial branch, and it would lack important protections like the right to a jury trial and registration requirements. As we explained last year, the CASE Act would usher in a new era of copyright trolling, with copyright owners or their agents sending notices en masse to users for sharing memes and transformative works. When Congress was debating the CASE Act last year, its proponents laughed off concerns that the bill would put everyday Internet users at risk, clearly not understanding what a $30,000 fee would mean to the average family. As EFF and a host of other copyright experts explained to the Judiciary Committee:

The copyright small claims dispute provisions in S. 4632 are based upon S. 1273, the Copyright Alternative in Small-Claims Enforcement Act of 2019 (“CASE Act”), which could potentially bankrupt millions of Americans, and be used to target schools, libraries and religious institutions at a time when more of our lives are taking place online than ever before due to the COVID-19 pandemic. Laws that would subject any American organization or individual — from small businesses to religious institutions to nonprofits to our grandparents and children — to up to $30,000 in damages for something as simple as posting a photo on social media, reposting a meme, or using a photo to promote their nonprofit online are not based on sound policy.

The Senate Judiciary Committee plans to consider the OCPMA soon. This bill is far too much of a mess to be saved by amendments. We urge the Committee to reject it.

Elliot Harmon

Vote for EFF on CREDO's October Ballot

1 month 3 weeks ago

Right now you can help EFF receive a portion of a $150,000 donation pool just by casting your vote! EFF is one of the three nonprofits featured in CREDO's giving group this month, so if you vote for EFF by October 31 you will direct a bigger piece of the donation pie toward protecting online freedom.

Since its founding, CREDO's mobile, energy, long distance, and credit card services customers have raised more than $90 million for charity. Their mobile customers generate the funds as they use paid services, and each month CREDO encourages the public to choose from three nonprofit recipients that drive positive change.

Anyone in the U.S. can visit the CREDO Donations site and vote for EFF, so spread the word! The more votes we receive, the higher our share of this month's donation pool.

EFF is celebrating its 30th year of fighting for technology users, and your rights online have never mattered more than they do today. Your privacy, access to secure tools, and free expression play crucial roles in seeing us through the pandemic, protecting human rights, and ensuring a brighter future online. Help defend digital freedom and vote for EFF today!

Aaron Jue

Broad Coalition Urges Court Not to Block California’s Net Neutrality Law

1 month 3 weeks ago

After the federal government rolled back net neutrality protections for consumers in 2017, California stepped up and passed a bill that does what the FCC wouldn’t: bar telecoms from blocking and throttling Internet content and imposing paid prioritization schemes. The law, SB 822, ensures that all Californians have full access to all Internet content and services—at lower prices.

Partnering with the ACLU of Northern California and numerous other public interest advocates, businesses and educators, EFF filed an amicus brief today urging a federal court to reject the telecom industry’s attempt to block enforcement of SB 822. The industry is claiming that California’s law is preempted by federal law—despite a court ruling that said the FCC can’t impose nationwide preemption of state laws protecting net neutrality.

Without legal protections, low-income Californians who rely on mobile devices for Internet access and can’t pay for more expensive content are at a real disadvantage. Their ISPs could inhibit full access to the Internet, which is critical for distance learning, maintaining small businesses, and staying connected. Schools and libraries are justifiably concerned that without net neutrality protections, paid prioritization schemes will degrade access to material that students and the public need in order to learn. SB 822 addresses that by ensuring that large ISPs do not take advantage of their stranglehold on Californians’ Internet access to slow or otherwise manipulate Internet traffic.

The large ISPs also have a vested interest in shaping Internet use to favor their own subsidiaries and business partners, at the expense of diverse voices and competition. Absent meaningful competition, ISPs have every incentive to leverage their last-mile monopolies to customers’ homes and bypass competition for a range of online services. That would mean less choice, lower quality, and higher prices for Internet users—and new barriers to entry for innovators. SB 822 aims to keep the playing field level for everyone.

These protections are important all of the time, but doubly so in crises like the ones California now faces: a pandemic, the resulting economic downturn, and a state wildfire emergency. And Internet providers have shown that they are not above using emergencies to exploit their gatekeeper power for financial gain. Just two years ago, when massive fires threatened the lives of rescuers, emergency workers, and residents, the Santa Clara fire department found that its “unlimited” data plan was being throttled by Verizon. Internet access on a vehicle the department was using to coordinate its fire response slowed to a crawl. When contacted, the company told firefighters that they needed to pay more for a better plan.

Without SB 822, Californians – and not just first responders – could find themselves in the same situation as the Santa Clara Fire Department: unable, thanks to throttling or other restrictions, to access information they need or connect with others. We hope the court recognizes how important SB 822 is and why the telecom lobby shouldn’t be allowed to block its enforcement.

Related Cases: California Net Neutrality Cases - American Cable Association, et al v. Xavier Becerra and United States of America v. State of California
Karen Gullo

Tell the Department of Homeland Security: Stop Collecting DNA and other Biometrics

1 month 4 weeks ago

Update (10/16/20): The comment window is now closed. EFF will be posting our response to the DHS in the days to come. Thank you to everyone who made your voice heard against this unjust rule change.

We need your help. On September 11, 2020, the Department of Homeland Security (DHS) announced its intention to significantly expand both the number of people required to submit biometrics during routine immigration applications and the types of biometrics that individuals must surrender. This new rule will apply to immigrants and U.S. citizens alike, and to people of all ages, including, for the first time, children under the age of 14. It would nearly double the number of people from whom DHS would collect biometrics each year, to more than six million. The biometrics DHS plans to collect include palm prints, voice prints, iris scans, facial imaging, and even DNA—which are far more invasive than DHS’s current biometric collection of fingerprints, photographs, and signatures.  (For an incisive summary of the proposed changes, click here.)

DHS has given the public until October 13, 2020, to voice their concerns about this highly invasive and unprecedented proposal. If you want your voice heard on this important issue, you must submit a comment through the Federal Register. Your visit to their website and your comment submission are subject to their privacy policy, which you can read here.

Immigrating to the United States, or sponsoring your family member to do so, should not expose your most intimate and sensitive personal data to the U.S. government. But that’s what this new rule will do, by permitting DHS to collect a range of biometrics at every stage of the “immigration lifecycle.” The government does not, and should not, take DNA samples of every person born on U.S. soil—so why should it do the same for immigrants coming to the United States or U.S. citizens seeking to petition a family member?  

We cannot allow the government to normalize, justify, or develop its capacity for the mass collection of DNA and other sensitive biometrics. This move by DHS brings us one step closer to mass dragnet genetic surveillance. It also risks leaving people’s biometric information vulnerable to breach or future misuse, by expanding the types of biometrics collected from each individual, storing all data together in one database, and using a unique identifier to link several biometrics to each person. The U.S. government has shown time and time again that it cannot protect our personal data. In 2019, DHS admitted that the images of almost 200,000 people taken for its face recognition pilot, as well as automated license plate reader data, were released onto the dark web after a cyberattack compromised a subcontractor. In 2015, the Office of Personnel Management admitted a breach of 5.6 million fingerprints, in addition to the SSNs and other personal information of more than 25 million Americans. We cannot run the risk of similar infiltrations happening again with people’s DNA, voice prints, iris scans, or facial imaging.

Tell DHS: I oppose the proposed rulemaking, which would allow the Department of Homeland Security to vastly increase the types of biometrics it collects, as well as double the number of people from whom it collects such biometrics, including children. These actions create security and privacy risks, put people in the United States under undue suspicion, and make immigrants and their U.S. citizen family members vulnerable to surveillance and harassment based on race, immigration status, nationality, and religion.

Matthew Guariglia

California Community Leaders Call for Urgent Action on Broadband Access—Add Your Organization to the List

1 month 4 weeks ago

More than fifty California organizations, businesses, and public officials—including the AARP of California, the San Francisco Tech Council, the California Center for Rural Policy, the Khan Academy, and a number of California cities and counties—join Common Sense Kids Action and EFF in urging Governor Gavin Newsom to call the legislature back into a special session to address the state’s digital divide.

The COVID-19 pandemic has accentuated California's longstanding broadband access crisis. Governor Newsom himself has identified this as a pressing issue, with a recent executive order to establish a state goal of 100 Mbps download speeds for all Californians. More than 2 million Californians lack access to high-speed broadband today. As KQED recently reported, that includes some 1.2 million students across the state who lack adequate Internet access to do their work. In a nationwide survey from Common Sense on the “homework gap,” 12 percent of teachers say a majority of their students lack home access to the internet or a computer to do schoolwork at home, though 20 percent of K-2 students and 41 percent of high school students need broadband internet access outside of school at least once a week.

And that’s in a normal year. But this is not a normal year. Lack of access has become an emergency for students today as schooling becomes remote in response to the pandemic. Many students with no access at home have been cut off from school computer labs, libraries, or other places where they may usually get the access they need. This type of situation is exactly what led to the viral pictures of two Salinas students—who clearly wanted to learn—doing their school work on the sidewalk outside the local Taco Bell.

It doesn’t have to be like this. California, home to the world’s fifth-largest economy, has solutions available. In this past legislative session, Sen. Lena Gonzalez built broad support for a deal that would have provided more than $100 million a year to secure access to high-speed Internet for families, first responders, and seniors across the state. EFF and Common Sense were proud to sponsor that bill, but despite support from the California Senate and the governor’s office, California Assembly leadership refused to hear the bill and stopped it at the last moment.

California families face these problems every day—regardless of whether their representatives are willing to help them or not. But the people of California need help, and the state should move forward now to begin the work needed to finally close the digital divide.

The following organizations have already joined the call, and we hope Governor Newsom will listen.

If your organization also believes that California cannot wait to start closing the digital divide, please reach out to Senior Legislative Counsel Ernesto Falcon or Legislative Activist Hayley Tsukayama to add your name to the list. 

Signers:

AARP California

Access Humboldt

Access Now

California Center for Rural Policy

Canal Alliance

Central Coast Broadband Consortium

City of Daly City

City of Farmersville

City Council Member, City of Gonzales

City of Greenfield

City of Watsonville

Consumer Reports

Common Sense Kids Action

Council Member, City of Gonzales

County of Monterey

EraseTheRedline Inc.

Electronic Frontier Foundation

Fight for the Future

Founder Academy

Gigabit Libraries Network

GoNoodle

Great School Choices

Indivisible Sacramento

InnovateEDU

Intertie Inc.

Khan Academy

King City Mayor Pro tempore

LA Tech4Good

Latino Community Foundation

MakeKnowledge

Mayor, City of Huron

Mayor, City of Soledad

MediaJustice

Modesto City Councilmember, District 2

Monkeybrains

Monterey County Supervisor, District 1

My Yute Soccer

National Cristina Foundation

National Digital Inclusion Alliance

National Hispanic Media Coalition

New America's Open Technology Institute

Oakland African American Chamber of Commerce

Open Door Community Health Centers

OpenMedia

Peninsula Young Democrats

Public Knowledge

San Francisco Tech Council

School on Wheels

Schools, Health & Libraries Broadband (SHLB) Coalition

The Education Trust-West

The Greenlining Institute

Mayor, Town of Colma

Trueblood Strategy

Hayley Tsukayama

Bust 'Em All: Let's De-Monopolize Tech, Telecoms AND Entertainment

2 months ago

The early 1980s were a period of tremendous ferment and excitement for tech. In the four years between 1980 and 1984, Americans met:

But no matter how exciting things were in Silicon Valley during those years, even more seismic changes were afoot in Washington, D.C., where a jurist named Robert Bork found the ear of President Reagan and a coterie of elite legal insiders and began to fundamentally reshape US antitrust law.

Bork championed an antitrust theory called "the consumer welfare standard," which reversed generations of American competition law, insisting that monopolies and monopolistic conduct were rarely a problem and that antitrust law should only be invoked when there was "consumer harm" in the form of higher prices immediately following from a merger or some other potentially anti-competitive action.

Tech and lax antitrust enforcement grew up together. For 40 years, we've lived through two entwined experiments: the Internet, with its foundational design principle that anyone should be able to talk to anyone using any protocol without permission from anyone else; and the consumer welfare standard, with its bedrock idea that monopolies are not harmful unless prices increase.

It's not a pretty sight. Forty years on, much of the dynamism of technology has been choked out of the industry, with a few firms attaining seemingly permanent dominance over our digital lives, maintaining their rule by buying or merging with competitors, blocking interoperability, and holding whole markets to ransom.

Thankfully, things are starting to change. Congress's long-dormant appetite for fighting monopolists is awakening, with hard-charging hearings and far-reaching legislative proposals.

And yet... Anyone who hangs out in policy circles has heard the rumors: this was all cooked up by Big Cable, the telecom giants who have been jousting with tech over Net Neutrality, privacy, and every other measure that allowed the public to get more value out of the wires in our homes or the radio signals in our skies without cutting in the telecoms for a piece of the action.

Or perhaps it's not the telecoms: maybe it's Big Content, the giant, hyper-consolidated entertainment companies (five publishers, four movie studios, three record labels), whose war on tech freedom has deep roots: given all their nefarious lobbying and skullduggery, is it so hard to believe they'd cook up a fake grassroots campaign to defang Big Tech under color of reinvigorating antitrust?

In any event, why selectively enforce competition laws against tech companies, while leaving these other sectors unscathed?

Why indeed? Who said anything about leaving telecoms or entertainment untouched by antitrust? The companies that make up those industries are in desperate need of tougher antitrust enforcement, and we're here for it.

Who wouldn't be? Just look at the telecoms industry, where cable and phone companies have divided up the nation like the Pope dividing up the "New World," so that they never have to compete head to head with one another. This sector is the reason that Americans pay more for slower broadband than anyone else in the world, and the pandemic has revealed just how bad this is.

When Frontier declared bankruptcy early in the Covid-19 crisis, its disclosures revealed the extent to which American families were being victimized by these monopolies. Frontier's own planning showed that it could earn $800,000,000 in profits by providing 100gb fiber to three million households. It didn't, because the company's top execs worried that spending money to build out this profitable fiber would make the company's share price dip momentarily. Since those execs are mostly paid in stock, and since none of those households had an alternative, Frontier left nearly $1 billion on the table and three million households on ancient, unreliable, slow Internet connections.

The big telcos and cable operators are in sore need of adult supervision, competitive pressure, and Congressional hearings.

And things are no better in the world of entertainment, where a string of mergers—most recently the nakedly anticompetitive Disney-Fox merger—has left performers and creators high and dry, with audiences hardly faring any better.

Anyone who tells you that we shouldn't fight tech concentration because the telecom or entertainment industry is also monopolistic is missing the obvious rejoinder: we should fight monopoly in those industries, too.

In boolean terms, trustbusting tech, entertainment, and cable is an AND operation, not a XOR operation.

Besides, for all their public performance of hatred for one another, tech, content, and telcos are perfectly capable of collaborating to screw the rest of us. If you think tech isn't willing to sell out Net Neutrality, you need to pay closer attention. If you think that tech is the champion who'll keep the entertainment lobby from installing automated copyright filters, think again. And if you think all competing industries aren't colluding in secret to rig markets, we've got some disturbing news for you.

Surviving the 21st Century is not a matter of allying yourself with a feudal lord—choosing Team Content, Team Tech, or Team Telecom—and hoping that your chosen champion will protect you from the depredations of the others.

If we're gonna make it through this monopolistic era of evidence-free policy that benefits a tiny, monied minority at the expense of the rest of us, we need to demand democratic accountability for market abuses, demand a pluralistic market where dominant firms are subjected to controls and penalties, where you finally realize birthright of technological self-determination.

If Big Cable and Big Content are secretly gunning for Big Tech with antitrust law, they're making a dangerous bet: that trustbusting will be revived only to the extent that it is used to limit Big Tech, and then it will return to its indefinite hibernation. That's not how this works. Even if tech is where the new trustbusting era starts, it's not where it will end.

Cory Doctorow

Introducing “YAYA”, a New Threat Hunting Tool From EFF Threat Lab

2 months ago

At the EFF Threat Lab we spend a lot of time hunting for malware that targets vulnerable populations, but we also spend time trying to classify malware samples that we have come across. One of the tools we use for this is YARA. YARA is described as “The Pattern Matching Swiss Knife for Malware Researchers.” Put simply, YARA is a program that lets you create descriptions of malware (YARA rules) and scan files or processes with them to see if they match. 
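
For readers new to YARA, here is a minimal sketch of what a rule and a scan look like, using the third-party yara-python bindings; the rule contents and the file path are invented for illustration and are not part of our actual research rulesets.

    import yara

    # A toy YARA rule: flag any file containing either a suspicious
    # domain string or a particular byte sequence.
    TOY_RULE = r'''
    rule toy_example
    {
        strings:
            $domain = "evil.example.com"
            $bytes  = { 6A 40 68 00 30 00 00 }
        condition:
            any of them
    }
    '''

    rules = yara.compile(source=TOY_RULE)           # compile the rule text
    for match in rules.match("/tmp/suspect.bin"):   # scan a single file
        print(match.rule)                           # prints "toy_example" on a hit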

The community of malware researchers has amassed a great deal of useful YARA rules over the years, and we use many of them in our own malware research efforts. One such repository of YARA rules is the Awesome YARA guide, which contains links to dozens of high-quality YARA repositories. 

Managing a ton of YARA rules in different repositories, plus your own sets of rules, can be a headache, so we decided to create a tool to help us manage our YARA rules and run scans. Today we are presenting this open source tool free to the public: YAYA, or Yet Another YARA Automation. 

Introducing YAYA

YAYA is a new open source tool to help researchers manage multiple YARA rule repositories. YAYA starts by importing a set of high-quality YARA rules and then lets researchers add their own rules, disable specific rulesets, and run scans of files. YAYA only runs on Linux systems for now. The program is geared towards new and experienced malware researchers, or those who want to get into malware research. No previous YARA knowledge is required to be able to run YAYA.

A video example of YAYA being run
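
To give a rough sense of the bookkeeping a tool like YAYA automates, here is a hedged sketch—written in Python with the yara-python bindings rather than YAYA's own code—of compiling rules gathered from several local repository checkouts and scanning a directory with all of them at once. The repository names and paths are hypothetical.

    import os
    import yara

    # Hypothetical local checkouts of community YARA rule repositories.
    RULE_REPOS = {
        "awesome": "/opt/yara-rules/awesome",
        "custom": "/opt/yara-rules/custom",
    }

    def collect_rule_files(repos):
        """Find every .yar/.yara file in each repository checkout."""
        filepaths = {}
        for name, root in repos.items():
            for dirpath, _, filenames in os.walk(root):
                for fn in filenames:
                    if fn.endswith((".yar", ".yara")):
                        path = os.path.join(dirpath, fn)
                        filepaths[path] = path  # the path doubles as a unique namespace
        return filepaths

    def scan_directory(rules, target):
        """Scan every file under `target`, printing matching rule names."""
        for dirpath, _, filenames in os.walk(target):
            for fn in filenames:
                path = os.path.join(dirpath, fn)
                try:
                    for match in rules.match(path):
                        print(f"{path}: {match.rule}")
                except yara.Error:
                    pass  # unreadable or vanished file; skip it

    # Compiling thousands of community rules together can trip over duplicate
    # rule identifiers; deduplicating and disabling rulesets is exactly the
    # kind of housekeeping a manager like YAYA takes off your hands.
    rules = yara.compile(filepaths=collect_rule_files(RULE_REPOS))
    scan_directory(rules, "/home/user/samples")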

If you are interested in getting YAYA or contributing to its development, you can find the GitHub repository here. We hope this tool will make a useful addition to many malware researchers’ toolkits. 

Cooper Quintin

The Government’s Antitrust Suit Against Google: Go Big and Do It Right

2 months ago

U.S. antitrust enforcers are reported to be crafting a lawsuit against Google (and its parent company, Alphabet). The Department of Justice and a large coalition of state attorneys general are meeting this week and could file suit very soon. While it will reportedly focus on Google’s dominance in online advertising and search, the suit could encompass many more facets of Google’s conduct, including its dominance in mobile operating systems and Web browsers. This suit has the potential to be the most significant antitrust case against a technology company in over 20 years, since the DOJ’s 1998 suit against Microsoft.

All users of Google products, and everyone who views Google-brokered ads—in other words, nearly all Internet users—have a stake in the outcome of this lawsuit. That’s why it needs to be done right. It should cover more than just advertising markets, and focus on Google’s search, browser, and mobile OS market power as well. And the antitrust authorities should pursue smart remedies that look beyond money damages, including breakups and requiring interoperability with competitive products, all while being careful to protect users’ free speech and privacy interests that will inevitably be caught up in the mix. Like many, we worry that if this is rushed, it might not go well, so we urge the enforcers to take the time they need to do it right, while recognizing that there is ongoing harm.

The Importance of “Big Case” Antitrust

U.S. government antitrust suits have an important place in the history of innovation, because even when they didn’t end with a court order to break up a company, there is a reasonable case to be made that they created breathing room for innovation. The DOJ’s suit against IBM, for “monopolizing or attempting to monopolize the general purpose electronic digital computer system market,” began in 1969, went to trial in 1975, and was withdrawn in 1982. Even though the case didn’t end with a court-ordered remedy, a decade-plus of public scrutiny into IBM’s business practices arguably kept the company from squashing nascent competitors like Microsoft as it had squashed other rivals for decades. That result likely helped launch the personal computer revolution of the 1980s, fueled by the ideas of numerous companies beyond “Big Blue.”

The government’s suit against AT&T, which had held a legally recognized monopoly over telephone service in the U.S. for most of the twentieth century, almost ended the same way as the IBM suit. Filed in 1974, US v. AT&T had not reached a judgment by 1982. But years of scrutiny, combined with the rise of potential competitors who were itching to enter telephone service and related markets, led AT&T’s leadership to agree to a breakup, allowing the “Baby Bells” and technology-focused spinoffs to innovate in data communications and the development of the Internet in ways that “Ma Bell” could not. It’s hard to imagine the Internet entering widespread use in the 1990s with a monolithic AT&T still deciding who could connect to telephone networks and what devices they could use. Even though the successors to AT&T eventually re-assembled themselves, the innovation unleashed by the breakup is still with us.

The 1998 case against Microsoft over its exclusion of rival Web browser software from PCs was also a boon to innovation. At the time, an AT&T-style breakup of Microsoft, perhaps splitting its operating system and application software divisions into separate companies, was a distinct possibility. In fact, the trial court ordered just that. On appeal, the D.C. Circuit rejected a breakup and instead ordered Microsoft to adhere to ongoing restrictions on its behavior towards competing software vendors and PC manufacturers. Once again, this seemed like something less than shining success. But facing ongoing monitoring of its behavior towards competitors, Microsoft shelved plans to crush a little-known search company called Google.

Two decades later, Google has expanded into so many Internet-related markets that its position in many of them appears unassailable. Google has been able to buy out or drive out of business nearly every company that dares compete with it in a variety of markets. The engine of disruptive innovation has broken down again, and a bold, thoughtful antitrust challenge by federal and state enforcers may be just the thing to help revive it.

Look at All of Alphabet, Not Just Advertising

News reports suggest that the suit against Google may focus on the company’s dominance of Web advertising, which is alleged to depress the ad revenues available to publishers. The suit may also address Google’s conduct towards competitors in search markets, such as travel and product searches. It could also look at Google’s history of mergers and acquisitions, such as its 2007 purchase of the advertising company DoubleClick, to see whether those deals led to Google acquiring monopoly power in various markets.

On the scope of the lawsuit, antitrust enforcers should go big. A suit that challenges Google’s conduct in advertising markets alone would not benefit consumers very much. While Google’s vast collection of data about users’ browsing habits and preferences, derived from its ad networks, causes great harm to consumers, it’s not yet clear whether dozens of smaller competitors in the ad-tech space, some of which are extremely shady, will be better stewards of user privacy. The answer here involves both antitrust and much-needed new privacy law. That will help consumers across the board.

The DOJ and states should also challenge Google’s leveraging of its existing market power in search, Web browsing (Chrome), and mobile operating systems (Android) to shut out competition. The suit should also address whether Google’s collection and control of user data across so many apps and devices—plus data from the millions of websites connected to Google’s advertising networks—gives it an advantage in other markets that triggers antitrust scrutiny. These are, in part, novel and difficult claims under antitrust law. But enforcers should aim high. Success in holding Google to account for its wielding of monopoly power, even if it does not succeed entirely, can help the courts and possibly Congress adapt antitrust law for the digital age. It can also help create a bulwark against further centralization of the Internet. And as history shows, even a suit that doesn’t ultimately lead to a breakup can help make room for innovation from diverse sources beyond the walls of today’s tech giants.

Have A Big Toolbox of Remedies: Interoperability, Follow-On Innovation, Conduct Rules, and Breakups

It’s obvious by now that antitrust enforcers need to pursue remedies beyond money damages. Google and Alphabet are large enough to treat almost any fine as a cost of doing business without significantly changing their behavior. A court-ordered breakup of Alphabet should be seriously considered as part of the case, as well—perhaps an order to separate aspects of Google’s advertising business or to unwind Google’s most problematic acquisitions.

Breakups shouldn’t be the only arrow in the government’s quiver of remedies, though. Ordering Google to allow its competitors to build products that interoperate would strike at the heart of Google’s monopoly problem and might be an easier lift. As we’ve shown through many examples, building products that can connect to existing, dominant products, without permission from the incumbent vendor, can be a potent antimonopoly weapon. It can help put users back into control.

In Google’s case, that could mean ordering Google not to interfere with products that block Google-distributed ads and tracking. Allowing ad- and tracker-blocking technologies to flourish would allow users to shape the ad-tech market by favoring companies that collect less user data, and who are more responsible with the data they have. It could also spur innovation in that market (note: EFF’s Privacy Badger is one of those technologies, but this could help many services). In that scenario, Google would have to compete for users’ views and clicks by protecting their privacy.

Interoperability requirements imposed as an antitrust remedy, or as part of a settlement, avoid many of the problems that come with more generalized technology mandates because orders can be crafted to suit a particular company’s technologies and practices. Such orders can also include ongoing monitoring to ensure compliance, helping avoid the security and other problems that could arise with a one-size-fits-all approach.

The upcoming lawsuit against Google is a great opportunity to begin addressing the problems of Big Tech’s monopoly power. If enforcers act boldly, they can set new antitrust precedents that will promote competition throughout high-tech markets. They can also use this opportunity to advance interoperability as a spur to innovation and a monopoly remedy.

Mitch Stoltz

Students Are Pushing Back Against Proctoring Surveillance Apps

2 months ago

Special thanks to legal intern Tracy Zhang, who was lead author of this post.

Privacy groups aren’t the only ones raising the alarm about the dangers of invasive proctoring apps. Through dozens of petitions across the country and around the globe, students, too, are pushing school administrators and teachers to consider the risks these apps create.  


Students at the University of Texas at Dallas are petitioning the school to stop using the proctoring app Honorlock. The petition has over 6,300 signatures, notes that Honorlock can collect “your face, driver’s license, and network information,” and calls use of Honorlock a “blatant violation of our privacy as students.” Students at Florida International University are petitioning their school to stop using Honorlock as well, gathering over 7,200 signatures. They highlight the amount of data that Honorlock collects and that Honorlock is allowed to keep the information for up to a year and, in some cases, 2 years. Students at California State University Fullerton are petitioning the school to stop using Proctorio, calling it “creepy and unacceptable” that students would be filmed in their own house in order to take exams. The petition has over 4,500 signatures.  

But it’s not just privacy that's at stake. While almost all the petitions we’ve seen raise very real privacy concerns—from biometric data collection, to the often overbroad permissions these apps require over the students’ devices, to the surveillance of students’ personal environments—these petitions make clear that proctoring apps also raise concerns about security, equity and accessibility, cost, increased stress, and bias in the technology.  

A petition by the students at Washington State University, which has over 1,700 signatures, raises concerns that ProctorU is not secure, pointing to a July 2020 data breach in which the information of 440,000 users was leaked. Students at the University of Massachusetts Lowell are petitioning the school to stop using Respondus, in particular calling out the access that its Ring-0 software has to students’ devices, and noting that the software “creates massive security vulnerabilities and attack vectors, and thus cannot be tolerated on personal devices under any circumstances.” The petition has over 1,000 signatures.

Students at the University of Colorado Boulder raise concerns about the accessibility of proctoring app Proctorio, saying that “the added stress of such an intrusive program may make it harder for students with testing anxiety and other factors to complete the tests.” The petition has over 1,100 signatures. The Ontario Confederation of University Faculty Associations wrote a letter speaking out about proctoring technologies, noting that the need for access to high-speed internet and newer computer technologies “increase [students’] stress and anxiety levels, and leave many students behind.” 

In addition to privacy concerns, the petition from students at Florida International University notes that because Honorlock requires a webcam and microphone, “students with limited access to technology or a quiet testing location” are placed at a disadvantage, and that the required use of such technology “does not account for students with difficult living situations.” A petition against Miami University’s use of Proctorio notes that its required use “discriminates against neurodivergent students, as it tracks a student’s gaze, and flags students who look away from the screen as ‘suspicious.’” This, the petition adds, “negatively impacts people who have ADHD-like symptoms.” The petition also noted that proctoring software often had difficulty recognizing students with black or brown skin and tracking their movements. Their petition has over 400 signatures.

Students have seen success through these petitions. A petition at The City University of New York, supported by the University Student Senate and other student body groups, resulted in the decision that faculty and staff may not compel students to participate in online proctoring. After students at the University of London petitioned against the use of Proctortrack, the university decided to move away from the third-party proctoring provider.


Below, we’ve listed some of the larger petitions and noted their major concerns. There are hundreds more, and regardless of the number of signatures, it’s important to note that even a few concerned students, teachers, or parents can make a difference at their schools.  

As remote learning continues, these petitions and other pushback from student activists, parents, and teachers will undoubtedly grow. Schools must take note of this level of organized activism. Working together, we can make the very real concerns about privacy, equity, and bias in technology important components of school policy, instead of afterthoughts.  

***** 

If you want to learn more about defending student privacy, EFF has several guides and blog posts that are a good place to start.  

  • A detailed explanation of EFF’s concerns with unnecessary surveillance of proctoring apps is available here
  • Our Surveillance Self-Defense Guide to student privacy covers the basics of techniques that are often used to invade privacy and track students, as well as what happens to the data that’s collected, and how to protect yourself. 
  • Proctoring apps aren’t the only privacy-invasive tools schools have implemented. Cloud-based education services and devices can also jeopardize students’ privacy as they navigate the Internet—including children under the age of 13. From Chromebooks and iPads to Google Apps for Education, this FAQ provides an entry-point to learn about school-issued technology and the ramifications it can have for student privacy. 
  • Parents and guardians should also understand the risks created when schools require privacy-invasive apps, devices, and technologies. Our guide for them is a great place to start, with ways to take action, and includes a printable FAQ that can be quickly shared with other parents and brought to PTA meetings. 
  • All student privacy-related writing EFF does is collected on our student privacy page, which also includes basic information about the risks students are facing.
  • In the spring of 2017, we released the results of a survey that we conducted in order to plumb the depths of the confusion surrounding ed tech. And as it turns out, students, parents, teachers, and even administrators have lots of concerns—and very little clarity—over how ed tech providers protect student privacy.
  • COVID has forced many services online besides schools. Our guide to online tools during COVID explains the wide array of risks this creates, from online chat and virtual conferencing tools to healthcare apps.
  • Some schools are mandating that students install COVID-related technology on their personal devices, but this is the wrong call. In this blog post, we explain why schools must remove any such mandates from student agreements or commitments, and further should pledge not to mandate installation of any technology, and instead should present the app to students and demonstrate that it is effective and respects their privacy. 

*****

Below is a list of just some of the larger petitions against the required use of proctoring apps as of September 24, 2020. We encourage users to read the privacy policies of any website visited via these links.

  • Auburn University students note that “proctoring software is essentially legitimized spyware.”
  • NJIT petitioners write that while students agreed to take classes online, they “DID NOT agree to have [their] privacy invaded.”
  • CUNY students successfully leveraged 27,000 signatures to end the “despicable overreach” of proctoring app Proctorio.
  • Students at the University of Texas at Dallas, Dallas College, and Texas A&M called the use of Honorlock “both a blatant violation of our privacy as students and infeasible for many.”
  • University of Tennessee Chattanooga students say that “Proctorio claims to keep all information safe and doesn't store or share anything but that is simply not true. Proctorio actually keeps recordings and data on a cloud for up to 30 days after they have been collected.” 
  • Washington State University students note that in July, “ProctorU had a data breach of 440,000 students/people's information leaked on the internet.”
  • In a letter to the Minister of Colleges and Universities, the Ontario Confederation of University Faculty Associations argues that “Proctortrack and similar proctoring software present significant privacy, security, and equity concerns, including the collection of sensitive personal information and the need for access to high-speed internet and newer computer technologies. These requirements put students at risk, increase their stress and anxiety levels, and leave many students behind.” 
  • In a popular post, a self-identified student from Florida State University wrote on Reddit that “we shouldn't be forced to have a third-party company invade our privacy, and give up our personal information by installing what is in reality glorified spyware on our computers.” An accompanying petition by students at FSU says that using Honorlock “blatantly violates privacy rights.”
  • CSU Fullerton students call it “creepy and unacceptable” that students would be filmed in their own house in order to take exams, and declare they “will not accept being spied on!”
  • Miami University petitioners argue that “Proctorio discriminates against neurodivergent students, as it tracks a student's gaze, and flags students who look away from the screen as 'suspicious' too, which negatively impacts people who have ADHD-like symptoms.” The petition goes on to note that “students with black or brown skin have been asked to shine more light on their faces, as the software had difficulty recognizing them or tracking their movements.”
  • CU Boulder students say that, with Proctorio, the “added stress of such an intrusive program may make it harder for students with testing anxiety and other factors to complete the tests.”
  • UW Madison students are concerned about Honorlock’s “tracking of secure data whilst in software/taking an exam (cookies, browser history); Identity tracking and tracing (driver's license, date of birth, address, private personal information); Voice Tracking as well as recognition (Specifically invading on privacy of other members of my home); Facial Recognition and storage of such data.”
  • Florida International University students note that “Honorlock is allowed to keep [recordings of students] for up to a year, and in some cases up to 2 years.” The petition also notes that “Honorlock requires a webcam and microphone. This places students with limited access to technology or a quiet testing location at a disadvantage…You are required to be in the room alone for the duration of the exam. This does not account for students with difficult living situations.”
  • Georgia Tech petitioners are concerned that data collected by Honorlock “could be abused, for example for facial recognition in surveillance software or to circumvent biometric safety system.”
  • University of Central Florida students argue that “Honorlock is not a trustworthy program and students should not be forced to sign away their privacy and rights in order to take a test.”
  • UMass Lowell students call out the “countless security vulnerabilities that are almost certainly hiding in the Respondus code, waiting to be exploited by malware and/or other forms of malicious software.”
  • University of Regina students argue that “facial recognition software and biometric scanners have been shown to uphold racial bias and cannot be trusted to accurately evaluate people of color. Eye movement and body movement is natural and unconscious, and for many neurodivergent people is completely unavoidable.”

Jason Kelley

How Police Fund Surveillance Technology is Part of the Problem

2 months ago

Law enforcement agencies at the federal, state, and local level are spending hundreds of millions of dollars a year on surveillance technology in order to track, locate, watch, and listen to people in the United States, often targeting dissidents, immigrants, and people of color. EFF has written tirelessly about the harm surveillance causes communities, and its effects are well documented. What is less talked about, but no less disturbing, are the myriad ways agencies fund the hoarding of these technologies. 

In 2016, the U.S. Department of Justice reported on the irresponsible and unregulated use and deployment of police surveillance measures in the town of Calexico, California. In one of the most notable examples of this frivolous spending culture, the department spent roughly $100,000 in seized assets on surveillance equipment (such as James Bond-style spy glasses) to dig up dirt on city council members and complaint-filing citizens with the aim of blackmail and extortion. Another example: a report from the Government Accountability Office showed that U.S. Customs and Border Protection officers used money intended to buy food and medical equipment for detainees to instead buy tactical gear and equipment. 

Drawing attention to how police fund surveillance technology is a necessary step, not just to expose the harm it does, but also to recognize how opaque and unregulated the industry is. Massive amounts of funding for surveillance have allowed police to pay for dozens of technologies that residents have no control over, or even knowledge about. When police pay for use of predictive policing software, do town residents get an inside look at how it works before it deploys police to arrest someone? No, often because that technology is “proprietary” and the company will claim that disclosure would give away trade secrets. Some vendors even tell police not to talk to the press about the technology without the company's permission, or instruct cops to leave its use out of arrest reports. When law enforcement pays private companies to use automated license plate readers, what oversight do the surveilled have to make sure that data is safe? None—and it often isn’t safe. In 2019, a hack of an ALPR vendor leaked 50,000 Customs and Border Protection license plate scans onto the web.

Law enforcement will often frame surveillance technology solely as a solution to crime, but when viewed as a thriving industry made up of vendors and buyers, police surveillance clearly has a whole lot more to do with dollars and cents. And often it is that money, not the community’s interests, that drives surveillance decisions. 

How Police Fund Surveillance:
Asset Forfeiture

Civil asset forfeiture is a process that allows law enforcement to seize money and property from individuals suspected of being involved in a crime before they have been convicted or sometimes before they’ve even been charged. When a local law enforcement agency partners with a federal agency it can apply for a share of the resources seized through a process called “equitable sharing.” Law enforcement often spends these funds on electronic surveillance, such as wiretaps, but also on other forms of surveillance technology, such as automated license plate readers.

Private Benefactors 

Wealthy individuals can have an immense impact on public safety, and are often the sources behind large-scale surveillance systems. Baltimore’s “Aerial Investigation Research” program, which would place a spy plane over the city, was funded in part by billionaires Laura and John Arnold, who put up $3.7 million. Another billionaire, Ripple’s Chris Larsen, has donated millions to neighborhood business districts throughout the San Francisco Bay Area to install sophisticated camera networks to deter property crime. The San Francisco Police Department was given live access to these cameras for over a week in order to spy on BLM protesters, invading their privacy and violating a local surveillance ordinance.

In Atlanta, businessman Charlie Loudermilk gave the city $1 million in order to create the Loudermilk Video Integration Center where police receive live feeds from public and private cameras. 

These grants, gifts, and donations illustrate the imbalance of power when it comes to decisions about surveillance technology.

Federal Grants

The federal government often pursues its nationwide surveillance goals by providing money to local law enforcement agencies. The U.S. Department of Justice has an entire office devoted to these efforts: the Bureau of Justice Assistance (BJA). Through the BJA, local agencies can apply for sums ranging from tens of thousands to millions of dollars for police equipment, including surveillance technology. Through Justice Assistance Grants (JAGs), agencies have acquired license plate readers and mobile surveillance units, along with other surveillance technologies. The BJA even has a special grant program for body-worn cameras.

Meanwhile, the U.S. Department of Homeland Security has paid local agencies to acquire surveillance technology along the U.S.-Mexico border through the Urban Area Security Initiative and Operation Stonegarden, a program that encourages local police to collaborate on border security missions.

Private Foundations

Many foundations provide technology, or funds to purchase technology, to local law enforcement. This process is similar to the “dark money” phenomenon in election politics: anonymous donors can provide money to a non-profit, which then can pass it on to law enforcement. 

Police foundations receive millions of dollars a year from large corporations and individual donors. Companies like Starbucks, Target, Facebook, and Google all provide money to police foundations which go on to buy equipment ranging from long guns to full surveillance networks.

According to ProPublica, in 2007, Target single-handedly paid for the software at LAPD’s new state-of-the-art surveillance center.

Kickbacks Between Surveillance Vendors and Police Departments

Because selling police surveillance tech is such a lucrative industry, it is no surprise that an economy of shady and unregulated kickback schemes has cropped up. Under these arrangements, police receive economic incentives to promote the adoption of certain surveillance equipment—in their own jurisdiction, to the people they should be protecting, and even to other towns, states, and countries.

Microsoft developed the NYPD’s sweeping city-wide surveillance system, the Domain Awareness System, which was built gradually over years and cost $30 million. Its formal unveiling in 2012 led Microsoft to receive a slew of requests from other cities to buy the technology. Now, according to the New York Times, the NYPD receives 30% of “gross revenues from the sale of the system and access to any innovations developed for new customers.”

This leads to a disturbing question that undergirds many of these public-private surveillance partnerships in which police get kickbacks: Does our society actually need that much surveillance, or are the police just profiting off its proliferation? The NYPD and Microsoft make money when a city believes it needs to invest in a large-scale surveillance system. That undermines our ability to know if the system actually works at reducing crime, because its users have an economic interest in touting its effectiveness. It also means that there are commercial enterprises that profit when you feel afraid of crime.

Ring, Amazon’s surveillance doorbell company, now has over 1,300 partnerships with police departments across the United States. As part of this arrangement, police are offered free equipment in exchange for getting a number of residents to download Ring’s Neighbors app or use a town’s discount code to purchase a Ring camera. These purchases are often subsidized by the town itself.

This raises the very troubling question: do police think you need a camera on your front door because your property is in danger, or are they hustling for a commission from Amazon when they make a sale?

This arrangement is sure to deepen the public’s distrust of police officers and their public safety advice. How would people know if safety tips are motivated by an attempt to sow fear, and by extension, sell cameras and build an accessible surveillance network?

They Don’t Buy Surveillance Equipment, They Use Yours 

Throughout the country, police have been increasingly relying on private surveillance measures to do the spying they legally or economically cannot do themselves. This includes Ring surveillance doorbells people put on their front doors, license plate readers homeowners’ associations mount at the entrances to their communities, and full camera networks used by business improvement districts. No matter who controls surveillance equipment, police will ask to use it. 

Thus, any movement toward scrutinizing how police fund surveillance must also include scrutiny of our own decisions as private consumers. The choice of individuals to install their own invasive technology ultimately enables police abuse of the technology. It also allows police to circumvent measures of transparency and accountability that apply to government-owned surveillance technology.

Conclusion:

Community Control of Police Surveillance (CCOPS) measures around the country are starting to bring public awareness and transparency to the purchase and use of surveillance tech. But there are still too few of these laws ensuring democratic control over acquisition and application of police technology.

With police departments increasingly spending more and more money for access to face recognition, video surveillance, automated license plate readers, and dozens of other specific pieces of surveillance tech, it’s time to scrutinize the many dubious and opaque funding streams that bankroll them. But oversight alone will likely never be enough, because funding is in the billions and comes from various hard-to-trace sources, new technologies are always on the rise, and their uses mostly go unregulated and undisclosed.

So we must push for huge cuts in spending on police surveillance technology across the nation. This is a necessary step to protect privacy, freedom, and racial justice. 

Matthew Guariglia

The Time Has Come to End the PACER Paywall

2 months ago

In a nation ruled by law, access to public court records is essential to democratic accountability. Thanks to the Internet and other technological innovations, that access should be broader and easier than ever. The PACER (Public Access to Court Electronic Records) system could and should play a crucial role in fulfilling that need. Instead, it operates as an economic barrier, imposing fees for searching, viewing, and downloading federal court records, making it expensive for researchers, journalists, and the public to monitor and report on court activity. It's past time for the wall to come down.

The bipartisan Open Courts Act of 2020 aims to do just that. The bill would provide public access to federal court records and improve the federal courts’ online records system, eliminating PACER’s paywall in the process. This week, EFF and a coalition of civil liberties organizations, transparency groups, retired judges, and law libraries joined together to push Congress and the U.S. federal courts to eliminate the paywall and expand access to these vital documents. In a letter (pdf) addressed to the Director of the Administrative Office of the United States Courts, which manages PACER, the coalition calls on the AO not to oppose this important legislation.

Passage of the bill would be a huge victory for transparency. Anyone interested in accessing the trove of federal public court records that currently reside in PACER (lawyers, the press, and the public alike) must pay a fee of 10 cents per page for search results and 10 cents per page for documents retrieved. These costs add up quickly, and have been criticized by open-government activists such as the late Aaron Swartz and Public.Resource.Org. As noted in the letter, which was signed by Public Knowledge, Open the Government, R Street, and the Project on Government Oversight, among others,

"It is unjust to charge for court documents and place undue burdens on students, researchers, pro se litigants, and interested members of the public – not to mention the journalists who cover the courts. The fairest alternative is not moving from 10 cents a page to eight cents; it’s no price at all."

The Open Courts Act of 2020 offers a clear path for finally making PACER access free to all, and should be supported by anyone who wants the public to engage more readily with their federal courts system. EFF urges Congress and the courts to support this important legislation and remove the barriers that make PACER a gatekeeper to information, rather than the open path to public records that it ought to be. 

Jason Kelley