Digital Rights Updates with EFFector 35.7

2 days 18 hours ago

Catch up on the latest news in the digital rights movement with our EFFector newsletter! Our latest issue is out now, and it is jam-packed with updates, from decisions made by the Supreme Court on Section 230 and fair use cases, to EFF's investigation into California police agencies sharing drivers' location data with law enforcement agencies in other states. You can read the full newsletter here, or listen to the audio version below!

Listen on YouTube

EFFector 35.7 - EFF at RightsCon 2023

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Our Right To Challenge Junk Patents Is Under Threat 

2 days 19 hours ago

The U.S. Patent Office has proposed new rules about who can challenge wrongly granted patents. If the rules become official, they will offer new protections to patent trolls. Challenging patents will become far more onerous, and impossible for some. The new rules could stop organizations like EFF, which used this process to fight the Personal Audio “podcasting patent,” from filing patent challenges altogether. 

We need EFF supporters to speak out against this proposal, which is a gift for patent trolls. We’re asking supporters who care about a fair patent system to file comments using the federal government’s public comment system. Your comments don’t need to be long, or use fancy legalese. The important thing is that everyday users and creators of technology get a chance to state their opposition to these rules. Below the button you can see a simple proposed comment you can cut-and-paste; you can also add to it, or write your own. 

If you have a personal experience with patent troll attacks, please mention it. Comments are not anonymous and you should use your real name. 

TAKE ACTION

Tell USPTO To Work For the Public, Not Patent Trolls

Sample comment: 

I am opposed to the USPTO’s proposed rules changes for inter partes review (IPR) and other patent challenges. These proposed rules should be withdrawn, and the IPR process should remain open to all. The USPTO should follow the rules Congress set out, and consider all patent challenges, including IPR petitions, on their merits. 

IPR Is The Best Process For Limiting Bad Patents

The Patent Trial and Appeal Board, or PTAB, is one of the only places in the nation where patent trolls can be held to account for the outrageous and harmful claims they make in their patents. Congress created the “inter partes review” (IPR) process, which is overseen by specialized PTAB judges, more than a decade ago. “Inter partes” simply means “between the parties,” and the IPR process allows members of the public to challenge patents that never should have been granted in the first place. 

The IPR process is one of two main ways to challenge patents, along with challenging them in district court. The big difference is that the IPR process, while not simple or cheap, is much faster and cheaper than going to trial in federal court. Invalidating a patent in court can cost millions of dollars, while even a complicated IPR process costs a fraction of that. 

Through IPR, thousands of patents have been thrown out. The patent challengers who have kicked out some of these wrongly-granted monopolies have protected not just themselves, but countless other hobbyists, software developers, small businesses, and nonprofits, who could no longer be threatened with some of the worst patents to slip through the cracks. 

  • A patent troll called WordLogic tried to shake down Wikipedia for $30,000. Once PTAB gave its initial ruling—that WordLogic’s patent was likely invalid—WordLogic got smart and settled the case. The lawsuits WordLogic was pushing against Wikipedia, and many other organizations, were dropped. 
  • A patent troll called SportBrain Holdings sued more than 80 companies on a patent they said covered getting user data, then sharing it over a network and providing feedback. The patent did not hold up to serious analysis. When a panel of PTAB judges looked at it, they canceled all claims. SportBrain was challenged by Unified Patents, a membership-based for-profit company that will be explicitly banned if the USPTO enacts these troll-friendly rules. 
  • Shipping and Transit LLC (formerly Arrivalstar) filed for bankruptcy in 2018 after more than a decade of litigation and 500 patent lawsuits. Shipping and Transit sued a vast array of retailers and shippers, claiming its patents covered almost any type of delivery notification. In its court filings, it valued its own portfolio of 34 U.S. patents at $1. IPR filings against Shipping and Transit, together with court fee orders, were a critical part of ending this onslaught against small business. 

The IPR process hasn’t eliminated patent trolling. But it’s been so effective that patent trolls and their pro-patent protectors absolutely hate the process. That’s why they have pushed so hard for these proposed rules, and are celebrating their arrival. 

The Proposed Rules Deliberately Sabotage The IPR System

USPTO Director Kathi Vidal has already tried to walk back responsibility for these rules. She told Congress last month that the rules were about “giving stakeholders a chance to shape the rules.” But the only “stakeholders” who seem to have had a hand here are patent trolls and large patent-holders.

Many patent trolls would be exempt from IPRs altogether. The USPTO would prohibit anyone from challenging the patents of “small entities” and “under-resourced inventors.” But it’s trivially easy for even the most litigious patent trolls to portray themselves as “small inventors.” It happens all the time, and the USPTO rules buy into this sham. Many “inventors” are patent attorneys who have learned to game the system; they haven’t invented anything other than patents. Patent trolls that have sued hundreds of small businesses, and even public transportation systems, including Shipping and Transit LLC and various Leigh Rothschild entities, have claimed to be “inventor owned” businesses. 

If these rules were in force, it’s not clear that EFF would have been able to protect the podcasting community by fighting, and ultimately winning, a patent challenge against Personal Audio LLC. Personal Audio claimed to be an inventor-owned company that was ready to charge patent royalties against podcasters large and small. EFF crowd-funded a patent challenge and took out the Personal Audio patent after a 5-year legal battle (that included a full IPR process and multiple appeals). 

TAKE ACTION

We have a right to fight back against patent trolling

The Idea That People Challenging Patents Are Abusing The System Is Absurd

The rules create an upside-down world in which people who work to challenge patents are treated as the abusers of the system, rather than the patent trolls they’re up against. For instance, the rules would punish groups that file “serial petitions” or “parallel petitions” by simply denying them access to the PTAB. The proposal also creates new rules denying petitions “to ensure that certain for-profit entities do not use the [] processes in ways that do not advance the mission and vision of the Office to promote innovation.” 

But it’s the patent office’s own wrongly granted patents—each one a 20-year government-granted monopoly—that often inhibit innovation. USPTO patents have enabled business models like Lodsys, the company that sent out hundreds of threats to small app developers demanding royalties for using basic, off-the-shelf in-app payment systems. The office has done nothing to rein in patent trolls; but now that there’s a system that can occasionally hold them to account, it wants to hobble that system.

This is wrong. It’s in the public interest to challenge patents and test which ones are wrongly granted. All properly timed and filed patent challenges should be heard on the merits, whether they are filed by for-profits, non-profits, large entities or individuals. That’s what Congress envisioned when it created the patent challenge process. 

The Rules Are A Direct Attempt By USPTO To Overturn Congress’s Power 

The IPR process was created by Congress in 2012 to resolve certain patent disputes more quickly and efficiently than courts could. When evidence is presented that there is “prior art,” or previously existing technology, that should have prevented the patent from issuing, the IPR process allows for a relatively quick quasi-judicial process that can result in patent claims being revoked. 

The IPR process was created after a long debate by elected representatives. If Congress wants to change the system, they’re able to do so. But USPTO officials must not be allowed to cripple patent challenges from the inside. 

As USPTO’s own statistics point out, it’s actually a tiny sliver of “live” patents that are even challenged, much less invalidated. In the last fiscal year, the USPTO partly invalidated 350 patents. Compare that to the 300,000 patents the USPTO grants every year. 

The U.S. patent system remains wildly imbalanced—in favor of patent owners, not patent challengers. These proposed rules show that USPTO has it backwards. Please join us and speak out through the public comment process. No one should tolerate a patent troll takeover at PTAB.  

TAKE ACTION

Tell USPTO To Work For the Public, Not Patent Trolls

Joe Mullin

The Foreign Intelligence Surveillance Court Has Made a Mockery of the Constitutional Right to Privacy

3 days 12 hours ago

The latest evidence that Section 702 of the Foreign Intelligence Surveillance Act (FISA) must be ended or drastically reformed came last month in the form of a newly unsealed order from the Foreign Intelligence Surveillance Court (FISC) detailing massive violations of Americans’ privacy by the FBI.

The FISC order is replete with problems. It describes the government’s repeated, widespread violations—over a seven-year period—of procedures for searching its databases of internet communications involving Americans, all without a warrant. These searches included especially sensitive people and groups, including donors to a political campaign. And it shows the FISC giving the FBI all-but-endless do-overs, each time proclaiming that the executive branch has made “promising” steps toward compliance with procedures that are largely left up to government attorneys to design.

Perhaps most shocking, however, is the court’s analysis of how the Fourth Amendment should apply to the FBI’s “backdoor searches” of Americans’ communications. These warrantless searches occur when the FBI queries Section 702 data, ostensibly collected for foreign intelligence purposes, for communications involving a person on U.S. soil.

Although the court acknowledged that the volume of Americans’ private communications collected using Section 702 is “substantial in the aggregate,” and that the FBI routinely searches these communications without a warrant for routine matters, it held that the government’s oft-broken safeguards are consistent with the Fourth Amendment and “adequately guard against error and abuse.” When EFF writes that Section 702 and similar programs have created a “broad national security exception to the Constitution,” this is what we mean.

As long as Section 702 has been debated, its defenders have assured the public that the FISC is just like any other federal court: independent from the executive branch under Article III of the Constitution and charged with protecting individual rights. But as this latest order shows, the FISC’s performance of this duty bears no resemblance to how other Article III courts have treated the same questions, even when those courts have been hamstrung by unwarranted secrecy around the facts of national security surveillance.

Case in point is the U.S. Court of Appeals for the Second Circuit’s 2019 opinion in United States v. Hasbajrami. Hasbajrami was a criminal case in which government agents read a U.S. resident’s emails collected using Section 702 and charged him with supporting a terrorist organization. As with every other criminal prosecution involving FISA, the defense did not have access to evidence about how the government actually used 702 to surveil Hasbajrami. Yet even with this unfairly narrow review, on appeal the Second Circuit pressed the government on important constitutional questions, including backdoor searches. It even ordered the government to submit additional briefing on why backdoor searches did not violate the Fourth Amendment.

In its Hasbajrami opinion, the Second Circuit wrote that regardless of the procedures the FBI put in place for backdoor searches, these searches must be treated as “separate Fourth Amendment events.” In other words, each and every time the government runs one of these searches, it must ensure it is not unreasonably violating Americans’ privacy. The court’s reasons for reaching this conclusion are noteworthy:

(1) Under Supreme Court precedent, just because the government comes into possession of a person’s private communications—as the NSA does routinely with Section 702—the government is not necessarily allowed to read them without getting a warrant.

(2) The “vast technological capabilities” of Section 702 mean that the government can simply throw Americans’ communications into databases and search them at a later date for a purpose unrelated to the original “incidental collection.”

(3) Even though Section 702 prohibits directly targeting US residents, “the NSA may have collected all sorts of information about an individual, the sum of which may resemble what the NSA would have gathered if it had directly targeted that individual in the first place.”

(4) The agency running the searches matters. The example the court gave that would raise Fourth Amendment concerns? “FBI queries directed to a larger archive of millions of communications collected and stored by the NSA for foreign intelligence purposes, on the chance that something in those files might contain incriminating information about a person of interest to domestic law enforcement.” That's exactly the issue that was before the FISC in this latest opinion.

Clearly, the Second Circuit opinion raises a number of serious questions about whether a single backdoor search is constitutional. That concern is compounded by the hundreds of thousands of such searches the government runs under Section 702, which in aggregate represent a massive violation of Americans’ privacy.

Even if the FISC did not wrestle with these questions adequately in the past—and it didn’t—you would expect the court to take notice of the Hasbajrami opinion and offer its own analysis. You’d be wrong. The newly unsealed opinion is apparently the first time the FISC has considered Hasbajrami, and in just over a page, the FISC wrote that it “respectfully” disagreed that each search should be viewed as a separate Fourth Amendment event. Instead, it “adhered” to its previous conclusion that the government’s own procedures safeguard privacy “as a whole.” So the scope of the collection and searching was irrelevant, as was the government’s consistent inability to even follow its procedures. But as we’ve said before, allowing the government to claim that protocols are sufficient to protect our constitutional rights turns the Fourth Amendment on its head.

The FISC’s treatment of backdoor searches makes a mockery of the right to privacy. In Hasbajrami, the court did not have a record of backdoor searches run against Mr. Hasbajrami, meaning that it could not say definitively what the Fourth Amendment required. In this FISC opinion, however, the court was presented with an extensive record of backdoor searches—as well as the ability to supplement the factual record to its satisfaction—and the court nevertheless refused to confront what was staring it in the face.

The FISC’s refusal to enforce the Fourth Amendment is yet another reason the surveillance enabled by Section 702 needs to be ended or drastically reformed. A starting point is a requirement in the law itself that the government obtain a warrant before searching its databases for Americans’ communications, which would address the Second Circuit’s concerns in Hasbajrami. Our privacy should not depend on the FBI’s self-policing and the secret court’s contorted interpretation of the Constitution.

Related Cases: United States v. Hasbajrami
Andrew Crocker

The Right to Repair Is Law in Minnesota. California Should Be Next

4 days 13 hours ago

Last week, Minnesota Governor Tim Walz signed an omnibus bill that includes a comprehensive right to repair law requiring manufacturers to make spare parts, repair information, and tools available to consumers and repair shops. This law builds on smaller, but still significant, wins in Colorado, Massachusetts, and New York. California could be next: the "Right to Repair" Act (S.B. 244) just passed the California Senate and is on its way to the State Assembly.

The right to repair movement has a lot of momentum. In 2022, Colorado passed a law that gave wheelchair users access to the resources they need to repair their own chairs, and the state followed that up with another targeted bill giving farmers and ranchers the right to repair agricultural equipment. Massachusetts has passed several measures around car repairs. Last year we also got the first broad consumer right to repair legislation in New York, though that bill took a big step backward at the last moment.

TAKE ACTION

Support the "Right to Repair" Act

After a disappointing loss in California last year, we are happy to see California’s legislators revisit the issues with the new "Right to Repair" Act. The bill requires manufacturers of electronic and appliance products to provide repair manuals, replacement parts, and tools. It includes all of the same types of products covered by Minnesota’s legislation, and explicitly adds products sold to schools, businesses, and local governments outside of retail sale. This is especially important in schools, where Chromebooks have short lifespans. Combined with the Song-Beverly Act, S.B. 244 sets a specific timeline for how long manufacturers must provide access to parts, tools, and documentation for repair: at least three years for products wholesale priced between $50 and $99.99, and at least seven years for products over $100. In contrast, Minnesota's bill specifies that manufacturers don't have to sell parts after the product is off the market.

S.B. 244 is not perfect. Like Minnesota's new law, it doesn’t cover cars, farm equipment, medical devices, industrial equipment, or video game consoles. But thankfully S.B. 244 doesn't include the confusing language around cybersecurity that the Minnesota law has. Overall, it raises the bar. 

Minnesota's right to repair law is the broadest yet, and will likely benefit people around the nation, especially when it comes to repair manual availability. If California passes S.B. 244 those benefits will broaden, while still leaving room for improvements in the future.

The "Right to Repair" Act is a great step forward, but we must keep fighting for the right to repair ALL of your devices, including cars, medical devices, farm equipment, and everything in between. 

If you're a Californian, you can help! Please take action to support the "Right to Repair" Act today.

TAKE ACTION

Support the "Right to Repair" Act

Thorin Klosowski

Federal Judge Makes History in Holding That Border Searches of Cell Phones Require a Warrant

5 days 14 hours ago

With United States v. Smith (S.D.N.Y. May 11, 2023), a district court judge in New York made history by being the first court to rule that a warrant is required for a cell phone search at the border, “absent exigent circumstances” (although other district courts have signaled a willingness to do so).

EFF is thrilled about this decision, given that we have been advocating for a warrant for border searches of electronic devices in the courts and Congress for nearly a decade. If the case is appealed to the Second Circuit, we urge the appellate court to affirm this landmark decision.

The Border Search Exception as Applied to Physical Items Has a Long History

U.S. Customs & Border Protection (CBP) asserts broad authority to conduct warrantless, and often suspicionless, device searches at the border, which includes ports of entry at the land borders, international airports, and seaports.

For a century, the Supreme Court has recognized a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless searches of luggage and other items crossing the border.

Warrantless device searches at the border, and the significant invasion of privacy they represent, are only increasing. In Fiscal Year 2022, CBP conducted an all-time high of 45,499 device searches.

The Supreme Court has not yet considered the application of the border search exception to smartphones, laptops, and other electronic devices that contain the equivalent of millions of pages of information detailing the most intimate details of our lives—even though we asked it to back in 2021.

Circuit Courts Have Narrowed the Border Search Exception’s Application to Digital Data

Federal appellate courts, however, have considered this question and circumscribed CBP’s authority.

The Ninth Circuit in United States v. Cano (2019) held that a warrant is required for a device search at the border that seeks data other than “digital contraband” such as child pornography. Similarly, the Fourth Circuit in United States v. Aigbekaen (2019) held that a warrant is required for a forensic device search at the border in support of a domestic criminal investigation.

These courts and the Smith court were informed by Riley v. California (2014). In that watershed case, the Supreme Court held that the police must get a warrant to search an arrestee’s cell phone.

The Smith Court Rightly Applied the Riley Balancing Test

In our advocacy, we have consistently argued that Riley’s analytical framework should inform whether the border search exception applies to cell phones and other electronic devices. This is precisely what the Smith court did: “In holding that warrants are required for cell phone searches at the border, the Court believes it is applying in straightforward fashion the logic and analysis of Riley to the border context.”

In Riley, the Supreme Court applied a balancing test, weighing the government’s interests in warrantless and suspicionless access to cell phone data following an arrest, against an arrestee’s privacy interests in the depth and breadth of personal information stored on modern cell phones.

In analyzing the government’s interests, the Riley Court considered the traditional reasons for authorizing warrantless searches of an arrestee’s person: to protect officers from an arrestee who might use a weapon against them, and to prevent the destruction of evidence.

The Riley Court found only a weak nexus between digital data and these traditional reasons for warrantless searches of arrestees. The Court reasoned that “data on the phone can endanger no one,” and the probability is small that associates of the arrestee will remotely delete digital data.

The Riley Court also detailed how modern cell phones can in fact reveal the “sum of an individual’s private life,” and thus individuals have significant and unprecedented privacy interests in their cell phone data.

On balance, the Riley Court held that the traditional search-incident-to-arrest exception to the warrant requirement does not apply to cell phones.

The Smith court properly applied the Riley balancing test in the border context, noting that travelers’ privacy interests in their digital data are also significant:

Just as in Riley, the cell phone likely contains huge quantities of highly sensitive information—including copies of that person’s past communications, records of their physical movements, potential transaction histories, Internet browsing histories, medical details, and more … No traveler would reasonably expect to forfeit privacy interests in all this simply by carrying a cell phone when returning home from an international trip.

In analyzing the government’s interests in gaining warrantless access to cell phone data at the border, the Smith court considered the traditional justifications for the border search exception: in the words of the judge, “preventing unwanted persons or items from entering the country.” In particular, the government has a strong interest in conducting warrantless searches of luggage and other containers to identify goods subject to customs duty (import tax) and items considered contraband or that would otherwise be harmful if brought into the country such as drugs or weapons.

Considering these traditional rationales for the border search exception in the context of modern cell phones, the Smith court concluded that the government’s “interest in searching the digital data ‘contained’ on a particular physical device located at the border is relatively weak.”

The court focused on the internet and cloud storage, stating: “Stopping the cell phone from entering the country would not … mean stopping the data contained on it from entering the country” because any data that can be found on a cell phone—even digital contraband—“very likely does exist not just on the phone device itself, but also on faraway computer servers potentially located within the country.” This is different from physical items, which, if searched without a warrant, may be efficiently interdicted and thereby actually prevented from entering the country.

The Smith court further explained:

To be sure, that data may contain information relevant to the Government’s determination as to whether a person should be allowed entry, but the Government has little heightened interest in blocking entry of the information itself, which is the historical basis for the border search exception.

Thus, the Smith court concluded:

Because the government’s interests in a warrantless search of a cell phone’s data are thus much weaker than its interests in warrantless searches of physical items, and a traveler’s privacy interests in her cell phone’s data are much stronger than her privacy interests in her baggage, the Court concludes that the same balancing test that yields the border search exception cannot support its extension to warrantless cell phone searches at the border.

EFF’s Work Is Making a Difference

The Smith court’s application of Riley’s balancing test is nearly identical to the arguments we’ve made time and time again.

The Smith court also cited Cano, in which the Ninth Circuit engaged extensively with EFF’s amicus brief even though it didn’t go as far as requiring a warrant in all cases. The Smith court acknowledged that no federal appellate court “has gone quite this far (although the Ninth Circuit has come close).”

We’re pleased that our arguments are moving through the federal judiciary and finally being embraced. We hope that the Second Circuit affirms this decision and that other courts—including the Supreme Court—are courageous enough to follow suit and protect personal privacy.

DONATE TO EFF

Sophia Cope

EU’s Proposed Cyber Resilience Act Raises Concerns for Open Source and Cybersecurity

5 days 17 hours ago

The EU is in the middle of the amendments process for its proposed Cyber Resilience Act (CRA), a law intended to bolster Europe’s defenses against cyber-attacks and improve product security. This law targets a broad swath of products brought to market intended for European consumers, including Internet of Things (IoT) devices, desktop computers, and smartphones. It places requirements on device manufacturers and distributors with regard to vulnerability disclosure, and introduces new liability regulations for cybersecurity incidents.

EFF welcomes the intention of the legislation, but the proposed law will penalize open source developers who receive any amount of monetary compensation for their work. It will also require manufacturers to report actively exploited, unpatched vulnerabilities to regulators. This requirement risks exposing the knowledge and exploitation of those vulnerabilities to a larger audience, furthering the harms this legislation is intended to mitigate.

Threats to Open Source Software

Open source software serves as the backbone of the modern internet. Contributions from developers working on open source projects such as Linux and Apache, to name just two, are freely used and incorporated into products distributed to billions of people worldwide. This is only possible through revenue streams which reward developers for their work, including individual donations, foundation grants, and sponsorships. This ecosystem of development and funding is an integral part of the functioning and securing of today’s software-driven world.

The CRA imposes liability for commercial activity which brings vulnerable products to market. Though recital 10 of the proposed law exempts not-for-profit open source contributors from what is considered “commercial activity,” and thus from liability, the exemption defines commercial activity much too broadly. Any open source developer soliciting donations or charging for support services for their software is not exempt, and is thus liable for damages if their code inadvertently contains a vulnerability which is then incorporated into a product, even if they themselves did not produce that product. Typically, open source contributors and developers write software and make it available as an act of good will and gratitude to others who have done the same. Receiving even a tip for that work would expose such developers to risk. Smaller organizations which produce open source code for the public benefit may have their entire operation legally challenged simply for lacking funds to cover their risks. This will push developers and organizations to abandon these projects altogether, damaging open source as a whole.

We join others in raising this concern and call on the CRA to further exempt individuals providing open source software from liability, including when they are compensated for their work.

Vulnerability Disclosure Requirements Pose a Cybersecurity Threat

Article 11 of the proposed text requires manufacturers to disclose actively exploited vulnerabilities to the European Union Agency for Cybersecurity (ENISA) within 24 hours. ENISA would then be required to forward fine details of these vulnerabilities on to the Member States’ Computer Security Incident Response Teams (CSIRTs) and market surveillance authorities. Intended as a measure for accountability, this requirement incentivizes product manufacturers with a lackluster record on product security to actively pursue and mitigate vulnerabilities. However well intended, it will likely result in unintended consequences for manufacturers who already prioritize their product security. Vulnerabilities with serious security implications for consumers are often treated by these companies as well-guarded secrets until fixes are properly applied and deployed to end devices. Properly fixing these vulnerabilities can take weeks or even months.

The short time-frame will disincentivize companies from applying “deep” fixes which correct the root cause of the vulnerability in favor of “shallow” fixes which only address the symptoms. Deep fixes take time, and a 24-hour clock on reporting will push companies toward sloppy, patchwork responses. 

The second effect will be that a larger set of agencies and people will be made aware of the vulnerability quickly, which will greatly expand the risk of exposure of these vulnerabilities to those who may want to use them maliciously. Government knowledge of a range of software vulnerabilities from manufacturers could create juicy targets for hacking and espionage. Manufacturers concerned about the security outcomes for their customers will have little control or insight into the operational security of ENISA or the member-state agencies with knowledge of these vulnerabilities. This reporting requirement increases the risk that the vulnerability will be added to the offensive arsenal of government intelligence agencies. Manufacturers should not have to worry that reporting flaws in their software will result in furthering cyber-warfare capabilities at their expense.

An additional concern is that the reporting requirement does not include public disclosure. For consumers to make informed decisions about their purchases, details about security vulnerabilities should be provided along with security updates.

Given the substantial risks that this requirement poses, we call on European lawmakers to abstain from mandating inflexible deadlines for tackling security issues, and to require that detailed reports about vulnerabilities be issued to ENISA only after those vulnerabilities have been fixed. In addition, detailed public disclosure of security fixes should be required. For companies that have shown a lackluster record on product security, more stringent requirements may be imposed—but this should be the exception, not the rule.

Further Protections for Security Researchers

Good-faith security research—which can include disclosure of vulnerabilities to manufacturers—strengthens product security and instills confidence in consumers. We join our partner organization EDRi in calling for a safe harbor for researchers involved in coordinated disclosure practices. This safe harbor should not imply that other forms of disclosure are harmful or malicious. An EU-wide blanket safe harbor will give assurance to security researchers that they will not come under legal threat by doing the right thing.

Start With a Good First Step

The Cyber Resilience Act is intended to strengthen cybersecurity for all Europeans. However, without adopting changes to the proposed text, we fear aspects of the act will have the opposite effect. We call on the European Commission to take the concerns of the open source community and security professionals seriously and amend the proposal to address these serious concerns.

Bill Budington

To Save the News, We Must Ban Surveillance Advertising

5 days 23 hours ago

This is part three of an ongoing, five-part series. Part one, the introduction, is here. Part two, about breaking up ad-tech companies, is here.

The ad-tech industry is incredibly profitable, raking in hundreds of billions of dollars every year by spying on us. These companies have tendrils that reach into our apps, our televisions, and our cars, as well as most websites. Their hunger for our data is insatiable. Worse still, a whole secondary industry of “brokers” has cropped up that offers to buy our purchase records, our location data, even our medical and court records. This data is continuously ingested by the ad-tech industry to ensure that the nonconsensual dossiers of private, sensitive, potentially compromising data that these companies compile on us are as up-to-date as possible. 

Commercial surveillance is a three-step process, sketched in code after this list:

  1. Track: A person uses technology, and that technology quietly collects information about who they are and what they do. Most critically, trackers gather online behavioral information, like app interactions and browsing history. This information is shared with ad tech companies and data brokers.
  2. Profile: Ad tech companies and data brokers that receive this information try to link it to what they already know about the user in question. These observers draw inferences about their target: what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, attending, or voting for.
  3. Target: Ad tech companies use the profiles they’ve assembled, or obtained from data brokers, to target advertisements. Through websites, apps, TVs, and social media, advertisers use data to show tailored messages to particular people, types of people, or groups.
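
To make the pipeline concrete, here is a deliberately minimal Python sketch of those three steps. Every identifier, field, and campaign rule in it is hypothetical; no real broker or ad-tech platform exposes an API like this, and real systems operate on billions of events rather than a handful.

```python
# Hypothetical sketch of the track -> profile -> target pipeline.
# All names and fields are illustrative, not any real system's schema.

# 1. Track: an app or site quietly logs behavioral events,
#    keyed to a device identifier.
events = [
    {"device_id": "abc-123", "event": "viewed", "item": "running shoes"},
    {"device_id": "abc-123", "event": "searched", "query": "marathon training"},
]

# 2. Profile: a broker links those events to what it already holds
#    and draws inferences about the person behind the identifier.
profile = {
    "device_id": "abc-123",
    "inferred": {"age_range": "25-34", "interests": ["fitness", "health"]},
}

# 3. Target: an ad platform matches campaigns against the profile.
campaigns = {"sports-gear": {"interests": ["fitness"]}}

def pick_ad(profile, campaigns):
    """Return the first campaign whose interest rules overlap the profile."""
    wanted = set(profile["inferred"]["interests"])
    for name, rules in campaigns.items():
        if wanted & set(rules["interests"]):
            return name
    return None

print(pick_ad(profile, campaigns))  # -> sports-gear
```

Notice how little consent appears in the sketch: the person behind "abc-123" never opts into any of the three steps.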

This data-gathering and processing is the source of innumerable societal harms: it fuels employment discrimination, housing discrimination, and is a pipeline for predatory scams. The data also finds its way into others’ hands, including the military, law enforcement, and hostile foreign powers. Insiders at large companies exploit data for their own benefit. It’s this data that lets scam artists find vulnerable targets and lets stalkers track their victims. 

Our entire digital environment has been warped to grease the skids for this dragnet surveillance. Our mobile devices assign tracking identifiers to us by default, and these unique identifiers ripple out through physical and digital spaces, tracking us to the most minute degree. 

All of this is done in the name of supporting culture and news. The behavioral advertising industry claims that it can deliver more value to everyone through this surveillance: advertisers get to target exactly who they want to reach; publishers get paid top dollar for setting up exactly the right user with exactly the right ad; and the user wins because they are only ever shown highly relevant ads that are tailored to their interests.

Of course, anyone who’s ever used the internet knows that this is hogwash. Advertisers know that they are being charged billions of dollars for ads that are never delivered. Publishers know that billions of dollars collected from advertisers for ads that show up alongside their content never reach them.

And as to the claim that users “like ads, so long as they are relevant,” the evidence is very strong that this isn’t true and never was. Ad-blocking is the most successful consumer boycott in human history. When Apple gave iPhone users a one-click opt-out to block all surveillance ads, 96 percent of users clicked the button (presumably, the other four percent were confused, or they work for ad-tech companies).

Surveillance advertising serves no one except creepy ad-tech firms; for users, publishers and advertisers, surveillance ads are a bad deal.

It’s time to kill them.

Non-Creepy Ads

Getting rid of surveillance ads doesn’t mean getting rid of ads altogether. Despite the rhetoric that “if you’re not paying for the product, you’re the product,” there’s no reason to believe that the mere act of paying for products will convince the companies that supply that product to treat you with respect.

Take John Deere tractors: farmers pay hundreds of thousands of dollars for large, crucial pieces of farm equipment, only to have their ability to repair them (or even complain about them) weaponized and monetized against them.

You can’t bribe a company into treating you with respect; companies respect you to the extent that they fear losing your business, or being regulated. Rather than buying our online services and hoping that this so impresses tech executives that they treat us with dignity, we should ban surveillance ads.

If surveillance ads are banned, advertisers will have to find new ways to let the public know about their products and services. They’ll have to return to the techniques that advertisers used for centuries before the very brief period in which surveillance advertising came to dominate: they’ll have to return to contextual ads.

A contextual ad is targeted based on the context in which it appears: what article it appears alongside, or which publication. Rather than following users around to target them with ads, contextual advertisers seek out content that is relevant to their messages, and place ads alongside that content.

Historically, this was an inefficient process, hamstrung by the need to identify relevant content before it was printed or aired. But the same realtime bidding systems used to place behavioral ads can be used to place contextual ads, too. 

The difference is this: rather than a publisher asking a surveillance company like Google or Meta to auction off a reader on its behalf, the publisher would auction off the content and context of its own materials.

That is, rather than the publisher saying “What am I bid for the attention of this 22 year old, male reader who lives in Portland, Oregon, is in recovery for opioid addiction, and has recently searched for information about gonorrhea symptoms?” the publisher would say, “What am I bid for the attention of a reader whose IP address is located in Portland, Oregon, who is using Safari on a recent iPhone, and who is reading an article about Taylor Swift?”
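
To make that contrast concrete, here is a toy sketch of the two kinds of bid request, written as Python dictionaries and loosely inspired by the JSON objects realtime bidding systems exchange. The field names are illustrative assumptions, not any real exchange's schema.

```python
# A behavioral bid request auctions off the *reader*: it travels with
# a dossier about who they are. (Illustrative fields only.)
behavioral_request = {
    "user_id": "abc-123",  # persistent tracking identifier
    "user": {
        "age": 22,
        "gender": "male",
        "city": "Portland, OR",
        "segments": ["opioid recovery", "searched: gonorrhea symptoms"],
    },
}

# A contextual bid request auctions off the *content and context*:
# what the page is about, plus coarse, non-identifying device facts.
contextual_request = {
    "site": "example-news.com",      # hypothetical publisher
    "page_topic": "Taylor Swift",
    "geo": "Portland, OR",           # coarse, derived from IP
    "device": "recent iPhone, Safari",
}
```

An advertiser bidding on the second request still knows enough to answer the only question that matters, "do I want this slot?", without anyone compiling a dossier on the reader.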

There are some obvious benefits to this. First things first: it doesn’t require surveillance. That’s good for readers, and for society.

But it’s also good for the publisher. No publisher will ever know as much about readers’ behavior as an ad-tech company; but no ad-tech company will ever know as much about a publisher’s content as the publisher itself. That means that it will be much, much harder for ad-tech companies to lay claim to a large slice of the publisher’s revenue, and it will be much, much easier for publishers to switch ad-tech vendors if anyone tries it.

That means that publishers will get a larger slice of the context ads pie than they do when the pie is filled with surveillance ads. 

But what about the size of the pie? Will advertisers pay as much to reach readers who are targeted by context as they do when the targeting is behavioral? 

Not quite. The best research-driven indications we have so far are that advertisers will generally pay about five percent less for context-based targeting than they do for behavioral targeting.

But that doesn’t mean that publishers will get paid less: even if advertisers insist on a five percent discount to target based on context, a much greater share of the ad spending will reach the publishers. The largest ad-tech platforms currently bargain for more than half of that spending, a figure they’re only able to attain because their monopoly power over behavioral data gives them a stronger negotiating position over the publishers.
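
A back-of-the-envelope calculation shows why. The five percent discount and the "more than half" ad-tech share come from the paragraphs above; the 15 percent contextual fee is a purely hypothetical assumption about what a more competitive ad-tech market might charge.

```python
# Publisher revenue per $1.00 of advertiser spend, under both models.
# The 51% behavioral fee tracks "more than half" above; the 15%
# contextual fee is a hypothetical assumption, not a measured figure.

spend_behavioral = 1.00
adtech_fee_behavioral = 0.51
publisher_behavioral = spend_behavioral * (1 - adtech_fee_behavioral)

spend_contextual = 0.95        # advertisers pay ~5% less
adtech_fee_contextual = 0.15   # hypothetical competitive-market fee
publisher_contextual = spend_contextual * (1 - adtech_fee_contextual)

print(f"behavioral: ${publisher_behavioral:.2f}")  # behavioral: $0.49
print(f"contextual: ${publisher_contextual:.2f}")  # contextual: $0.81
```

Under those assumptions, the publisher's slice grows even though the pie shrinks.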

But more importantly: if ad tracking were limited to users who truly consented to it, almost no one would see surveillance ads, because users do not consent to tracking.

This was amply demonstrated in 2021, when Apple altered iOS, the operating system that powers iPhones and iPads, to make it easy to opt out of tracking. 96 percent of Apple users opted out, costing Facebook over $10 billion in lost revenue in the first year.

Unfortunately, Apple continues to track its users in order to target ads at them, even if those users opt out. But if the US were to finally pass a long-overdue federal privacy law with a private right of action and require real consent before tracking, the revenue from surveillance ads would fall to zero, because almost no one is willing to be tracked.

This is borne out by the EU experience. The European Union’s General Data Protection Regulation (GDPR) bans surveillance for the purpose of ad-targeting without consent. While the US-based ad-tech giants have refused to comply with this rule, they are finally being forced to do so.

Not everyone has flouted the GDPR. The Dutch public broadcaster NPO only served targeted ads to users who consented to them, which meant it served virtually no targeted ads. Eventually, NPO switched to context ads and saw a massive increase in ad revenues, in part because the ads worked about as well as surveillance ads, but mostly because almost no one saw its surveillance ads, while everyone saw its context ads.

Killing surveillance ads will make surveillance companies worse off. But everyone else, from readers and journalists to publishers and even advertisers, will be much better off.

Cory Doctorow

Podcast Episode: Who Inserted the Creepy?

6 days 3 hours ago

Writers sit watching a stranger’s search engine terms being typed in real time, a voyeuristic peek into that person’s most private thoughts. A woman lands a dream job at a powerful tech company but uncovers an agenda affecting the lives of all of humanity. An app developer keeps pitching the craziest, most harmful ideas she can imagine but the tech mega-monopoly she works for keeps adopting them, to worldwide delight.  

[Embedded audio player: this episode is served from simplecast.com.]

You can also find this episode on the Internet Archive.

The first instance of deep online creepiness actually happened to Dave Eggers almost 30 years ago. The latter two are plots of two of Eggers’ many bestselling novels—“The Circle” and “The Every,” respectively—inspired by the author’s continuing rumination on how much is too much on the internet. He believes we should live intentionally, using technology when it makes sense but otherwise logging off and living an analog, grounded life. 

Eggers — whose newest novel, “The Eyes and the Impossible,” was published this month — speaks with EFF’s Cindy Cohn and Jason Kelley about why he hates Zoom so much, how and why we get sucked into digital worlds despite our own best interests, and painting the darkest version of our future so that we can steer away from it.   

In this episode, you’ll learn about: 

  • How that three-digit credit score that you keep striving to improve symbolizes a big problem with modern tech. 
  • The difficulties of distributing books without using Amazon.  
  • Why round-the-clock surveillance by schools, parents, and others can be harmful to kids. 
  • The vital importance of letting yourself be bored and unstructured sometimes. 

Dave Eggers is the bestselling author of his memoir “A Heartbreaking Work of Staggering Genius” (2000) as well as novels including “What Is the What” (2006), “A Hologram for the King” (2012), “The Circle” (2013), and “The Every” (2021); his latest novel, “The Eyes and the Impossible,” was published May 9. He founded the independent publishing company McSweeney’s as well as its namesake daily humor website, and he co-founded 826 Valencia, a nonprofit youth writing center that has inspired over 70 similar organizations worldwide. Eggers is winner of the American Book Award, the Muhammad Ali Humanitarian Award for Education, the Dayton Literary Peace Prize, and the TED Prize, and has been a finalist for the National Book Award, the Pulitzer Prize, and the National Book Critics Circle Award. He is a member of the American Academy of Arts and Letters.

Transcript

DAVE EGGERS
I worked at Salon.com when they were new. So it was one of the first online magazines. This was ‘94. There was only like six of us that worked at Salon back then, and we were having a ball. And he showed me a screen that looked like a regular search engine screen. This was in the era of, like, Alta Vista. But instead of you doing a search, you were watching other people do searches. You could watch them typing in words and then getting a result and then typing in the next word. So you were seeing somebody look up like gonorrhea, or treatment for eczema or whatever it was. And it was real people, it wasn't just some sort of demonstration. And we watched that for five, 10 minutes. And then there's just this creeping sense of just disgust, you know, like who came up with this? Who is enabling us to look through this mirror? What mind would think of this to let somebody else access this, even if it's so-called, you know, anonymized.

And at that moment, I remember it like it was yesterday, cuz that was a turning point. And it was a little bit of a foreshadowing of just how creepy things would get, and, and that's a word that I think comes up again and again, you know, is that creepy aspect of it that should have been just pure delight and access and democracy and sharing everything from articles to cat pictures, every, all of these great things about it. But who decided to insert the creepy?

I think it was some very strange minds that got ahold of some of the levers of power very early on and, and had too much of it. And some of them are now billionaires. But I think that that was that moment when I thought, ‘Uh oh.’

CINDY COHN
That’s author Dave Eggers and he’s talking about an early experience with the potential for creepiness online – a topic he has since satirized in several of his novels. His recent novel The Every describes apps that weaponize surveillance to determine the truthfulness of your friends, the quality of your parents, how happy you are based upon the products you’re buying, and eventually a “SUMNUM” that summarizes your entire worth in a three-digit number.

I’m Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy.

This is our podcast series: How to Fix the Internet.

CINDY COHN
One of the things we’re trying to do with this show is to share a bunch of visions of what it looks like when we get the internet RIGHT. We’re grounding ourselves in the four levers of change articulated by Larry Lessig in his seminal work, Code and other laws of Cyberspace: law, code, markets and norms. And I’m excited to talk to Dave today because his books about the problems in our digital world focus on the norms part – how we get sucked into digital worlds despite our own best interests and why. And once we’re there, the other levers – laws, code, markets, become less powerful and can feel almost irrelevant, or easily neutralized. There was a grain of truth there, and I wanted to challenge him to flip the script and think about what happens if we get it right.

JASON KELLEY
Dave is an award-winning author, as well as an educator and activist. And in two of his recent novels, he explores, and pretty hilariously satirizes, many of the issues that we deal with here at EFF – social media monopolies, surveillance, and what happens when the problems with the internet grow so big that tackling them becomes nearly impossible.

As Cindy described, in his book The Every, Dave takes all the worst and scariest ideas about online life to their most horrifying conclusion – and no one ever says “ENOUGH!”

So we started with a simple question: What compels him to imagine these worst-case digital scenarios and put them on the page?

DAVE EGGERS
I think that we sometimes need the darkest version of where we're going as a species - we need to be able to paint that picture so that we can avoid it. We need to see how bad it could get if we don't change course. And I think that's always been the point of dystopian fiction. And typically it's written by people that are fiercely humanistic and want the best outcome for us as a group and as a species, and are maybe horrified by where we're going and maybe have an idea of like, okay, this is how bad it could get, but we have the time, we have the power to course correct.

That's my goal. You know, I think that we have so much power. So much of this is fixable. And, um, and I think that what I was trying to do is exaggerate it to the point so that it was comical, and also mix that with sheer horror. And this is the sort of way, anytime I'm experiencing digital life, it's a mixture of horror and comedy.

CINDY COHN
The thing that I really loved about the conceit of it is that there's this, you know, activists tell themselves that if everybody just saw how horrible it could get, you know, everybody would say, whoa, we shouldn't do that and change course. And so you set your protagonist to do that, and then it, it doesn't work, right? Because no matter what crazy idea she comes up with, like, are your friends, your friends and your parents, were they good parents? And all of those kinds of things that everybody just adopts them,and it ends up that it's pretty hard for her to create that moment where we, we all recognize how bad it's gotten and we make the great leap forward to something like that. And I wondered how you, how you thought about that because it, it, you know, as an activist and as somebody who's often trying to bring those moments about, I shared her frustration.

DAVE EGGERS
I think that over the last couple decades, especially in this century, I've been surprised at how often what seems to be the worst possible ideas, that I think are not going to get out of the gate, are quickly adopted. And, and made sort of ubiquitous. I mean, Zoom is a good example of, I think crude and flawed technology that makes everyone feel hollow and exhausted. And yet everything, every meeting that everybody does now has to be done via Zoom.

I don't know if any technology has been so globally adopted so quickly, at least not in my lifetime as Zoom. And we haven't necessarily paused and said, do we like this ? Is it the right technology? Do we enjoy it? Does it make us feel better? I think that the look, everybody in their closets, everybody in their garages and their living rooms, they have to think about what their backdrop is, everybody that, that fisheye technology that makes it all look like a hostage video. It's so strange to see how universally adopted it is, when I've never met anybody that likes it. And so, I don't know, that's just one of the thousand ways that I think that we quickly embrace every specious or maybe questionable or in some time, in some cases, purely absurd new twist on digital life. It's weird. I have such infinite hope in humanity, and then sometimes I'm just kind of startled by the things that we'll embrace.

JASON KELLEY
One of the main apps in the book is called, at first it's called Authenti-Friend, and it determines whether or not the person you're talking to, who's your friend, ostensibly, is being honest and sincere and, like, whether or not they are your friend, basically. And there's a moment where the character, the main character is pitching this and she says that she also thinks it could help with depression and maybe even suicidal ideation and things like that because it can detect what kind of thoughts people are really having based on what their face looks like and what they're saying. And she's intending this to be, you know, a terrible idea, which of course is immediately adopted.

And just a few years ago, my partner works in mental health and there was a hackathon where literally this, that exact idea was presented. And all of the other judges who were all VCs were like, this is the best idea anyone's ever had. And she was like, this is a terrible idea. You can't diagnose someone with depression, just with a metric. And then no follow up. There's so many reasons why that wouldn't work. And as I read this, I was like, wow. Are you, were you at that hackathon or is this just like, like where did these terrible ideas come from? As someone who doesn't, like you said, tries to spend less time than probably people like us at EFF looking at new tech, how did you come up with these terrible ideas? I mean, this is a weird mindset to put yourself in, but I feel like it is literally the mindset of a lot of, uh, company founders, you know?

DAVE EGGERS
I have the same impulse. I'm trying to think of the worst and most, um, destructive apps and platforms and algorithms possible, but also something that would have some ostensible use or some superficial appeal. Because that's where I think that the line is always interesting to me, is that there are maybe useful applications, and I always used to call it like a fifty one forty nine.

Like 49% of people might say, this is actually really useful. I'm gonna adopt this tomorrow. And then the 51 would say, this is gonna end the species as we know it. And so much of technology that we live with is exactly that. There's so many wonderful uses, but on balance, it's actually diminished us as a species.

And I think that the question is, would people feel like that's too far? If to say, I'm gonna, while I'm, you know, FaceTiming with a friend, it's gonna tell me whether they're lying to me, whether they're being truthful about what they think of my boyfriend, whether they're being truthful about where they were last night when they said that they were staying home, but in actuality, they were out with friends without me. All of these things. I think that the temptation to use that kind of personal interpersonal surveillance technology is so overwhelming that it would be popular and universally adopted kind of overnight, even despite the fact that it's based on mistrust of your friends. It's based on spying. You know, it requires sort of a kind of surreptitious spying on your friends. It itself is deceitful. You know what's so funny is because it's a one way technology where you are spying on your friend to see if your friend is truthful to you. You know, and you're not telling your friend that you're using this technology while you're using it.

But I think that anything that gives people, and this is all of us, I'm not judging, because I think that we all are trying to eliminate uncertainty in our lives. And so anything that tells us, or seems to answer, a question that beforehand had seemed unanswerable – is this person I love and care about being truthful with me – um, we will instantly adopt that technology. And I think that that's the kind of species-level pivot that we've made in those last 10, 15 years that I think is especially unsettling.

Think about how we live with credit scores, which a lot of parts of the world do not. We have three companies that are private and opaque, and they govern most of our access to opportunity through a three-digit number that's based on whether you did or didn't pay a credit card bill when you were in college, or whether your dentist reports you, uh, to a collection agency because you're a month late on a bill. If you dip below a certain number on your credit score, you cannot rent an apartment. You can't buy a car. Sometimes you can't be employed. And these companies answer to nobody. It is an outrageous system. It's incredibly anti-democratic. It's most destructive to the most marginalized people.

And yet there's no legislation that really governs them effectively. There's been no pushback. We all accept it because somehow we think that they probably know what they're doing. And we also trust that if there's a three-digit number, there's some definitiveness to it. Like, oh, well, you know, I'm sure it's some incredibly complicated formula that, uh, really knows me. And it is outrageous. It's incredibly crude. And it is absolutely anti-human. And, you know, the countries that don't use credit scores this way are just aghast at how much it governs our lives here.

But the fact that we live with them and accept them and have for decades means that we would probably accept a much more intrusive, all-encompassing number to say, okay, let's incorporate parking tickets. Let's incorporate high school grades. Let's incorporate late fees at the library, everything together to determine your value, your validity, your worth as a human.
And I think that people would accept it because of, again, the illusory definitiveness of that number.

CINDY COHN
Yeah. We've gotten involved with the credit agencies because of their data breaches, right?

DAVE EGGERS
Mm-hmm.

CINDY COHN
They not only collect a lot of data about us, they also can't keep it very well. And, um, it's amazing in that context, the power of the idea that, well, you know, the American dream is based upon easy credit. That is so empowering to people that we need to tolerate not only the system, but the fact that the system breaks in ways that really hurt people, um, in the ways that you're talking about, but also, you know, with identity theft and other things when they can't keep hold of their data.

It's an amazingly powerful argument, but I think you're right about the underlying emotional thing. The other thing I liked about your book, in a similar way, is, you know, this idea that you can surveil yourself to safety, right? This is something we fight in a lot of the work that we do: the idea that the more we're watching each other and our streets, you know, our public behavior, but even our private behavior, the safer we'll be.

DAVE EGGERS
I'm always sort of more interested in the average human, how we are enabling a life of 24/7 surveillance and the power of monopolies, how we are giving all this power away willingly. That's always been the most interesting thing to me.

JASON KELLEY
On that note, The Jungle, as it's called in The Every, is really just an obvious stand-in for Amazon, and it's monopolies that you're kind of talking about when it comes to, um, the Circle combining with this other large shopping and distribution company that's represented as The Jungle in the book. That was something that you actually tackled when you put out the book, by not putting it on Amazon. And I think that that was, it sounds like, a very difficult thing to do in 2022, or whatever year we're at now: to put out a book that isn't on Amazon. Sounds like it was surprisingly complex. And I wonder, how did that happen? How did you manage that, and how is it working?

DAVE EGGERS
Well, you know, we have a little publishing company here in San Francisco called McSweeney's, and right now we are five people. And so it took us maybe six months to plan out and work around all of the tendrils and arms of the Amazon octopus, because they are involved in every aspect, not just their own channel, but every means of distribution.

If you want to be distributed by a given distributor, Amazon has a wraparound deal with every distributor: any book that that distributor distributes also has to be on Amazon, if that makes sense. So you can't go around them without breaking their overall contract with the distributor that you want to go with.

So our distributor, which is a small one, had a deal that any book they put out has to go through Amazon. So we actually had to go through a weird guerrilla subset of that distributor to get the book out without having it go through the same metadata system that every other book goes through. Because Amazon is the default, they distribute the data about every book in existence more so than any other system. So to go around that, because they are the keeper of all of that metadata, uh, was exceedingly hard. And you sort of have to be vigilant about it every hour and every day so that it doesn't end up there, because there are also second- and third-party sellers that could very quickly pick up a box of the books and put it on Amazon in their own way.

And so it was so hard and such a pain in the ass, but we felt like we had to try it. We first tried this in 2002 with my second book, You Shall Know Our Velocity. We put it out without Amazon, and it was infinitely easier then, because they hadn't yet taken over so much of the industry. You know, whether it's Goodreads or AbeBooks or, you know, all of these different aspects of the publishing industry they have swallowed.

And so back then it wasn't all that hard. We could quickly, you know, direct-distribute to 500 stores and direct-bill them and everything, but it's been so much harder now. And I think that this is a monopoly to end all monopolies. You just don't realize how much power they have, and it's power that we gave them, and it's power that's very hard to take back at this point. Anyway, it mirrored very closely some of the dystopian, uh, outcomes in the book, to see in real life how hard it is to work around them.

JASON KELLEY
I want to jump in here for a mid-show break to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. So a tip of the hat to them for their assistance.

And now back to our conversation with Dave Eggers. Even though he spends a lot of time sounding alarm bells about the dark side of digital life – much like us at EFF – he’s not anti-technology. Actually, back in the ‘90s, he was a self-described early adopter.

DAVE EGGERS
I was a pretty early adopter. I had the first, the second, the third Apple, you know, the Mac. I learned desktop publishing because of how they sort of democratized access for those of us that can't, uh, do any math. You know, computers before Apple were completely inaccessible to a mind like me, but suddenly, uh, because of how Apple made it aesthetically inviting and sort of, you know, created an interface that, you know, the humanities people could, uh, understand, it made me a publisher. It made me a graphic designer. It gave so much access in so many ways. And so I was such a devotee way back when. And, uh, I think because I was so enamored with so much of this technology, I got really disappointed when it took so many dark turns, like the conglomeration of wealth and power, the surveillance aspect of it. Every new twist just made me really sad, because, uh, because I love so much of it. As an example, the laptop I work on on a daily basis is 19 years old. And all my software I bought then, and I own it. And I know that EFF talks about this a lot. Like, uh, I can repair it, you know, so it gets into Right to Repair, the software. I never have to update, because I bought it. I own it. I shouldn't have to have an ongoing relationship with this company. I own the stuff. I know how to use it.

And I think that that relationship to technology is sort of how I like it. Like, you buy something, you buy a great tool that brilliant engineers put together, and that's it. That's the relationship, I'd say. See you later. Thank you very much. I admire what you've done. Now leave me alone. Let me use the tool that you created. But I think that this relationship where you can never be free, where, uh, you have to report back to them, where they essentially own what you bought.

This machine that you thought you bought is sort of on loan, in a way, or tethered to the mothership. I think that's very disturbing, and very unfree. And very much different from the relationship that we had in the early days to these, uh, these great machines and technologies, right?

CINDY COHN
Yeah, absolutely. I mean, this really sets up one of the main things I wanted to talk about with you, which is: what does it look like to you if we get it right? What are the kinds of things that we would experience in our day-to-day lives involving technology? How would it feel different?

What would it look like if we came up out of this dystopia into something that really worked? And already I'm hearing a piece of it, which is, you know, at EFF we say: you bought it, you own it, right? Like, the tools that you are using every single day are yours. And you can tinker with them and play with them and decide how you wanna interact with them, and they're not tethered to a mothership for that. That's a beautiful one. I'm wondering about that, but also if there are any other things that you think of.

DAVE EGGERS
Yeah, I think none of these tools should require an internet connection, you know? I mean, until recently I didn't have one at home, and, um, I went intentionally to a library or something to do email or any sort of interaction I needed to do there. And it was great.

It was, like, intentional. That was my hour online. But the fact that so many of these things require being online at all times to use them, um, and that when I see kids, even if they're writing a paper, they have to get online to access their Google Doc or something, just seems so absurd. So I think that too, you know: you should have the option of doing any kind of work offline.

And overall, I think we know that every additional minute we spend on a screen, we're less happy. I mean, every sociologist, every psychologist, every study, everything points to that, especially when we talk about young people, you know? Um, we know that there are stratospheric levels of teen depression, especially among girls. And, uh, we know that this has gone up in parallel with the rise of social media.

So how do we, as a species, allow ourselves to live an analog life and choose an hour here, an hour there to be connected, but not channel everything that we do through a screen, through a phone? How do we make sure that we live intentionally and make those choices, and, um, use the technology when it makes sense and avoid it when it doesn't make sense? We're at a weird inflection point right now where much of education is being channeled through screens, and that, you know, accelerated a thousandfold during the pandemic.

But we need to come back and say, we know it's unhealthy for kids to spend so much time on screens, so the educational system has got to be at the forefront of getting them off screens. You shouldn't have to be turning in your homework on a screen; you shouldn't be reading on a screen unnecessarily. There needs to be a diversity of experiences where paper books are used in the mix in the classroom, and three-dimensional, sort of in-person experiments and tinkering and outdoor learning, all of these different things, with screens being, what, 10, 15% of that experience, you know, or whatever that healthy balance is.

CINDY COHN
There's something almost very American about this idea: like, if a little is good, a lot's gonna be better, right? We continue to hear from a lot of kids that having, uh, a digital life that isn't tracked by their parents, um, can be a real lifeline. But as a society we have doubled down and tripled down and quadrupled down on the fact that everything is tethered, everything is centralized, and of course everything tracks you, which is a lot of the work that Jason does with student privacy. Um, because the other side of this is, it's not just that the kid is being forced to be online, it's that the kid is being watched all the time.

But we take something where a little of it might be great, and the next thing you know, everybody has pie for dinner every night, you know?

DAVE EGGERS
Right. I mean, that's exactly it. And you know, when it comes to parental and student surveillance, it's all coming from pretty much a good place. We love our kids; parents love their kids. Teachers wanna make sure, you know, administrators wanna make sure, everybody's safe. And so there are cameras in every classroom, and parents track their kids through, you know, Find My Phone, because they care about them. They want to know that they're somewhere. But at what cost? You know, for a young person that's been surveilled every day of their life since they were 10 years old, it's a very different experience. And it's no wonder that so many students, young people, have trouble with independence and problem-solving and dealing with, uh, so-called adulting: they've been, you know, living in a state of constant surveillance since they've been cognizant.

And I think that we have got to be comfortable with some mystery, some nuance. When your teenager goes out, they tell you where they're gonna go, and you have to trust them, you know, and then they come back, which is the way it was for 10,000 years. But I think that we've gotta think about the way that we're using these technologies and the way that we are, I don't know, I guess accelerating this very rapid change in the species to be one that, uh, is comfortable living in a panopticon, as opposed to one that is truly free.

JASON KELLEY
I look forward to the utopia that you write about as the sequel to The Every. Not that you're planning on doing that, but I think we would love to hear all the good things that could happen instead of all of the terrible things that do happen.

DAVE EGGERS
You know, it's so easy. I mean, I know I sound like such a grump, but, uh, it's so easy to live in balance. I just think you have to make choices. I can't have a smartphone because I find them too addictive. So I have a flip phone. I've always had a flip phone, and to me, that's the right balance. And even then, I can get this one weird little newsfeed on my flip phone, and I check that too much. But I know what's too much for me, and it took me a lot of years to find that balance. I knew if I had internet access all day, I wouldn't work. And so I had to make that choice, to say, I'm not finishing the day where I want to finish it. I don't feel good.

And I think if we all, if employers allowed this, if schools allowed it, if parents allowed it, everybody could make that intentional choice of how they feel best at the end of the day, and then we're a happier species.

We have to remember that we are animals, you know. We really do benefit from being outside and being unstructured and being bored sometimes and being totally free, and not, you know, checking in on the mothership all the time, just being completely sort of untethered. And I think that's what's missing and that's what's making us sick. You know, that's why there's a societal malaise so many people feel, and that kind of empty feeling they're feeling is because they're not giving themselves all of these things that, for, you know, 10,000 years, we've needed as a species.

And I think we just have to remember what we need to feel good to feel fully human. And if we can put all those things in balance and use these wonderful tools, you know, in balance, then I think, uh, we're gonna be okay. But we gotta think about it.

JASON KELLEY
It was really fascinating to talk to Dave Eggers, because his new novel really articulated a lot of the concerns that EFF has. And also, he's someone whose books I've been reading for, frankly, decades. So I was really excited to speak with him and learn, you know, how he came to these conclusions. And in that conversation I learned a lot about what he sees as the best solutions people can take to fix some of the problems he's described with our digital lives. Cindy, what parts of the conversation really struck you as something that you'll come away with?

CINDY COHN
I mean, I think the thing that Dave is, um, a master of is kind of thinking about, again, what Larry Lessig called social norms, but more like what inside us is drawing us to build some of these bad tools that are not in our interest, and kind of excavating that a little bit. And I think it's important, both because it's really good storytelling and it really brings things to life, but also, you know, recognizing the role that we as individuals or as a society play in, you know, going towards surveillance as a solution to every problem, and believing in the modern-day phrenology that you can tell, you know, what's going on in somebody's head from their facial expressions or their clicks on a keyboard or something like that. Um, I think that that's his job. Um, I don't think that that's the only way to think about solutions. I think that law and the way we code and markets can all play a role in getting out of this hole. But I also think he points out that once you get one company that is very, very powerful, those other tools seem less and less available. And, you know, we have a story in The Every about a legislator, um, who's trying to bring some balance into the world.

And, you know, the fact that this company is surveilling everybody means that they can bring out whatever, um, piece they need to neutralize them. So the other three levers become less important or less available to us, the more we give power to one company or one set of leaders to do all the rest of it.

So I think he just does a beautiful job of bringing that to light. And then it's our job, you know, as the activists, to try to figure out how to take that and combat it. But it's important to recognize that there are powerful forces that are leading a lot of people to choose things that really aren't the right things for themselves…

JASON KELLEY
I think that's 100% right. You know, I was really struck by his idea of the right to an analog life and the hope that we could build systems that allow people the ability to step away from tech if they want to. What did you think about that?

CINDY COHN
Yeah, I think that's really right. And, you know, for some people that's a matter of personal choice, but for others it's a matter of building systems that really allow it, especially in the context of, say, school. He talked about the need for kids to be able to not be in front of screens the whole time that they are in school, and having this balance and this mix.

And I think that's really right. Again, I think there are individual choices here too, but there are also societal choices that facilitate that. And, you know, the more marginalized you are in society, and the less power you have, the fewer of those choices you have.

JASON KELLEY
And the right to an analog life is really just an interpretation of another idea he brought up, which was our ability to have a balance. An hour a day, if you will – I'm sure it's a lot more for most of us, and that's fine – but an hour a day of using devices, for example. Um, I think that ability to set up that balance for ourselves is really important, and part of that right to an analog life, or at least to have a part of your life be analog.

CINDY COHN
One of the things that he talked about there really, you know, lands in the code question, which is that all of our tools are now tied to being online all the time. And, you know, he's in a position where he can refuse to do that and use older versions of software, but all the rest of us should be able to do that as well. And that's a huge piece of what EFF is trying to do: to urge people to build tools that support choices other than being online all the time, especially in the context of the surveillance business model. I appreciate that Dave has basically been able to build for himself, uh, a more analog life, and I think we wanna figure out ways in which we as a society can support that choice for more people.

JASON KELLEY
That’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member or donate. We are a member-supported organization, so we appreciate all the help that you can give so that we can protect digital rights.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

This is the final episode of this season – thanks for listening, and we’ll talk to you again soon.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.


Music credits

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators

CommonGround by airtone featuring simon littlefield.

Additional beds and alternate theme remixes by Gaëtan Harris.

Josh Richman

Californians: Speak Up To Protect People Seeking Repro and Gender-Affirming Care

1 week 2 days ago

We need your help to advance A.B. 793, a bill authored by Assemblymember Mia Bonta to protect people seeking abortion and gender-affirming care from dragnet-style digital surveillance. It's facing law enforcement opposition as it heads to the Assembly floor for a vote. California's Assemblymembers need to hear from you to stand up for what's right.

TAKE ACTION

Tell Your Assemblymember to Support A.B. 793

EFF is a proud co-sponsor of A.B. 793, which targets law enforcement demands that can compel tech companies to reveal the identities of people in their records who have been near a specific place or looked up particular keywords online. These demands, known as “reverse demands,” “geofence warrants,” or “keyword warrants,” are shadowy tools with questionable effectiveness that have also led police to falsely arrest people simply going about their daily lives. Hundreds (sometimes thousands) of people can be caught up in a single demand without cause.

Following the recent Dobbs decision and an increase in state laws criminalizing gender-affirming care, these broad warrants pose an even greater threat. People are coming to California to get care they can no longer get at home. But law enforcement officials from other states can ask for data on who has been around certain clinics or searched for information on how to seek care in California. Companies such as Google—which also supports the bill—have recently seen a sharp increase in such requests.

Law enforcement officials in California want to stop this bill. Your Assemblymembers need to hear that these demands not only threaten everyone's privacy but also pose a serious, immediate danger to people seeking reproductive or gender-affirming care in California. This bill is a priority bill for the California Women's Caucus and the California Future of Abortion Council, and a component of the NARAL Reproductive Freedom Scorecard. More than 50 civil liberties, reproductive justice, healthcare equity, and LGBTQ+ advocacy groups joined us to support the bill, including NARAL Pro-Choice California, Equality California, and Planned Parenthood Affiliates of California. Add your voice to the chorus.

TAKE ACTION

Tell Your Assemblymember to Support A.B. 793

 

Hayley Tsukayama

EFF at RightsCon 2023

1 week 2 days ago

After three years of virtual gatherings, RightsCon is back! The 12th edition of the world’s leading summit on human rights in the digital age will be a hybrid convening, taking place online through the RightsCon platform and in San José, Costa Rica, from June 5 to 8.

RightsCon provides an opportunity for human rights experts, technologists, government representatives, and activists to discuss pressing human rights challenges and their potential solutions. 

We’re excited that many EFFers are heading to Costa Rica and will be actively participating in this year's event – both online and in person. Several members will be leading sessions and contributing as speakers, as well as being available for networking.

Our delegation includes:

  • Carlos Wertheman, Translations Manager
  • Christoph Schmon, International Policy Director
  • Cindy Cohn, Executive Director
  • David Greene, Senior Staff Attorney and Civil Liberties Director
  • Eva Galperin, Director Of Cybersecurity
  • Jillian York, Director of International Freedom of Expression
  • Katitza Rodriguez, Policy Director for Global Privacy
  • Paige Collings, Senior Speech and Privacy Activist
  • Shirin Mori, Senior Design & Research Lead
  • Veridiana Alimonti, Associate Director For Latin American Policy

We hope you have an opportunity to connect with us at the following: 

Tuesday 6 June  

Confronting cybercrime laws and human rights abuse, a local to global discussion
10:15 - 11:15 CST
Katitza Rodriguez, Policy Director for Global Privacy
Host institutions: ARTICLE 19, Electronic Frontier Foundation (EFF)

Around the world repressive governments are increasingly exploiting cybercrime legislation to arbitrarily restrict the freedom of expression and access to information. Looking at case studies in Asia and Latin America, this session aims to identify the relationship between national and global level cybercrime laws, and to propose systematic solutions for civil society.  

Artists have digital rights, too: SCP 2.0 and advocating for artistic expression online
11:30 - 12:30 CST [ONLINE]
Paige Collings, Senior Speech and Privacy Activist
Host institutions: Electronic Frontier Foundation (EFF), National Coalition Against Censorship (NCAC), Don't Delete Art  

Addressing the lack of art-specific analysis of content moderation mechanisms, this session convenes artists and curators to discuss challenges they face in sharing artistic content online, with the goal of engaging arts communities in digital rights work, and forging new advocates for artistic expression online.  

The next frontier for the UNGPs: protect, respect, and remedy in the metaverse
14:00 - 15:00 CST
Katitza Rodriguez, Policy Director for Global Privacy
Host institution: Article One Advisors

In a virtual world, how should we interpret international human rights standards and frameworks that were designed with the physical one in mind? This session will discuss the opportunities and challenges to implementing the UN Guiding Principles on Business and Human Rights’ ‘Protect, Respect, Remedy’ framework in this next, virtual frontier.   

Interoperability: from buzzword to a map of solutions
14:00 - 15:00 CST [HYBRID]
Cory Doctorow, EFF Special Advisor 
Host institution: Fundação Getúlio Vargas (FGV) Law School, Center for Technology and Society (CTS)  

This session will be devoted to understanding the promises and challenges of interoperability in digital ecosystems, and why it could matter for the protection of people’s rights online. Speakers will outline the legal and economic reasons underlying voluntary and mandated interoperability, and discuss the governance challenges in creating, monitoring and enforcing interoperability. 

The police paper trail: scrutinizing state surveillance with public records
15:15 - 16:15 CST [ONLINE]
Beryl Lipton, Investigative Researcher
Host institution: Electronic Frontier Foundation (EFF)

Law enforcement entities around the world have access to dozens of tools for everyday dragnet surveillance. Attendees will discuss the civil rights implications of indiscriminate state scrutiny, discuss ways surveillance tools have been adopted and used in various countries, and hear about local and national laws intended to curb abuses (and the lack thereof).  

Wednesday 7 June

So you want to be an Executive Director?
10:15 - 11:15 CST
Cindy Cohn, Executive Director
Host institutions: Access Now, Electronic Frontier Foundation (EFF)

Five digital rights Executive Directors from five continents will hold an open discussion with participants in order to exchange knowledge, experiences, and insights with fellow and forthcoming leaders in the sector. We will discuss the opportunities and challenges of becoming a leader in the digital rights world, with attention to global, regional, and national perspectives.

Internet service providers and data privacy rights: challenges, trends, and ways forward in Latin America and Spain 
14:00 - 15:00 CST [HYBRID]
Veridiana Alimonti, Associate Director For Latin American Policy
Host institution: Electronic Frontier Foundation (EFF)

Since 2015, ¿Quién Defiende tus Datos? /¿Dónde Están mis Datos? reports in Latin America and Spain have held Internet Service Providers accountable by comparing their privacy policies and practices, especially on government demands for user data. The session will explore achievements, gaps, and concerning trends found throughout the series. 

Thursday 8 June

Co-creating the online spaces we want in a (fe)diverse and decentralized internet
9:00 - 10:00 CST
Jillian York, Director of International Freedom of Expression
Host institutions: Center for Studies on Freedom of Expression and Access to Information (CELE), European Center for Not-For-Profit Law (ECNL)

Against the backdrop of Elon Musk’s purchase of Twitter and a (renewed) interest in decentralized social media platforms, this session will explore the challenges and opportunities of the ‘fediverse’ as well as emerging technologies such as web3. Civil society has a real opportunity to redefine how platforms should operate and how content could be moderated.

Rethinking transparency reporting
11:30 - 12:30 CST
Jillian York, Director of International Freedom of Expression
Host institution: Ofcom

From the UK’s Online Safety Bill and the EU’s Digital Services Act to Australia’s Online Safety Act, transparency reporting obligations are featuring increasingly prominently. But what does effective and meaningful transparency reporting look like? And why do we need to have this discussion now? Participants are invited to reflect on what meaningful transparency reporting looks like.

The promise and peril of immersive technologies: accessibility, privacy, and discrimination
15:15 - 16:15 CST
Katitza Rodriguez, Policy Director for Global Privacy
Host institution: Future of Privacy Forum (FPF)

Immersive technologies like extended reality hold enormous potential to improve the quality and accessibility of education, health, and entertainment. This panel will explore these challenges and possibilities, including how data collected by and for these technologies could potentially be used in harmful ways, and what platforms, regulators, and users can do to avoid these harms.  

In addition to these events, EFF staff will be attending many other sessions at RightsCon, and we look forward to meeting you there. You can view the full programming, as well as many other useful resources, on the RightsCon Summit Platform.

Registration for RightsCon is still open and closes on June 2 at 23:59 PST, and online participation is free. Get your ticket now! 

Paige Collings

Victory in California! Police Instructors Can’t Claim Copyright Protections to Block Release of Use-of-Force and Other Training Materials

1 week 3 days ago

After a two-year legal battle, the state agency that certifies police officers in California has agreed to EFF's demand that it stop using copyright concerns as a predicate to withhold law enforcement training materials from public scrutiny.

The immediate impact of this victory for transparency is that the public can now visit the website of the California Commission on Peace Officer Standards and Training (POST) to inspect 19 previously unseen training outlines. These documents cover a variety of sensitive issues, such as police shootings and internal affairs investigations, and were developed by the California Peace Officers Association (CPOA), which represents more than 23,000 law enforcement officers across the state. The longer-term impact is that law enforcement agencies across California will no longer be able to rely on POST's practices to justify their own decisions to withhold training records on copyright grounds.

The story behind this case dates back to 2018, when California State Senator Steven Bradford introduced legislation recognizing that a key ingredient of police accountability is allowing the public to scrutinize the training officers receive on practices such as police stops, use of force, and surveillance. SB 978 requires all local law enforcement agencies and POST to publish their policy manuals and training materials on their websites, at least to the extent that those records would be releasable under the California Public Records Act. With EFF's support, the California legislature passed the bill, and it went into effect in January 2020.

However, when EFF went to POST's OpenData hub to review training materials on issues such as police shootings, automated license plate readers, and face recognition, we found that the documents were completely redacted. Even the training on California public records requests was, ironically, redacted. All that was left was a single line: “The course presenter has claimed copyright for the expanded course outline.” While courses developed by California agencies are by default in the public domain, third-party private entities can also seek POST certification for their training materials.

These redactions were unacceptable. When a trainer asks for the imprimatur of the state government for their presentations, they have to submit these materials for approval—and at that point, the document becomes a government record and should be open to public inspection. Otherwise, the public can’t assess the content of the training police receive or ensure that POST is doing proper quality control.

After EFF sent a letter demanding the documents be published in full, POST re-reviewed the face recognition and license plate reader documents and concluded there was no statutory basis for withholding the records. However, POST refused to publish 19 training outlines created by CPOA, still citing copyright as an excuse.

Again, this was outrageous. As the self-proclaimed voice of law enforcement officers statewide, CPOA represents the interests of its members first and foremost, not the California public. So, when its instructors deliver a training about use of force or police shootings, one would expect the guidance to be skewed toward protecting the officer from litigation, termination, and prosecution, rather than subjecting them to accountability. We say “expect” because without being able to review the materials, the public can’t know for sure.

So, in May 2021, EFF sued POST. And although the case dragged on for two years, we won.

Not only did POST agree to release the materials, the commission signed a settlement stating that from now on it will no longer redact course outlines to conceal copyrighted material, nor will it retain any redactions applied by trainers to protect copyrighted materials.

The newly released records are available on DocumentCloud (links below) and through POST's OpenData site.

A lot of information in the records is outside EFF’s expertise as a digital rights and transparency organization, and so we leave analysis to experts on police practices and advocates for police reform or abolition. Unfortunately, in some cases, the outlines are so lean they raise questions about whether POST is adequately reviewing the content before certifying the courses. However, it’s worth noting that some presentations do include instruction of clear concern. For example, the training on “Legal Implications for Use of Force” coaches police officers on a variety of techniques for manipulating juries and oversight bodies, down to mannerisms and “Dressing for Success.”

EFF’s work on SB 978 also extends statewide. In April 2021, EFF partnered with Stanford Libraries’ KNOW Systemic Racism project to compile and publish links to more than 450 California law enforcement agencies’ policy manuals and training materials.

As communities observe the third anniversary of George Floyd’s killing, this win reaffirms EFF’s commitment to supporting the Black-led protest movement that rose in response to police violence. By defending access to law enforcement records—be it police trainings, fusion center agreements, search warrants, or drone videos—we can ensure that the public has the evidence it needs to expose abuses of power. 

Dave Maass

Civil Liberties Groups Demand California Police Stop Sharing Drivers’ Location Data With Police In Anti-Abortion States

1 week 3 days ago
This sharing by 71 California police agencies violates state law, and the data could be used by other states to identify and prosecute abortion seekers and providers.

SAN FRANCISCO—Seventy-one California police agencies in 22 counties must immediately stop sharing automated license plate reader (ALPR) data with law enforcement agencies in other states because it violates California law and could enable prosecution of abortion seekers and providers elsewhere, three civil liberties groups demanded Thursday in letters to those agencies.

The letters from the Electronic Frontier Foundation (EFF), the American Civil Liberties Union of Northern California (ACLU NorCal), and the American Civil Liberties Union of Southern California (ACLU SoCal) gave the agencies a deadline of June 15 to comply and respond. A months-long EFF investigation involving hundreds of public records requests uncovered that many California police departments share records containing detailed driving profiles of local residents with out-of-state agencies.

ALPR camera systems collect and store location information about drivers, including dates, times, and locations. This sensitive information can reveal where individuals work, live, associate, worship—or seek reproductive health services and other medical care.

“ALPRs invade people’s privacy and violate the rights of entire communities, as they often are deployed in poor and historically overpoliced areas regardless of crime rates,” said EFF Staff Attorney Jennifer Pinsof. “Sharing ALPR data with law enforcement in states that criminalize abortion undermines California’s extensive efforts to protect reproductive health privacy.”

The letters note how the nation’s legal landscape has changed in the past year.

“Particularly since the Supreme Court’s decision in Dobbs v. Jackson Women’s Health Organization, which overturned Roe v. Wade, ALPR technology and the information it collects is vulnerable to exploitation against people seeking, providing, and facilitating access to abortion,” the letters say. “Law enforcement officers in anti-abortion jurisdictions who receive the locations of drivers collected by California-based ALPRs may seek to use that information to monitor abortion clinics and the vehicles seen around them and closely track the movements of abortion seekers and providers. This threatens even those obtaining or providing abortions in California, since several anti-abortion states plan to criminalize and prosecute those who seek or assist in out-of-state abortions.”

Idaho, for example, has enacted a law that makes helping a pregnant minor get an abortion in another state punishable by two to five years in prison.

The agencies that received the demand letters have shared ALPR data with law enforcement agencies across the country, including agencies in states with abortion restrictions such as Alabama, Idaho, Mississippi, Oklahoma, Tennessee, and Texas. Since 2016, sharing any ALPR data with out-of-state or federal law enforcement agencies has been a violation of the California Civil Code (SB 34). Nevertheless, many agencies continue to use services such as Vigilant Solutions or Flock Safety to make the ALPR data they capture available to out-of-state and federal agencies.

California law enforcement’s sharing of ALPR data with law enforcement in states that criminalize abortion also undermines California’s extensive efforts to protect reproductive health privacy, specifically a 2022 law (AB 1242) prohibiting state and local agencies from providing abortion-related information to out-of-state agencies.

For one of the new letters from EFF, ACLU NorCal, and ACLU SoCal: https://eff.org/document/sample-alpr-demand-letter-tracy-police-department

For information on how ALPRs threaten abortion access: https://www.eff.org/deeplinks/2022/09/automated-license-plate-readers-threaten-abortion-access-heres-how-policymakers

For general information about ALPRs: https://www.eff.org/pages/automated-license-plate-readers-alpr

The demand letters went to 71 agencies in 22 counties:

  • 12 in Orange County
  • 11 in Los Angeles County
  • 8 in Contra Costa County
  • 7 in Riverside County
  • 6 in San Joaquin County
  • 5 in San Bernardino County
  • 5 in Imperial County
  • 2 in Ventura County
  • 2 in Marin County
  • 1 each in El Dorado, Fresno, Humboldt, Kern, Kings, Madera, Merced, Placer, Sacramento, San Diego, Santa Clara, Solano, and Yolo counties
Josh Richman

Congress Must Exercise Caution in AI Regulation

1 week 4 days ago

Artificial intelligence technologies (AI) are all the rage in Washington D.C. these days. Policymakers are hearing stories of utopian opportunities and certain doom from technologists, CEOs, and public interest groups, and are trying to figure out when and how Congress should intervene.

Congress should be paying attention to AI technologies. Many are tools with extraordinary potential. They can help users distill large volumes of information, manage numerous tasks more efficiently, and change how we work – for good and for ill, depending on where you sit. Influential corporate and government actors recognize the ability of AI to redistribute power in ways they can’t control, which is one reason so many are seeking Congressional intervention now.

But Congress should regulate with extreme caution, if at all, and focus on use of the tools rather than the tools themselves. If policymakers are worried about privacy, they should pass a strong privacy law. If they are worried about law enforcement abuse of face recognition, they should restrict that use. And so on. Above all, they must reject the binary thinking that AI technologies are going to lead to either C-3PO or the Terminator.

Unfortunately, policymakers seem more inclined to move fast and break things.

AI Technologies Should Not Be Regulated by a Commission

At recent hearings, several Members of Congress proposed creating an independent government commission with extraordinary powers over AI technology, including the ability to license AI technology development.

This is a bad idea. Historically, agencies like these are created when an industry has reached a certain level of maturity and is an essential part of our society and economy. For example, independent commissions oversee telecommunications, medicine, energy, and financial securities. AI technologies, by contrast, are in the early stages of development and are being integrated into many different industries. As a practical matter, it's hard to imagine how a single agency could operate effectively.

What is worse, forcing developers to get permission from regulators is likely to lead to stagnation and capture. An army of lobbyists, with access to legislators through campaign contributions and revolving doors, will ensure that such an agency grants licenses only to the most well-connected corporations.

Expanding Copyright Will Undermine AI Potential

The same holds true for another set of proposals focused on copyright reform. Rightsholders insist that they are owed compensation for things like the use of training data, even though the use of training data is likely protected under fair use. Much of this stems from a major misunderstanding of how AI generative tools work, which we explain here. Simply put, machine learning does not rest on copyright infringement.

Others may realize as much, so they are looking to change the law to make it so. We’ve seen this before with broadcast television and cable systems. Broadcasters claimed they had a copyright in the free over-the-air broadcast signal that cable companies were retransmitting on cable TV. The Supreme Court disagreed and found that no copyright interest rests in the broadcast signal, so TV broadcasters got Congress to create a new right to compensation for “retransmission.”

But even if Congress were to do that for AI training data, who should get paid, and how much? Training data could touch billions of points of information to formulate an output that’s worth very little money. No one wants to believe they will be given only a millionth of a penny or less per use, but that is what happens if you divide the value of the output by the vast volume of inputs: spread even a $10 output across a billion training inputs, and each input is worth a millionth of a cent. And no one will be able to create an AI tool that relies on billions of data points if the costs of doing so are increased to unsustainable levels.

Ernesto Falcon

To Save the News, We Must Shatter Ad-Tech

1 week 4 days ago

This is part two of an ongoing, five-part series. Part one, the introduction, is here. Part three, about banning surveillance ads, is here.

The news is in trouble. It’s not just the mass closures of newsrooms - it’s also the physical and ideological attacks on journalists. News websites are plastered with ads, but more than half of the money those ads generate is siphoned off by ad-tech companies, with the lion’s share going to just two, Google and Meta, whose ad-tech duopoly has allowed them to claim an ever-greater share of the income generated by ads placed alongside news content.

Once, tech platforms promised that “behavioral advertising” would be a bonanza for both media companies and their tech partners. Rather than paying commissioned salespeople to convince firms to place ads based on a publication’s reputation and readership, media companies would run ads placed by the winners of a slew of split-second auctions, each time a user moved from one page to another. 

These auctions would offer up the user, not the content, to an array of bidders representing different advertisers: “What am I bid for the right to show an ad to a depressed, 19 year old male Kansas City Art Institute sophomore who has recently searched for car loans and also shopped for incontinence pads?” In an eyeblink, every ad-slot on the page would be filled with ads purchased at a premium by advertisers anxious to reach that specific user. And that user will like it! They will be grateful for the process and all the “highly relevant” advertisements it dangled under their nose.

Such an arrangement has numerous moving parts. The “ad-tech stack” includes:

  • A “supply-side platform” (SSP): The SSP acts as the publisher’s broker, bringing each user to market and selling their attention on the basis of their “behavioral” traits;
  • A “demand-side platform” (DSP): The DSP represents the advertisers, consulting a wishlist of specific behavioral traits that each advertiser wants to target;
  • A marketplace: The marketplace solicits bids on behalf of the SSP, collects bids from DSPs, and then consummates the transaction by delivering the winning bidder’s ad to the SSP to be crammed into the user’s eyeballs.

There are many companies that offer one or two of these services, but the two biggest ad-tech companies - Meta and Google - offer all three.

That means that there are millions of transactions every single day in which Google (representing a publisher) tells Google (representing the marketplace) about an ad-slot for sale; whereupon Google (representing many different advertisers) places bids on that ad-slot. Once the sale is consummated, Google earns three different fees: one for serving as the seller’s agent, another for serving as the buyer’s agent, and a third for the use of its marketplace.
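To make the triple-fee arrangement concrete, here is a minimal sketch in Python. The fee rates are illustrative assumptions of ours - the real cuts are not public - but they show how fees at all three stages compound until more than half of the advertiser's dollar is gone:

# Minimal sketch of the "full-stack" fee flow described above.
# All three fee rates are assumptions for illustration, not actual figures.

ADVERTISER_SPEND = 1.00   # what the advertiser pays for one impression

DSP_FEE = 0.22       # buyer's-agent fee (assumed)
EXCHANGE_FEE = 0.18  # marketplace fee (assumed)
SSP_FEE = 0.25       # seller's-agent fee (assumed)

after_dsp = ADVERTISER_SPEND * (1 - DSP_FEE)
after_exchange = after_dsp * (1 - EXCHANGE_FEE)
publisher_revenue = after_exchange * (1 - SSP_FEE)

print(f"publisher receives ${publisher_revenue:.2f}")                      # $0.48
print(f"one company collects ${ADVERTISER_SPEND - publisher_revenue:.2f}") # $0.52

With these assumed rates, the single company on every side of the trade keeps $0.52 of each advertiser dollar - "more than half," as noted above.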

What’s more, Google is also a major publisher, offering millions of ad-slots for sale on YouTube and elsewhere. It is also an advertising agency, buying millions of those selfsame ad-slots on behalf of its business customers.

There are no parallels for this in the real world: imagine if the owners of the New York Stock Exchange were also a brokerage house and an underwriting bank - as well as owning several of the largest businesses on the exchange, and buying huge amounts of stock on its own exchange.

Imagine if a real estate agent represented both the buyer and the seller, and also owned the listing service, and also bought and sold millions of houses, bidding against its own buyer-customers and competing for sales with its own seller-customers.

Imagine if a divorce lawyer represented both parties, and was also the judge in the divorce court, and was also trying to match both of the soon-to-be-single parties on a dating service. 

Owning the marketplace lets Google give preference to its own brokers, on both the advertiser and publisher sides. Being on both sides of the transaction lets Google tweak the bids and the acceptances to maximize its own revenue, by rigging the auctions to charge advertisers more and pay publishers less.

It’s not just Google: Meta also operates a dominant, “full-stack” ad system, intimately connected to its multiple platforms, including Facebook and Instagram, where it competes with the publishers it brokers ads for. Just like Google, Meta represents buyers and sellers on a marketplace it controls, and rigs the bidding to benefit itself at the expense of both.

Even worse, Google and Meta are alleged to have illegally colluded to rig the market, creating a system of nearly inescapable disadvantages, where sellers and buyers had nowhere to turn.

The ad-tech market isn’t a market at all: it’s a big store con where everyone the publisher sees is in on the game: the buyer’s agent, the seller’s agent, and the marketplace where they bring the publisher’s product are all run by a single company, or by two companies that have secretly agreed not to compete. If you can’t spot the sucker at the poker table… you’re the sucker.

That’s how ad-tech grew to consume more than half of all the ad dollars spent. They stole it.

This needs to be fixed. The actually illegal stuff - market rigging - is the kind of thing that antitrust enforcers frequently go after. They’re on it.

But even if the ad-tech duopoly is ordered to halt its most obviously egregious conduct, that will not be enough. It’s not enough to make the companies pinky-swear that they won’t use their power as agents for buyers and sellers in their own marketplace to enrich themselves at publishers’ expense.

Ask any lawyer. Ask any judge. Ask any sports-fan. The only way to resolve a conflict of interest like that is to eliminate it. The referee can’t own the team. The team can’t own the referee. The judge can’t hear their kid’s case. Your lawyer can’t work for your opponent.

And an ad-tech company can’t be the marketplace, the buyer’s agent and the seller’s agent. 

That’s where the AMERICA Act comes in. Introduced by Sen. Mike Lee [R-UT], the bill is truly bipartisan, numbering among its co-sponsors both Sen. Ted Cruz [R-TX] and Sen. Elizabeth Warren [D-MA], and many other powerful senators from both sides of the aisle.

Under the AMERICA Act’s provisions, companies like Google and Meta would be forced to sell off or shut down their demand-side (buyer) platforms and their supply-side (seller) platforms. No large company (processing $20 billion per year or more in ad transactions) that operated an ad exchange would be allowed to represent the buyers and sellers who used that exchange. Likewise, no buyer-side platform could operate a seller-side platform, and vice-versa.

For smaller companies - those transacting between $5 billion and $20 billion per year in ad sales - the AMERICA Act establishes a duty to “act in the best interests of their customers, including by making the best execution for bids on ads,” and to maintain transparent, auditable systems so that buyers and sellers can confirm that this is the case. Companies that represent buyers and sellers would need “firewalls” between the two sides of the business, with stiff penalties for conflicts of interest.
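As a rough way to see how the Act's tiers fit together, here is a short Python sketch paraphrasing the obligations described above (the dollar thresholds come from the summary above; the function itself is ours, not bill text):

# Sketch of the AMERICA Act's two tiers as summarized above - a paraphrase,
# not statutory language. Thresholds are annual ad-transaction volume.

def america_act_obligations(annual_ad_transactions_usd: float) -> str:
    if annual_ad_transactions_usd >= 20e9:
        # Large firms: structural separation. An exchange operator may not
        # also run the buyer-side (DSP) or seller-side (SSP) platforms.
        return "structural separation: cannot combine exchange, DSP, and SSP"
    if annual_ad_transactions_usd >= 5e9:
        # Mid-size firms: duties to customers, auditable systems, and
        # firewalls between buyer-side and seller-side businesses.
        return "best-execution duty, transparency, and internal firewalls"
    return "below the Act's thresholds"

print(america_act_obligations(25e9))  # a duopoly-scale ad business
print(america_act_obligations(8e9))   # a mid-size ad-tech firm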

This kind of rule was once a bedrock of American competition regulation. When too-big-to-fail bankers and too-big-to-jail rail barons brought America to the brink of ruin, regulators imposed “structural separation” on these platform businesses, prohibiting them from competing with their own customers. 

That meant that railroads couldn’t compete with the freight companies that shipped goods on their rails. It meant that banks couldn’t own businesses that competed with the companies they loaned money to. The railroads and the banks could swear that they would never “self-preference,” but the temptation to do so is strong, the chance of getting caught is low, and the consequence is the conversion of American industry into a planned economy run by a handful of cozy CEOs.

For years, the ad-tech duopoly swore that they would never yield to the temptation to rig the game in their favor. But they couldn’t help themselves. That’s not surprising: conflict-of-interest rules don’t just exist to thwart the dishonest, they exist to steer the honest-but-fallible away from temptation. And whomst amongst us can claim to be infallible?

For the news industry, the AMERICA Act is an incredible opportunity. Simply changing the distribution of ad-dollars - reducing the share going to the platforms to a more modest 10 percent, say - could give publishers a 20 percent increase in ad revenues, while reducing the cost of advertising by 20 percent. 
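A quick sanity check of that arithmetic, assuming (per the illustration above) that ad-tech currently keeps slightly over half of each advertiser dollar:

# Back-of-the-envelope check of the claim above. The 52% platform share is
# the same illustrative assumption used earlier ("more than half").

current_spend = 1.00
current_platform_share = 0.52
current_publisher_revenue = current_spend * (1 - current_platform_share)  # $0.48

new_spend = current_spend * 0.80   # advertisers pay 20% less
new_platform_share = 0.10          # the "more modest 10 percent"
new_publisher_revenue = new_spend * (1 - new_platform_share)              # $0.72

gain = new_publisher_revenue / current_publisher_revenue - 1
print(f"publisher revenue up {gain:.0%}")  # +50%

Even with advertisers paying 20 percent less, publishers' take rises by roughly 50 percent under these assumptions - comfortably above the 20 percent increase claimed above.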

That’s good for everyone. Giving publishers their fair share of ad revenue means they won’t have to plaster their websites with content-obscuring ads. Reducing costs for advertisers means that goods can be sold more cheaply. 

The AMERICA Act affirms something that everyone understands in their bones: you can own the league, you can own a team, or you can referee the game - but you can’t do all three and still run an honest game.

Cory Doctorow

How Do Different Encrypted Messaging Apps Treat Deleted Messages?

1 week 5 days ago

One feature of various end-to-end encrypted (E2EE) messaging apps, and of some non-E2EE social media messengers, is disappearing messages, which automatically delete after a set period of time. This feature may be useful for general privacy within your extended network, for high-risk users, and for preemptively clearing side conversations within linear chats. However, different messaging apps handle deleted and disappearing messages a little differently, in particular when it comes to quoted messages, chat backups, and screenshot notifications. It’s important to note that this isn’t a vulnerability in the software, but it could cause someone to change their threat model—the way that they think about protecting their data and privacy. Below, we note the variance between different apps.

How Signal Handles Deleted and Disappearing Messages in Replies

When a user on Signal deletes a message that was previously quoted in a reply, the reply still shows around 70 characters of the deleted message.
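One way to picture why the quoted text can outlive the original is that the reply plausibly stores its own truncated copy of the quoted message rather than a pointer to it. The sketch below illustrates that idea; it is a hypothetical model, not Signal's actual implementation.

```python
# Hypothetical sketch -- NOT Signal's real data model.
# If a reply embeds a truncated *copy* of the quoted message,
# that copy survives deletion of the original, consistent with
# the ~70-character remnant described above.

from dataclasses import dataclass

SNIPPET_LEN = 70  # assumed limit, matching the observed behavior

@dataclass
class Message:
    msg_id: int
    text: str
    quoted_snippet: str = ""  # copied at reply time, not a reference

def make_reply(msg_id, text, original):
    return Message(msg_id, text, original.text[:SNIPPET_LEN])

chat = {1: Message(1, "the gate code is 4417, please don't share it outside this chat")}
chat[2] = make_reply(2, "got it, thanks", chat[1])

del chat[1]                    # the original message is deleted...
print(chat[2].quoted_snippet)  # ...but its first 70 characters live on in the reply
```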

If the disappearing-message timer is changed while someone is replying, the quoted message remains for the new duration set on the reply.

(Screenshot: a disappearing-message timer set to 4 weeks during a reply.)

All the apps we looked at offer manual deletion of messages, but auto-deletion intervals vary. For Signal, the shortest auto-deletion period is 30 seconds. Chat backups in Signal run automatically on a 24-hour cycle or on demand. If a user enables chat backups, then any message visible during that window can end up in their backup file. Thankfully, both Signal and WhatsApp encrypt their backups for added protection in cases where a third party might try to access this information.
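The interaction between disappearing timers and scheduled backups is a matter of timing: a message can be captured by every backup that runs while the message is still alive. Here is a toy illustration; the numbers are assumptions for the example, not any app's real schedule.

```python
# Toy timing model: a message lands in every backup that runs
# while the message is still alive. All numbers are illustrative.

BACKUP_INTERVAL_H = 24   # a daily, Signal-style automated backup
FOUR_WEEKS_H = 24 * 28   # a 4-week disappearing timer

def backups_capturing(sent_at_h, lifetime_h, horizon_h):
    """Backup times (in hours) that fall within the message's lifetime."""
    return [t for t in range(0, horizon_h, BACKUP_INTERVAL_H)
            if sent_at_h <= t < sent_at_h + lifetime_h]

# A message sent at hour 1 with a 4-week timer appears in 28 daily backups.
print(len(backups_capturing(1, FOUR_WEEKS_H, 24 * 60)))  # -> 28
```

The longer the disappearing timer relative to the backup interval, the more backups will contain the message.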

How WhatsApp Handles Deleted and Disappearing Messages in Replies

WhatsApp acknowledges the quoted-reply scenario in its FAQ; Signal should do the same in its documentation:

“When you reply to a message, the initial message is quoted. If you reply to a disappearing message, the quoted text might remain in the chat after the duration you select.”

WhatsApp’s shortest automated disappearing interval is 24 hours. This longer minimum makes it more likely that auto-deleted WhatsApp messages will be captured in backups.

How Facebook Messenger Handles Deleted and Disappearing Messages in Replies

In Facebook Messenger Secret Conversations (E2EE), the original message is removed from quoted text after it is deleted or disappears. However, a message can stay past its auto-delete timer if neither user types in or leaves the chat. This is less worrying in practice, but it is a notable quirk.
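That quirk is consistent with expiry being checked only when chat events fire (typing, opening, or leaving the conversation) rather than by a background timer. The sketch below shows the idea; it is hypothetical, not Messenger's actual code.

```python
# Hypothetical event-driven expiry -- NOT Messenger's actual code.
# If expired messages are only purged when a chat event fires,
# a message can outlive its timer while the chat sits idle.

import time

class Chat:
    def __init__(self):
        self.messages = []  # (text, expires_at) pairs

    def send(self, text, ttl_s):
        self.messages.append((text, time.time() + ttl_s))

    def on_event(self):  # called on typing, opening, or leaving the chat
        now = time.time()
        self.messages = [(t, exp) for t, exp in self.messages if exp > now]

chat = Chat()
chat.send("expires in 1s", ttl_s=1)
time.sleep(2)
print(len(chat.messages))  # 1 -- still present: no event has fired yet
chat.on_event()
print(len(chat.messages))  # 0 -- purged once an event triggers the check
```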

Secret Conversations also offer screenshot notifications when messages are set to auto-disappear. The shortest auto-deletion interval is 5 seconds, the shortest of the three messengers. There is no chat backup mechanism available to the user on the phone, though conversations are saved on the Facebook platform. Disappearing messages are also removed from local storage soon after they expire.

Documentation Is Key 

We focused mainly on E2EE apps, but other social media apps, like Snapchat, also offer disappearing messages. We did not test this reply quirk in Snapchat; however, as with the other apps we looked at, users can save messages or take screenshots.

This is not a software vulnerability, but pointing out the differences in how ephemeral messages are treated is worthwhile, since major E2EE apps apply different parameters. Messages should be fully removed, including from quotes, when they expire or are manually deleted. Small mistakes that you might want erased immediately, with no historical trace, happen all the time in group chats: accidentally pasting a password in a large group chat where you may not know everyone well, or, in more severe cases, sending a message that could get someone reported to law enforcement for seeking reproductive care.

Even with the caveat that someone can always take screenshots of a conversation, ephemeral messages are a very useful feature in many different scenarios. In today’s climate, where private communications are regularly attacked, improving these features and their documentation, and using E2EE communications, will remain essential to exercising your right to privacy.

Alexis Hancock

What the Supreme Court’s Decision in Warhol Means for Fair Use

1 week 5 days ago

The Supreme Court has issued its long-awaited decision in Andy Warhol Foundation v. Goldsmith, a fair use case that raised fundamental questions about rights and obligations of commercial artists. The Court’s opinion did not answer many of those questions, but happily it affirmed both important fair use precedents and the role of fair use as a crucial element of the copyright system. EFF filed an amicus brief in the case.

These are the basic facts: In 1981, Newsweek commissioned Lynn Goldsmith to take a series of photos of Prince. In 1984, she licensed one of those photos to Vanity Fair for artist Andy Warhol to use as a “reference photo” to create his own portrait of the musician. Warhol created a series in various colors, and the magazine chose one of these portraits to illustrate a piece on Prince. In 2016, the Andy Warhol Foundation gave Condé Nast a license to use a different portrait in the series (“Orange Prince”) for use in a special tribute magazine dedicated to Prince. Goldsmith demanded compensation. AWF sought a declaration that Warhol’s portraits made fair use of Goldsmith’s photo and, therefore, it had every right to license the resulting work. A district court said yes, the Second Circuit disagreed, and AWF appealed. Along the way most of the claims and questions were dropped, leaving the Supreme Court with one narrow but important question: whether the first fair use factor—the “purpose and character” of the use—weighed in AWF’s favor or Goldsmith’s.

As a reminder, fair use is the idea that there are certain ways that you can use a piece of copyrighted work regardless of whether you have the rightsholder’s permission, and it's determined by a balancing test that considers four factors—

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

The original black and white portrait photograph of Prince taken in 1981 by Lynn Goldsmith, from the Court's decision.

An orange silkscreen portrait of Prince on the cover of a special edition magazine published in 2016 by Condé Nast, from the Court's decision.

The Second Circuit’s analysis of the first fair use factor caused an uproar among copyright lawyers, and many hoped the Supreme Court would fix it. Factor One, as it’s commonly called, asks whether the new, second use is transformative, i.e., whether it has a new and different purpose and character. The Second Circuit held that Orange Prince and Goldsmith’s photo shared the same basic purpose because they were both “works of visual art” depicting the same person. It rejected any need to look for a meaning or message that isn’t obvious to a reasonable viewer and suggested that even though Warhol’s portrait changed Goldsmith’s photograph to give “a different impression of its subject,” those changes weren’t transformative because the photo was still the “recognizable foundation” for Orange Prince.

While the Supreme Court affirmed the Second Circuit’s ultimate conclusion, it took a different analytical path. The good news for fair use: The Court, in a 7-2 majority, expressly reaffirmed landmark fair use cases like Campbell v. Acuff-Rose and Google v. Oracle. Here’s the key language from the Court’s opinion: “In sum, the first fair use factor considers whether the use of a copyrighted work has a further purpose or different character, which is a matter of degree, and the degree of difference must be balanced against the commercial nature of the use.” The fair use analysis has always been about balancing all of the relevant facts, and if anything, this formulation may serve as a helpful reminder that transformativeness is not a binary.

Here's where it gets tricky: Because it concluded that the Factor One analysis must turn exclusively on the specific allegedly infringing use at issue, the majority was not especially interested in Andy Warhol’s purpose in creating Orange Prince. Rather, it focused on AWF’s purpose in licensing Orange Prince to Condé Nast. Because Goldsmith also licensed her photos of Prince to magazines, the Court concluded, the parties shared “substantially similar” purposes. And because AWF’s purpose was also commercial (the other half of the Factor One test), Factor One favored Goldsmith.

This is a somewhat puzzling approach; normally one would expect a court to focus most of its analysis on comparing the works in question, which is why AWF had offered extensive expert testimony regarding Warhol’s artistic approach and explained how it was different from Goldsmith’s. For the majority, though, the question is not the works themselves but rather the “use” of those works. For example, I might grab a few minutes of Ted Lasso in order to comment on them in a video. That’s one use. But if, long after I’m dead, my video is included in a for-profit compilation of videos about Ted Lasso, that’s another use. In this respect, the Court’s decision could be read to mean that anyone who seeks to re-use a work that itself makes fair use of another work will need to make sure that their re-use, as well as the initial use, is fair.

All of that said, fair use depends on multiple factors, and any concerns we might have about the ruling are tempered by the Court’s even narrower focus on the specific claim at issue. In particular, the Court stressed that it was not expressing any opinion on how Factor One would apply to Warhol’s original creation of the Prince Series or anything else.

While we’re disappointed that the Supreme Court did not take this opportunity to further strengthen fair use law, we hope courts applying Warhol in new cases will heed the Court’s caveats about its narrow application and recognize that its main takeaway is the continued force of Campbell and Google.

Corynne McSherry

SFPD Obtained Live Access to Business Camera Network in Anticipation of Tyre Nichols Protest

1 week 5 days ago

New documents EFF received through public records requests have revealed that the San Francisco Police Department (SFPD) received live access to the hundreds of surveillance cameras that comprise the Union Square Business Improvement District’s (USBID) camera network in anticipation of potential protests following the police killing of Tyre Nichols in Memphis, Tennessee. The protests were impassioned and peaceful as Bay Area residents gathered to oppose police violence, and the SFPD proactively obtained access to watch it all, exposing activists to potential reprisal and retribution for their political beliefs and potentially chilling future participation in demonstrations.

On January 27, 2023, an SFPD commander reached out to the USBID with a 12-hour live monitoring request for the 450 cameras in its network, citing “potential civil unrest” in anticipation of the release of body camera footage of Tyre Nichols’ killing by members of the Memphis Police Department. Reporting in the San Francisco Standard suggests that the SFPD may not have ended up engaging in live monitoring, but simply requesting this access before a protest can chill First Amendment activity. This act also indicates the SFPD is interpreting the ordinance too broadly. The policy states: “SFPD is prohibited from accessing, requesting, or monitoring any surveillance camera live feed during First Amendment activities unless there are exigent circumstances or for placement of police personnel due to crowd sizes or other issues creating imminent public safety hazards." But the SFPD has not shown that, when it obtained live access, there were any imminent hazards or exigent circumstances.

The SFPD was able to seek live monitoring as a result of the controversial September 2022 temporary ordinance that authorized police to receive live access to non-city security cameras for a host of reasons, including to monitor so-called “significant events.” This temporary camera ordinance, which passed as a 15-month pilot, was vigorously opposed by community and civil liberties organizations, including EFF; members of the city’s civilian oversight Police Commission; and four members of the Board of Supervisors. San Francisco’s landmark 2019 privacy law forbade the SFPD from using non-city surveillance cameras, or any other surveillance technology, absent permission from the city’s Board of Supervisors via an ordinance. The department delayed seeking such permission for nearly three years.

This is not the first time that the SFPD has obtained live access to non-city cameras concerning First Amendment-protected activity. EFF and ACLU of Northern California have an ongoing lawsuit against the SFPD, Williams v. City and County of San Francisco, for gaining live access to these same cameras for over a week to surveil protests against the police murder of George Floyd in the summer of 2020. The SFPD’s continued push for live access to business district camera networks creates a dangerous precedent that puts all people participating in peaceful protest at risk of surveillance. 

You can read the documents below.

Related Cases: Williams v. San Francisco
Matthew Guariglia

Newly Public FISC Opinion Is the Best Evidence for Why Congress Must End Section 702

1 week 6 days ago

A surveillance court order unsealed last week that details massive violations of Americans’ privacy by the FBI underscores why Congress must end or radically change the unconstitutional spying program enabled by Section 702 of the Foreign Intelligence Surveillance Act (FISA).

The opinion recounts how for years the FBI illegally accessed a database containing communications obtained under Section 702 and other FISA authorities more than 278,000 times, including searching for communications of people arrested at protests of police violence and people who donated to a congressional candidate. Section 702 authorizes the surveillance of communications between people overseas. But when a person on U.S. soil is in contact with one of these surveillance targets, that leaves their side of the exchange sitting in a database and vulnerable to these warrantless FBI searches. As the opinion says, “Notwithstanding this foreign directed targeting, the extent to which Section 702 acquisitions involve U.S. persons should be understood to be substantial in the aggregate.”

The pervasiveness of the FBI’s failure to comply with even the most modest reforms designed to limit the agency’s surveillance powers reveals two problems that Congress must address as it considers the Administration’s request to reauthorize Section 702.

First, the FBI is incapable of policing itself when it comes to trawling through the communications of Americans without a warrant. “There is a point at which it would be untenable to base findings of sufficiency on long promised, but still unrealized, improvements in how FBI queries Section 702 information,” the court wrote. That point is now. The FBI simply cannot help but violate the law.

Second, the Foreign Intelligence Surveillance Court (FISC) is incapable of protecting Americans from the FBI’s unconstitutional searches of their communications. Since passage of Section 702 in 2008, Executive Branch leaders have argued that judicial oversight would ensure that the FBI and other spying agencies would not illegally intrude on people’s constitutional rights. That has never been true. Yet despite a series of well-documented violations by the FBI, NSA, and CIA, the FISC has consistently approved and reapproved the agencies’ ability to use Section 702. The newly released opinion is just the most recent, and perhaps most egregious, example of a judicial rubber stamp.

Congress can fix both problems by prohibiting the FBI from using Section 702 to engage in “backdoor searches.” Ending this practice will protect the constitutional rights of Americans to be free from warrantless surveillance and provide meaningful oversight of the FBI’s lawless domestic surveillance program. At minimum, the opinion reveals how badly Congress must reform Section 702, including by implementing better transparency measures to enable timely disclosures of agencies’ misuse of the law and a meaningful way for victims to challenge the government’s illegal surveillance.

Secret court kicks oversight can down the road

The FBI’s penchant for violating the Fourth Amendment and other limits on when it can query Americans’ communications obtained under Section 702 and other parts of FISA is well known. Since at least 2015, the FBI has consistently failed to comply with basic limits to prevent its agents from accessing people’s communications without a warrant.

Witnessing this well-documented pattern of lawlessness, the federal court charged with ensuring FISA surveillance is lawful has essentially given the FBI unlimited mulligans for all of its unconstitutional acts.

After recounting a series of disturbing queries that targeted protesters, people involved in purely criminal activity, and those who had donated to a political campaign, the court recognized that “compliance problems with the FBI’s querying of Section 702 information have proven to be persistent and widespread.” Although the court suggested that further incidents might prompt limiting who within the FBI could access information obtained under Section 702, it imposed no other restrictions on the FBI besides those proposed by the agency itself.

The court wrote that it was “encouraged” by the FBI’s woefully inadequate changes to how it queries data, new practices and training, and greater record-keeping and internal audits. And the court once more approved the FBI’s ability to search through Americans’ communications swept up by Section 702.

The FISC’s failure to impose any significant restrictions on the FBI despite its pattern of violating Section 702 and the Constitution is damning. It shows that the FISC appears unwilling or unable to protect us from the FBI’s illegal surveillance, putting the lie to the idea that the judiciary can impose real checks on the Executive Branch’s mass surveillance programs.

Congress must recognize the opinion as a failure of the judiciary’s ability to protect people’s privacy rights. And it should not continue to wait for the FISC to take up that role. Instead, Congress must step up and end this unconstitutional surveillance by refusing to renew Section 702 without critical reforms.

Delayed disclosures hamper basic understanding, oversight

The opinion is also a great example of how the Executive Branch can delay disclosure of its illegal acts and obfuscate basic public understanding of its Section 702 surveillance powers. That ultimately benefits the government, as lawmakers and the public struggle to understand basic details about spying at the same time Congress considers reforming FISA.

For example, for all the information disclosed in the FISC’s opinion, we still do not know how many times the FBI queried Section 702 using search terms that identify Americans. The opinion describes the FBI querying a database of FISA material that appears to come from Section 702 and other parts of FISA that authorize other forms of surveillance. It thus appears that the court could not say, likely because the government never said, what portion of those queries were for data obtained without a FISA warrant under Section 702. The opinion references a complicated web of FBI databases and recordkeeping systems. In one instance, users were directed to document queries on a “separate SharePoint site” because the system itself could not support that feature. Unsurprisingly, there was a “systemic compliance issue involving the failure” to create this documentation.

The inherent secrecy the government employs here means that neither the court nor the public has answers to basic questions, like how many times the FBI improperly queried Section 702 information in a given period. And as the back-and-forth documented in the opinion shows, the government often supplements its initial disclosures after finding other problems. That slow trickle of detail means that no one outside the Executive Branch has a clear picture of Section 702 and any abuse by federal agencies.

Another problem is the time-warp. The FISC issued its opinion recounting the FBI’s abuses on April 21, 2022. It took more than a year for the government to declassify and release the opinion, which became public just as Congress is considering whether to renew Section 702. So although the FISC, the Executive Branch, and likely some members of Congress have known about the opinion for some time, the American public is only learning about it now.

That sort of delay between when major misuses of the FBI’s mass surveillance are discovered and when they are made public is anathema to basic democratic governance. EFF and the ACLU have worked for years to make FISC opinions public, via FOIA suits like the one the ACLU filed earlier this year seeking the disclosure of Section 702 FISC opinions. But that litigation takes time and a lot of resources. The public cannot understand, much less advocate against, the government’s mass surveillance programs under these circumstances.

Greater transparency and more timely disclosures of the government’s mass surveillance programs are sorely needed. At the same time, Congress does not need any more reports or declassified opinions to see what’s really happening here: routine misuse of Section 702 to violate people’s Fourth Amendment rights. Congress can and should put a stop to this.

Related Cases: Jewel v. NSA
Aaron Mackey

EFF to Court: California’s Public Records Law Must Remain a Check on Police Use of Drones

1 week 6 days ago

An increasing number of cities are adding drone flights to their law enforcement tool kit. Public access to appropriately redacted video footage from those flights can provide oversight of police surveillance and help ensure cities are living up to their privacy promises.

Not every second of every drone video should be categorically exempt from public records laws as an investigatory record. That is the argument of an amicus letter filed in California state court last week by EFF, the First Amendment Coalition, and the Reporters Committee for Freedom of the Press.

The case centers on journalist Arturo Castañares, publisher of La Prensa San Diego, who requested drone flight videos created by the Chula Vista Police Department under the California Public Records Act (CPRA). The department, which serves one of the largest cities along the Southern border, touts its program as one of the first in the country to use drones as first responders to emergency calls for police service, and the agency has advocated for law enforcement in other cities, in both the U.S. and Mexico, to create similar programs. EFF has previously raised the alarm that the relative cheapness of deploying drones, compared to helicopters or on-the-ground policing, encourages more surveillance.

In denying the public records request, the city claimed the videos were categorically exempt from disclosure under the CPRA because they are investigatory records. After the requester sued, the trial court agreed with Chula Vista and ruled that it would be unduly burdensome to require the city to review the video footage and release redacted versions. The requester has asked the California Court of Appeal to reverse the trial court’s decision.

EFF’s amicus letter filed in the appellate court argues that the CPRA’s investigatory records exemption must be construed narrowly, and that the city did not carry its burden to prove either that (1) all drone flights derive from a targeted criminal investigation, or (2) every moment of footage is exempt. The city’s own policies note that it deploys drones for noncriminal matters, such as to evaluate damage after a natural disaster, and that its drones record video even before the drone arrives at the scene and when it is returning to base. At minimum, that footage is not likely to reflect any specific investigation. The letter also points to other cases where the government released redacted documents, even though the redaction process was much more burdensome than in this case.

More generally, the CPRA’s investigatory records exemption is not a broad shield that allows police to withhold their surveillance technology from the public. Indeed, EFF and the ACLU of Southern California fought all the way to the California Supreme Court to ensure that the public can obtain records from a similar law enforcement tool: Automated License Plate Readers.

Aside from pointing out legal errors, the letter highlights how appropriately redacted footage can provide an oversight mechanism for new police surveillance. For example, Chula Vista says that it generally tries to avoid recording areas where people have a reasonable expectation of privacy. The policies instruct that drone operators might turn the camera away from sensitive areas, zoom out, or point the camera up at the sky during return flights. Do operators follow those policies in practice? Redacted footage would help the public verify that the police are complying with their own rules.

As police departments increasingly use new surveillance technology, public records requests—in addition to privacy laws and litigation—must remain an essential check on these powers.

“The CPRA is an important accountability tool that should be interpreted to allow oversight of modern law enforcement technologies,” EFF wrote in the amicus letter. “If the investigatory record exemption does broadly shield drone footage from ever being disclosed, it would blunt public understanding of a technology that is being used to replace basic police activity. And that reasoning could worryingly be applied to future technologies.”

May 23, 2023 Update: The third paragraph of this blog has been edited to include the name of the plaintiff in the CPRA case, additional links, and Chula Vista's proximity to the U.S.-Mexico border.

Related Cases: Automated License Plate Readers- ACLU of Southern California & EFF v. LAPD & LASD
Mario Trujillo

From Past Lessons to Future Protections: EFF's Advice to the EU Commission on Extended Reality Governance

1 week 6 days ago

EFF, in partnership with Access Now and the European Center for Not-for-Profit Law (ECNL), has responded to the European Commission's consultation, "Virtual Worlds (Metaverses) – A Vision for Openness, Safety, and Respect." This follows our joint statement on International Human Rights Day in 2021, "Virtual Worlds, Real People: Human Rights in the Metaverse," which called for governments and corporations to uphold human rights within Extended Reality (XR), which includes Virtual Reality (VR) and Augmented Reality (AR). We are now submitting an updated version of that statement.

The term "metaverse" is elusive and open to many interpretations. It has evolved into an umbrella term encompassing numerous concepts, often shaped by the perspectives of those using it. Given this broad and vague scope, which can extend even beyond Extended Reality (XR), our submission neither endorses nor critiques the EU Commission's initiative.

However, EFF strongly urges the EU Commission to consider historical digital rights lessons learned. People need principles that safeguard them from undue state and corporate overreach, which should be the focus of any measures introduced by the EU Commission about the metaverse.

The metaverse does not need to be a single platform, nor does any metaverse need to be owned or controlled by a single entity. Instead, it is more beneficial to consider the metaverse as a generic term for a vast and interoperable network of different VR, AR, and "other services."

XR presents enormous potential for entertainment, education, connectivity, and human rights advocacy. Yet, it also poses risks to these human rights. Our joint statement emphasizes the importance of past experiences in securing human rights within XR, and extending the protection of our rights. We also propose several principles to prevent state and corporate overreach in XR.

You can read the full submission here:

Katitza Rodriguez