The EU's New Message-Scanning Regulation Must Be Stopped

The executive body of the European Union is pushing forward with a proposal that will endanger privacy and security for us all.  When this proposal was made public by the EU Commission last month, we said it was a terrible idea. Today, we’re joining together with more than 70 organizations in Europe and around the world to explain how dangerous this proposed law will be. 

The Commission’s proposal would compel a broad range of technology companies to scan and analyze their users’ messages, in the name of fighting crimes against children. Email, texts, social media messages, and DMs could all be subject to plain-text access and scanning. It could eviscerate end-to-end encryption by installing client-side scanning on our devices. 

Our letter explains the many ways that this EU scanning regulation puts us all at risk. Lawyers, journalists, human rights workers, political dissidents, and oppressed minorities—the people who need secure communications the most—will be the most affected. This also harms abused and at-risk children, who need to securely communicate with trusted adults to seek help.

These vulnerable people will be subject to constant law enforcement scans in the EU. Beyond the EU’s borders, it could be even worse. Once these special access systems are built, we can be sure that more authoritarian countries will demand the same ability to read our messages. 

Today we’re officially asking the EU Commission to withdraw this proposed regulation. If they don’t, we’re going to fight hard to get it stopped in the European Parliament. As our letter explains, the EU has fought to be a “beacon of human rights,” setting global standards for privacy and data protection. “But with the proposed CSA Regulation, the European Commission has signaled a U-turn towards authoritarianism, control, and destruction of online freedom.”  

Our right to have a free and private conversation is critical to democratic participation. EFF will keep fighting for that right—and will remind lawmakers, in any country, that there’s no exception for the online world. 

Joe Mullin

Your Resistance Pauses Axon’s Dangerous Drone Tasers

After recent horrific mass shootings, police vendor Axon announced plans to develop a supposed solution: a remotely controlled drone armed with a taser. In response to this announcement, and in light of objections from Axon’s Ethics Board, EFF called on those concerned to voice their criticism and ask tough questions at the Axon CEO’s Reddit AMA about the project. Now, Axon has halted the project and a majority of its Ethics Board has resigned.

You can read the entire AMA with Axon’s CEO here.

We want to thank everyone who showed up on social media and voiced their concern about this dangerous and (for now) thwarted plan. We also appreciate members of Axon’s ethics board for taking a stand against the project, and then resigning when it looked as if Axon was going to ignore their recommendation.

In an era where police are so unaccountable and have so many different revenue streams for acquiring invasive technology, it’s good to remind ourselves that there is another pressure point: the companies that make and profit off of this technology. We will keep up the pressure and we hope you’ll continue to fight with us.

Disclosure: Until this week, EFF's Surveillance Litigation Director Jennifer Lynch served on the Axon AI Ethics Board in her personal capacity. Her resignation from the board was formally announced on June 6, 2022. 

Matthew Guariglia

When DRM Comes For Your Wheelchair

Why EFF Supports Colorado’s Right to Repair Wheelchairs Law

Wheelchairs Break

Three million Americans rely on wheelchairs, which makes wheelchairs a key driver of the $50 billion Durable Medical Equipment industry. Many people depend on wheelchairs to help with the basic necessities of life: getting around the house, going to work, shopping, and spending time with families. This is especially true of powered wheelchairs, which integrate sophisticated computers that allow wheelchairs to respond dynamically to their environment.

Anyone who’s ever dropped a cellphone or laptop knows that any gadget that travels with you around the world will eventually need repairs. This goes double for powered wheelchairs, not least because Medicare has adopted a narrow interpretation of its statutory obligations and will only pay for indoor chairs, even though their owners use them outdoors as well.

Any product that travels with you is likely to break, eventually. A product that is designed solely for indoor usage but gets used outdoors is even more at risk. But for powered wheelchair users, this situation is gravely worsened by an interlocking set of policies regarding repair and reimbursement that mean that when their chairs are broken, it can take months to get them repaired.

This has serious consequences. Wheelchairs are powerful tools that enable mobility and freedom. But broken wheelchairs can strand people at home—or even in bed, at risk of bedsores and other complications from immobilization—away from family, friends, school and work. Broken wheelchairs can also be dangerous for their users, leading to serious injuries.


Stranded is a new report from the Public Interest Research Group (PIRG), based on interviews with 141 wheelchair users about their experiences with mechanical and electrical failures in their powered chairs. 

The report documents the dismally frequent incidents of wheelchair failures (93% of respondents needed wheelchair service in the previous year, 68% needed two or more repairs), and the long service delays that wheelchair users must endure (62% waited four or more weeks for each repair; 40% waited seven or more weeks). 

Most importantly for addressing this untenable situation, the authors tease apart the many factors that lead to these lengthy service delays and endorse legislation—Colorado’s recently passed Consumer Right To Repair Powered Wheelchairs—as a means of bringing immediate, dramatic improvements to the lives of wheelchair users.

The Fix is Nixed 

Almost everything breaks eventually, and good product design isn’t merely a matter of making gadgets that don’t need frequent service—it’s also a matter of making gadgets easy to fix when they do break down.

Here, too, Medicare rules play a role. Medicare reimburses wheelchair vendors for parts and labor—but not for their technicians’ travel to examine, pick up, and return a wheelchair.

For wheelchair users with private insurance, repairs are delayed while they wait for their insurers to approve their repairs. 

All that means that repair is a money-losing proposition for large firms, so they underinvest in staff, training and facilities. 

But, as Stranded makes clear, manufacturers of Complex Rehabilitation Technology (CRT)—the formal classification for powered wheelchairs—have adopted repair-hostile tactics that make all of this much, much worse for wheelchair users.

Why Good Wheelchairs Go Bad

The PIRG report makes it clear that there are complex reasons why it’s so hard to get your wheelchair fixed—and also makes it clear that wheelchair users overwhelmingly support legislation that would let them get service at independent fix-it shops or fix their own chairs. Giving wheelchair users the right to repair won’t fix the structural problems with the industry, but it will fix their wheelchairs. That’s an important start.

So why is it so hard to fix wheelchairs? Writing for Kaiser Health News, Markian Hawryluk explains that the powered wheelchair industry is dominated by just two private equity-owned companies: Numotion and National Seating and Mobility, both of whom made deep cuts to their service budgets as part of their private equity owners’ plans to realize a profit on their investments.

But the wheelchair duopoly isn’t (just) a result of lax merger scrutiny and private equity buying-sprees. Medicare’s competitive bidding process “favors large companies that can achieve economies of scale in manufacturing and administrative costs, often at the price of quality and customer service.” To make things worse, Medicare doesn’t cover preventative maintenance, and will only replace chairs every five years. 

Let’s recap. Powered wheelchair users:

  • have to use chairs designed for indoor use even when they’re outside; 
  • ride in chairs made by low-bid contractors who skimp on quality; 
  • aren’t entitled to preventative maintenance; and
  • must make their chairs last for five years.

Small wonder that these chairs need a lot of service! 

Oh Great, There’s DRM in Wheelchairs Now

Wheelchair users don’t want to wait for repair, and so they often source their own parts and do their own repairs. When confronted with a choice between injury and immobilization or paying out of pocket for parts and tools, many wheelchair users feel they have no choice but to pay.

Home repairs that involve powered chairs’ electronic systems are a different matter. Not because electronics are more complex—but because manufacturers use “Digital Rights Management” (DRM): digital locks that are designed to block independent access.

DRM may be more familiar to you from music, ebooks, video games, and movies. While DRM has been around in various forms since at least 1979, it only came into its own with the passage of the Digital Millennium Copyright Act (DMCA) in 1998.

Section 1201 of the DMCA deals with DRM. It makes “trafficking” in a tool, or even information, that helps someone bypass an “access control” for a copyrighted work a felony punishable by up to five years in prison and up to $500,000 in fines. 

Most importantly, DMCA 1201 doesn’t limit itself to banning the bypassing of DRM in order to infringe copyright (for example, making thousands of copies of a DVD and selling them on the black market). That has allowed companies to use copyright law to criminalize businesses that have nothing to do with copyright. A company whose product includes DRM that prevents repair, maintenance, or improvement can use Section 1201 to attack anyone who engages in those activities, because removing the DRM is itself against the law. To be clear, the DMCA’s ban on bypassing DRM is unconstitutional, and it gets in the way of many activities beyond repair. That’s why we’re suing to overturn it.

But in the meantime, the DRM in wheelchairs prevents wheelchair users and independent technicians from diagnosing routine problems with the chairs’ electronics. It also stops wheelchair users from making routine adjustments to their wheelchairs, as when “a wheelchair user with a balky wheel or failing motor may need to adjust the power wheelchair’s speed damping setting, which is accomplished using the administrative software” or when “a wheelchair user who installs a different tire on their chair for navigating inclement weather may want to access administrative software features to adjust the chair’s grip parameters.”

Access to power wheelchairs’ electronic systems is often restricted to people with cryptographic security dongles, as well as passwords. As the PIRG report notes, “without a [hardware] key, the diagnostic tool [for chairs with Dynamix DX control systems] can display parameter values and diagnostic messages, but nothing can be edited or written to a power wheelchair’s controller.”

DRM also restricts powered wheelchair users’ access to settings that allow them to fine-tune their controls. Arthur Torrey describes how badly tuned delays between input devices and steering make controlling a wheelchair “like driving with bungee cords.” He also describes how these restrictions prevent wheelchair users from loosening speed and handling limits as their skill at operating their chairs grows. 

Parts is Parts

The CRT duopoly charges shocking markups on parts, extracting margins that put even the aerospace industry to shame. In Stranded, we read accounts like these:

  • “Had a flat tire. new (sp) innertube was $6 on Amazon. (National CRT supplier) Numotion wanted to replace both wheels at a cost of $300 to Medicaid and 6-8 weeks to get them. Got the innertubes in 2 days but they would not install them”
  • “Numotion took 4 months and charged $500 for a button that allows Bruce to power his wheelchair. Without it, he is stuck in bed. Got it overnight mailed from eBay for about $20”

 There are plenty of skilled technicians who can change a button or an inner-tube, including powered wheelchair users themselves. 

Robin Bouldoc discussed this with the PIRG researchers. Bouldoc’s husband has primary, progressive multiple sclerosis and uses a powered chair with a respirator and a device that allows him to control the chair using head movements.  She asked “Why can’t the local bicycle shop change the flat tire on our wheelchair?” 

Arthur Torrey is one of the other wheelchair users interviewed for the report. He is paraplegic, and feels confident that he can perform many routine repairs on his chair: “There’s nothing about manual wheelchairs or power wheelchairs that is that complex or difficult.” 

This was affirmed by wheelchair technicians interviewed for the report, who said that “most repairs to wheelchairs are straightforward and don’t require specialized skills or training, just a familiarity with mechanical devices.”

But despite this, the CRT companies refuse to ship parts to wheelchair users, blaming Medicare and Medicaid policies that refuse to reimburse them for parts that are sent to wheelchair users directly.

Right To Repair Wheelchairs

The Consumer Right To Repair Powered Wheelchairs Act (HB22-1031) has passed the Colorado legislature and Governor Jared Polis is expected to sign it into law. The legislative summary says it all:

The bill requires a manufacturer to provide parts, embedded software, firmware, tools, or documentation, such as diagnostic, maintenance, or repair manuals, diagrams, or similar information, to independent repair providers and owners of the manufacturer's powered wheelchairs to allow an independent repair provider or owner to conduct diagnostic, maintenance, or repair services on the owner's powered wheelchair. A manufacturer's failure to comply with the requirement is a deceptive trade practice. In complying with the requirement to provide these resources, a manufacturer need not divulge any trade secrets to independent repair providers and owners.

Any new contractual provision or other arrangement that a manufacturer enters into that would remove or limit the manufacturer's obligation to provide these resources to independent repair providers and owners is void and unenforceable.

While this is the first successful Right to Repair law for wheelchairs, it bears a striking similarity to dozens of Right to Repair laws introduced in state houses that sought to protect your right to fix your phone, laptop, appliances, car or tractor. 

These laws have faced stiff opposition from an axis of powerful, anti-repair corporate interests, from Big Ag to Big Tech. But the tide is turning: New York State just passed a Right to Repair bill for electronics.

Colorado’s wheelchair-specific Right to Repair bill has broken through. By forcing companies to bypass their own DRM on behalf of wheelchair users and independent repairers, the law sidesteps the DMCA’s prohibition on removing DRM. 

Making it easier for people who use powered wheelchairs to get them fixed won’t solve all the other problems with powered wheelchairs: it won’t solve the problem of being forced to use indoor chairs outdoors; it won’t solve the problem of a market concentrated in the hands of two companies that refuse to invest in repair; and it won’t solve Medicare’s refusal to replace chairs when they wear out.

But safeguarding repair will help people who rely on wheelchairs. Making it possible for wheelchair users and the technicians they trust to fix their chairs means that while the fight to fix everything else goes on, wheelchair users will still have functional chairs—and they’ll be freed from the cruel, bureaucratic nightmare of wheelchair repair monopolies, giving them time to fight for the deep structural changes the sector so desperately, obviously needs.

Cory Doctorow

Speech-Related Offenses Should be Excluded from the Proposed UN Cybercrime Treaty

Governments should protect people against cybercrime, and they should equally respect and protect people's human rights. However, across the world, governments routinely abuse cybercrime laws to crack down on human rights by criminalizing speech. Governments claim they must do so to combat disinformation, “religious, ethnic or sectarian hatred,” “rehabilitation of nazism,” or “the distribution of false information,” among other harms. But in practice they use these laws to suppress criticism and dissent, and to more broadly clamp down on the freedoms of expression and association.

So it is concerning that some UN Member States are proposing vague provisions to combat hate speech to a committee of government representatives (the Ad Hoc Committee) convened by the UN to negotiate a proposed UN Cybercrime treaty. These proposals could make it a cybercrime to humiliate a person or group, or insult a religion using a computer, even if such speech would be legal under international human rights law.

Including offenses based on harmful speech in the treaty, rather than focusing on core cybercrimes, will likely result in overbroad, easily abused laws that will sweep up lawful speech and pose an enormous menace to the free expression rights of people around the world. The UN committee should not make that mistake.

The UN Ad Hoc Committee met in Vienna earlier this month for a second round of talks on drafting the new treaty. Some Member States put forward, during and ahead of the session, vague proposals aimed at online hate speech, including Egypt, Jordan, Russia, Belarus, Burundi, China, Nicaragua, Tajikistan, Kuwait, Pakistan, Algeria, and Sudan. Others made proposals aimed at racist and xenophobic materials, including Algeria, Pakistan, Sudan, Burkina Faso, Burundi, India, Egypt, Tanzania, Jordan, Russia, Belarus, China, Nicaragua, and Tajikistan.

For example, Jordan proposes using the treaty to criminalize “hate speech or actions related to the insulting of religions or States using information networks or websites,” while Egypt calls for prohibiting the “spreading of strife, sedition, hatred or racism.” Russia, jointly with Belarus, Burundi, China, Nicaragua, and Tajikistan, also proposed to outlaw a wide range of vaguely defined speech intending to criminalize protected speech: “the distribution of materials that call for illegal acts motivated by political, ideological, social, racial, ethnic, or religious hatred or enmity, advocacy and justification of such actions, or to provide access to such materials, by means of ICT (information and communications technology),” as well as “humiliation by means of ICT (information and communications technology) of a person or group of people on account of their race, ethnicity, language, origin or religious affiliation.”

Speech Offenses Don't Belong in the Proposed Cybercrime Treaty

As we have previously said, only crimes that target ICTs should be included in the proposed treaty: offenses in which ICTs are the direct objects and instruments of the crime, and which could not exist without ICT systems. These include illegal access to computing systems, illegal interception of communications, data theft, and misuse of devices. Crimes in which ICTs are simply a tool sometimes used to commit an offense, like those in the proposals before the UN Ad Hoc Committee, should be excluded from the proposed treaty; such crimes merely involve or benefit from ICT systems incidentally, without targeting or harming them.

The Office of the United Nations High Commissioner for Human Rights (OHCHR) highlighted in January that any future cybercrime treaty should not include offenses based on the content of online expression:

“Cybercrime laws have been used to impose overly broad restrictions on free expression by criminalizing various online content such as extremism or hate speech.”

Further, harmful speech should not be included among cybercrimes because of the inherent difficulties in defining prohibited speech. Hate speech, the subject of several proposals, is an apt example of the dangers raised by including speech-related harms in a cybercrime treaty. 

Because we lack a universally agreed-upon definition of hate speech in international human rights law, the term “hate speech” is unhelpful in identifying permissible restrictions on speech. Hate speech can mean different things to different people and capture a broad range of expressions, including awful but lawful speech. Vague or overbroad laws criminalizing speech can lead to censorship of legitimate speech, both state-sanctioned and self-imposed, because internet users are left uncertain about what speech is disallowed. 

Hate speech is often conflated with hate crimes, a confusion that can be problematic when drafting an international treaty. Not all hate speech is a crime: restrictions on speech can come in the form of criminal, civil, administrative, policy, or self-regulatory measures. Although Article 20(2) of the UN International Covenant on Civil and Political Rights (ICCPR) makes clear that any “advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence” must be prohibited by law, prohibition does not necessarily equal criminalization. 

Indeed, criminal sanctions are measures of last resort, invoked only in the most extreme situations. As explained, only the “most severe types of hate speech that may appropriately attract criminal sanction include 'incitement to genocide,' and particularly severe forms of 'advocacy of discriminatory hatred that constitute an incitement to violence, hostility or discrimination.'” 

International law already provides sufficient guidance on speech that can be restricted as inciting hatred, and thus should not be included in the treaty. Additional and conflicting provisions regarding online hate speech in the Cybercrime Treaty are unnecessary and unwise. 

Broad Speech Protection and Very Narrow Limitations on Speech

At the heart of any limitations on the right to free expression must sit the Universal Declaration of Human Rights (UDHR) and the ICCPR, to which the UN Member States that are negotiating the new UN Cybercrime treaty are parties. Article 19 of the ICCPR provides broad protection of freedom of expression. It protects the right to seek, receive, and impart all kinds and forms of expression through any media of one’s choice. States may limit these rights in only very narrow circumstances.

Article 19(3) of the ICCPR lays down conditions any restriction on freedom of expression must meet, requiring that any limitation comply with the following test: it must be provided for by law (“legality”), designed to achieve a legitimate aim, be proportionate to that legitimate aim, and necessary for a democratic society. The UN Human Rights Committee’s General Comment 34 has established that these standards apply to online speech. Deeply offensive expression, blasphemy, defamation of religion, incitement to terrorism, and violent extremism are not categorically subject to permissible limitations. Any limitations on those categories of speech must, like most other categories of speech, satisfy the Article 19(3) test. 

Both the UN Special Rapporteur on Freedom of Expression and the Committee on the Elimination of Racial Discrimination (CERD) have underlined that speech prohibitions must satisfy the Article 19(3) test. Moreover, they must primarily be civil sanctions: criminal sanctions are measures of last resort, invoked only in the most extreme situations, such as instances of imminent violence. The UN Human Rights Committee’s General Comment 34 and CERD General Recommendation 35 also confirm that any limitations on speech must comply with the Article 19 test. 

Incitement to Discrimination, Hostility or Violence: The Standard

Although incitement is a category of speech that may permissibly be restricted, existing international law provides sufficient guidance on how States should respond to it; its inclusion in the Cybercrime Treaty is not needed and will only sow confusion.

As mentioned before, ICCPR Article 20 (2) requires Member States to prohibit the advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence based on the following categories: nationality, race, color, ethnicity, language, religion, national or social origin, political or other opinion, gender, sexual orientation, property, birth, disability, or other status. 

In its 2012 report, the UN Special Rapporteur developed a standard to assess Article 20 prohibitions that focuses on intent, incitement, and particular harm. First, the speaker must intend to publicly advocate and promote national, racial, or religious hatred towards the specific group. Next, the speech must “create an imminent risk of discrimination, hostility or violence” against the group members. Finally, incitement must aim at producing discrimination, hostility, or violence against the group.

To meet these standards at the national level, the Member States have the following obligations:

  • Adopt precise and unambiguous restrictions to combat advocacy of national, racial, or religious hatred that amounts to incitement to discrimination, hostility, or violence. Legal attempts to punish hate speech are often too vague or too broad. It is also often unclear whether States’ prohibitions against “advocacy of hatred that constitutes incitement” fall under ICCPR Article 20 or actually target legitimate speech. 
  • Only enact speech restrictions that have legitimate aims as prescribed under ICCPR Articles 19 and 20 or CERD Article 4. Legitimate aim principles include respecting the rights and reputation of others and protecting national security, public order, or public health or morals. Even here, the restrictions must be narrowly tailored. There must be a pressing or substantial need, and restrictions must not be overbroad—banning speech because it’s critical isn’t a legitimate aim. Further, the protection of morals, which reflect social or religious traditions, shouldn’t be based on the principles of a single tradition. Under ICCPR General Comment 34, blasphemy laws, speech restrictions that discriminate in favor of or against a certain religion, and prohibitions against criticism of religious leaders are not legitimate aims. 
  • Opt for measures that do not unnecessarily and disproportionately interfere with freedom of expression. When the Article 19(3) test is met, Member States must demonstrate that the speech in question poses an imminent threat of harm and must apply the least intrusive means of restricting speech to achieve a legitimate objective. In addition, the speaker's intent to cause harm must be examined.

This test sets a very high threshold, and many laws have failed to meet it. Myanmar’s hate speech law contained an unlawfully vague definition of the crime. Spain’s speech-related offenses did not sufficiently distinguish between the severity of the expression and its impact when determining proportionate sanctions that comply with Articles 20(2) and 19(3). France’s Avia law also attempted to tackle hateful content online but was declared unconstitutional. 

Spread of Disinformation

There is even less agreement on a universal definition of disinformation in international human rights law. Disinformation laws are too often vague and overbroad, capturing protected expression. As Human Rights Watch explained, “false” information can be hotly contested:

"The spread of disinformation that undermines human rights and online gender-based violence requires a government response. However, government responses to these human rights challenges that focus on the criminalization of content can also lead to disproportionate rights restrictions, particularly the right to freedom of expression and privacy."

All kinds of information and ideas are protected under ICCPR Article 19, even those that may “shock, offend, or disturb,” regardless of whether the content is true or false. People have the right to hold and express unsubstantiated views or share parodies or satirical expressions. As the UN Special Rapporteur on the freedom of expression noted, “prohibition of false information is not a legitimate aim under the international human rights law.”

The free flow of information is an integral part of freedom of expression, which is especially important in political speech on matters of public interest. While disinformation disseminated intentionally to cause social harm is problematic, the UN Special Rapporteur emphasized that so too are vague criminal laws that chill online speech and shrink civic space.

The 2017 Joint Declaration on Freedom of Expression and “Fake News,” Disinformation and Propaganda provides key principles under international human rights law to assist states, companies, journalists, and other stakeholders in addressing disinformation.  For example, Member States are encouraged to create an enabling environment for free expression, ensure that they disseminate reliable and trustworthy information, and adopt measures to promote media and digital literacy. 

In its Resolution 44/12, the UN Human Rights Council stated that responses to disinformation should always comply with legality, legitimacy, necessity, and proportionality principles. As with hate speech, vague prohibitions on disinformation will rarely meet the legality standard. For example, the Joint Statement of the UN Special Rapporteur, the OSCE Representative on Freedom of Media, and the IACHR Special Rapporteur for Freedom of Expression sounded the alarm about the rise of overbroad “fake news” bills in the context of the COVID-19 pandemic.  (Human Rights Watch documented the application of these laws, and EFF expressed its concerns on these bills, too). 

On the specific topic of electoral disinformation, the UN Special Rapporteur has said that electoral laws prohibiting the propagation of falsehoods in the electoral process may meet the Article 19(3) test. Additionally, such restrictions should be “narrowly construed, time-limited, and tailored to avoid limiting political debate.”

Despite these cautions, numerous proposals presented to the UN Ad Hoc Committee would create new cybercrimes of disinformation. Tanzania proposed to outlaw the “publication of false information.” Jordan suggested including the “dissemination of rumors or false news through information systems, networks or websites.” Russia, jointly with Belarus, Burundi, China, Nicaragua, and Tajikistan, called for prohibiting “the intentional illegal creation and use of digital information capable of being mistaken for information already known and trusted by a user, causing substantial harm.”

Once again, these vague provisions are unlikely to satisfy human rights standards. Their practical interpretation and application will adversely affect fundamental rights and result in more harm than good.

The Way Forward—Exclude Offenses Based on the Content Of Online Expression

EFF joins its partners, including Article 19, AccessNow, Priva, and Human Rights Watch, in urging the UN Member States to exclude content-related offenses from the proposed UN Cybercrime Treaty. In a letter to the UN Ad Hoc Committee, EFF and more than 130 civil society groups warned that cybercrime laws have already been weaponized to target journalists, whistleblowers, political dissidents, security researchers, LGBTQ communities, and human rights defenders.  Member States don’t have any room for error when drafting a global treaty. They should find consensus to exclude speech-related offenses from the UN Cybercrime treaty. 


Meri Baghdasaryan

EFF to Inter-American Court of Human Rights: Colombia’s Surveillance of Human Rights-Defending Lawyers Group Violated International Law

3 weeks 2 days ago

EFF, Article 19, Fundación Karisma, and Privacy International, represented by Berkeley Law’s International Human Rights Law Clinic, urged the Inter-American Court of Human Rights to rule that Colombia’s existing legal framework regulating intelligence activities, and the unlawful and arbitrary surveillance of members of the Jose Alvear Restrepo Lawyers Collective (CAJAR) and their families, violated a constellation of human rights, including the rights to privacy, freedom of expression, and association. That surveillance forced CAJAR members to limit their activities, change homes, and go into exile to avoid violence, threats, and harassment.

Members of CAJAR, a Colombian human rights organization defending victims of political persecution, indigenous people, and activists for over 40 years, have had their communications intercepted by Colombian intelligence agencies and faced ongoing threats and intimidation since the 1990s, EFF and its partners said in an amicus brief submitted to the court in CAJAR’s lawsuit against the Colombian state. Since at least 1999, Colombian authorities have subjected CAJAR members to constant, pervasive secret surveillance on every facet of their professional and personal lives, including their locations, activities, finances, travel, contacts, clients, and protection measures.

The brief demonstrates that Colombia's intelligence law and unlawful communication surveillance practices violate the right to privacy and other human rights under the American Convention on Human Rights. The brief also provides evidence of the range of targeted and mass surveillance tools employed by the state. In short:

“While international law permits targeted surveillance in limited circumstances and with strict safeguards, mass surveillance is an inherently disproportionate interference with the international human rights to privacy.  But in the last few years, law enforcement and intelligence services in Colombia have purchased tools to expand their pervasive spying network and capture large amounts of communication data."

As the brief explains, Colombia employs both targeted and mass surveillance tools. Colombian authorities collect, monitor, and intercept, in real-time, individual audio and data communications from mobile and landline phones. Intelligence authorities intercept communications data without prior authorization or judicial oversight, with direct access to communication networks, despite the fact that Colombian law does not authorize any agency to engage in communications interception outside the confines of criminal investigations and without judicial oversight. Colombian intelligence services also have conducted intrusive operations exploiting software, data, computer systems, or networks to gain access to user information and devices.

This case presents an unprecedented opportunity for the court to examine whether Colombia’s intelligence surveillance practices and legal regime comport with the American Convention, said the brief, filed on May 24. If it finds violations, the court can establish measures to be taken by Colombia to strengthen safeguards against government surveillance. These include requiring prior judicial authorization, effective independent oversight, and transparency measures, like notifying people targeted by surveillance to ensure effective remedies in case of abuse.

Testimony during the hearing presenting the case to the court in May has also provided clear indications of abusive government surveillance practices. An expert witness defending Colombia's Intelligence Law's compliance with human rights standards explained that the monitoring of electromagnetic spectrum–a surveillance measure authorized by the law–could entail monitoring conversations within an entire city zone (public hearing at 2:28:00, in Spanish). He mentioned the random nature of the monitoring, which is not targeted to specific persons, as a positive feature and justification for not requiring prior judicial authorization. However, such random, dragnet surveillance can’t be deemed compatible with the American Convention’s necessity and proportionality standards.

The expert witness also said the use of malicious software for intelligence activities is regulated in Colombia “in the sense it is channeled within the monitoring tasks,” as per the Intelligence Law (public hearing at 2:00:13, in Spanish). Yet, the law does not contain any particular authorization for the use of malware technology. This means that, aside from the proportionality concerns the use of malware raises, Colombia’s Intelligence Law does not clearly and precisely authorize the state to employ this type of technology–which the principle of legality requires under international human rights law.

Colombia has argued that its Intelligence Law, approved in 2013 following media reports unveiling wrongful surveillance by intelligence agencies targeting human rights defenders and journalists, establishes specific circumstances under which intelligence activities can be authorized. The government claimed the law ensures that any intelligence action conforms to the principles of legality, proportionality and necessity, and provides human rights safeguards at various levels.

Yet, the brief explains that Colombia's intelligence legal framework has enabled abusive surveillance practices in violation of the American Convention and has not prevented authorities from unlawfully surveilling, harassing, and attacking CAJAR members. Even after Colombia enacted the new law, authorities continued to carry out unlawful communications surveillance against CAJAR members, using an expansive and invasive spying system to target and disrupt the work of human rights defenders, journalists, and others.

The brief builds on the most protective understanding of international human rights law and standards held by international courts and human rights bodies. It urges the Inter-American Court to establish rigorous human rights protections that limit state surveillance and prevent future violations. With this case, the court has a crucial opportunity to ensure that the American Convention's safeguards are applied to, and serve as a check on, unparalleled state surveillance powers employed in the digital age.  

Karen Gullo

Axon Must Not Arm Drones with Tasers

3 weeks 3 days ago

Taser and surveillance vendor Axon has proposed what it claims to be the solution to the epidemic of school shootings in the United States: a remote-controlled flying drone armed with a taser. For many reasons, this is a dangerous idea. Armed drones would mission-creep their way into everyday policing. We must oppose a process of normalizing the arming of drones and robots.

EFF has stated strongly before that drones and robots, whether they be autonomous or remote-controlled, should not be armed–either with lethal or “less-lethal” weapons. And we’re far from the only group to do so.

Police currently deploy many different kinds of moving and task-performing technologies. These include flying drones, remote control bomb-defusing robots, and autonomous patrol robots. While these different devices serve different functions and operate differently, none of them—absolutely none—should be armed with any kind of weapon.

Mission creep is very real. Time and time again, technologies given to police to use only in the most extreme circumstances make their way onto streets during protests or to respond to petty crime. For example, cell site simulators (often called “Stingrays”) were developed for use on foreign battlefields, brought home in the name of fighting “terrorism,” then used by law enforcement to catch immigrants and a man who stole $57 worth of food. Likewise, police have targeted BLM protesters with face surveillance and Amazon Ring doorbell cameras.

We cannot state this strongly enough: if police get their hands on taser drones, they will not sit in a warehouse until the next emergency mass shooting situation. History has proven this. We will see them flying over protests and shopping districts. We will hear news stories about police using a drone to taser someone for vandalism, petty theft, or fleeing the drone. It will not be a matter of if, but when.

Police will be more likely to use this kind of force if the entire process feels like a video game—if they can send tens of thousands of volts through a person’s body with the push of a button far removed from that person. And the person at risk of being tased might not hear the drone’s commands or may be confused by the presence of a floating robot.

Police use of tasers has killed over 500 people since 2010, according to a study on the lethality of the technology done by USA Today in 2021.

The Axon Ethics Board has voted firmly against Axon moving forward with this project. 

Armed drones are just one part of a multi-part strategy from Axon for selling products that they claim might curb mass incidents of gun violence. Axon has also announced a partnership with Fusus, a company that specializes in consolidating private security camera feeds and giving police live access. EFF raised concerns in 2020 when police in Jackson, Mississippi, announced a pilot program with Fusus to get live access to video streams from private cameras, ranging from commercial security cameras to residents’ private Ring doorbell cameras.

This would create, with the permission of the camera owners but not the people who walk by them every day, a massive surveillance network. Like drones, it will ultimately be used by police much more often than in rare critical emergency situations.

We’ve seen this before. Time and time again, police conjure the extreme worst-case-scenario threat in order to deploy extraordinary powers, which end up being used in everyday acts of policing that disproportionately affect the lives of people of color, immigrants, and other vulnerable members of society. That is why we demand that armed police drones never see the light of day.

Axon has announced a Reddit AMA on Friday, June 3, 2022, about these new products. Use it to raise your concerns. You could ask questions like: How do you feel about making a dangerous technology that will likely be used in far less dire scenarios than your preferred use case? Why has your ethics advisory board condemned the project? Will that stop you?

Disclosure: EFF's Surveillance Litigation Director Jennifer Lynch serves on the Axon AI Ethics Board in her personal capacity.

Matthew Guariglia

San Francisco Police Nailed for Violating Public Records Laws Regarding Face Recognition and Fusion Center Documents

3 weeks 3 days ago

By unanimous vote, San Francisco's public records appeals body ruled last night that the San Francisco Police Department (SFPD) violated state and local laws when it failed to respond adequately to EFF's requests for documents about face recognition and the department's relationship with the Northern California Regional Intelligence Center (NCRIC), the Bay Area's fusion center.

The Sunshine Ordinance Task Force further ordered SFPD to conduct a fresh search for records and respond point-by-point to EFF's original records request within 5 days or potentially face sanctions.

In the summer of 2019, San Francisco became the first major U.S. city to ban government use of face recognition, a technology that extracts information about a person's face and compares that data to a database of images in order to establish identity. SFPD's compliance with the ban came into question in September 2020 when the San Francisco Chronicle reported that SFPD had circulated a bulletin containing a surveillance camera image of a suspect, and in response, NCRIC staff used face recognition on the image and forwarded the results to SFPD. EFF and other organizations were concerned that this practice could constitute an end-run around the ban.

EFF followed up by submitting a public records request to SFPD under the San Francisco Sunshine Ordinance asking for 11 different categories of documents, ranging from the original bulletins and correspondence regarding the case, to general discussions of face recognition between SFPD and external parties, to all documents that establish the relationship between NCRIC and SFPD.

Despite seeking a time extension, SFPD provided only one document: an email statement to reporters regarding the incident in the Chronicle article. SFPD claimed some of the records were wholesale exempt because they were investigative. For the remaining items, SFPD claimed it could not locate any records, such as standard agreements that govern SFPD's formal partnership with the fusion center.

EFF filed a formal complaint with the Task Force and only then did SFPD "re-evaluate" its response. It then provided 20 pages of previously unreleased documents, including the bulletin and emails it received from other agencies that reviewed the image. The other documents remained elusive.

The gears of oversight often move slowly, and so it took a year and a half for EFF's complaint to reach a full hearing before the Task Force.

During the hearing, EFF testified that SFPD only provided records after the agency faced a formal complaint. However, we also highlighted the missing documents, such as the fusion center agreements. As we told the task force: "It's difficult to understand how SFPD could not find any information considering the city has two members of SFPD's special investigations unit assigned to NCRIC and Chief [William] Scott is chair of the NCRIC executive board." If SFPD truly has no such records, then it would be irresponsibly engaged in an intelligence and data-sharing partnership without any record of the rules, limitations, and obligations of the agencies involved. We also raised skepticism about SFPD's claim that it could not find a single email discussing face recognition technology in the year and a half since the ban took effect.

SFPD stood by its determination and provided no further insight into its decisions. Unpersuaded, the Task Force found that SFPD had violated Sections 6253(b) and (c) of the California Public Records Act for failing to provide records in a timely manner, as well as 67.27(d) and 67.26 of the Sunshine Ordinance for failing to keep withholding of information to a minimum and failing to justify such withholding.

In addition, pursuant to section 67.21(e) of the ordinance, the Task Force ordered SFPD to comply with the remainder of our request within 5 days. Under the law, if the agency fails to comply, the Task Force must report the violation to the district attorney or attorney general.

Initially, the Task Force voted to refer the matter to the San Francisco Ethics Commission for investigation of "willful failure" to comply with the law, a form of official misconduct. However, they rescinded that vote because it was unclear which official at SFPD should be named in the referral. Task Force members indicated that this option would remain available should SFPD fail to comply with its order.

No agency, and especially not the police department, should require a member of the public to file a complaint before they provide information to the public. And no agency should get away with claiming it can't find records that plainly should exist. As we said during the hearing, a transparent and accountable government can only function if these errors are confirmed and documented by an independent body. We applaud the Task Force for its ruling and look forward to seeing what, if any, records SFPD produces next week.

And if SFPD doesn't produce these records, the Task Force should escalate the case for further investigation or prosecution. 

Dave Maass

New York: Tell Your Assemblymembers to Pass This Landmark Repair Bill

3 weeks 4 days ago

New York’s legislature has the chance to make history and stand up for users’ rights by passing the Digital Fair Repair Act. Assemblymember Patricia Fahy’s bill, A7006-B, would require companies to give people access to what they need to fix their stuff by selling spare parts and special tools on fair and reasonable terms. It would also give all customers and third-party repair technicians access to repair information and software, including the ability to apply firmware patches.

Take Action

New York: Speak up for Your Right to Repair

Asm. Fahy and the Repair Coalition have worked hard to stand up for users’ rights and stand strong for a bill that would be a landmark piece of legislation.

New York’s bill is poised to be the first broad right-to-repair bill to make it into law. While Colorado’s legislature rightly passed a narrow right-to-repair bill focused on wheelchair repairs, many of this year’s proposals have fallen in the face of strong opposition from industry trade groups. In California, for example, a general right-to-repair bill passed through the Judiciary Committee only to be stopped in the Senate Appropriations committee—without a public hearing. It was opposed by groups such as TechNet and the Telecommunications Industry Association. 

Big companies do not want independent repairers to have access to these repair materials because they can charge higher prices for parts and repairs if they hold a monopoly, or even force customers to buy brand-new devices by making repair impossible. But that’s bad for consumers: without competition, we end up paying more to repair or replace devices, getting worse service, and sending more devices to landfills.

Establishing a right to repair in New York makes it easier for people to fix their broken devices, helps independent businesses, and helps the environment. New York’s legislature only has a couple of days to act on this important bill.

Tell your Assemblymember that the right to repair is important to you and urge them to vote “Yes.”

Take Action

New York: Speak up for Your Right to Repair

Hayley Tsukayama

Hearing Wednesday: EFF Testifies Against SFPD for Violating Transparency Laws

3 weeks 4 days ago
Police Department Withheld Documents About Use of Face Recognition

SAN FRANCISCO – On Wednesday, June 1, at 5 p.m. PT, the Electronic Frontier Foundation (EFF) will testify against the San Francisco Police Department (SFPD) at the city’s Sunshine Ordinance Task Force meeting. EFF filed a complaint against the SFPD for withholding records about a controversial investigation involving the use of facial recognition.

In September 2020, SFPD arrested a man who was suspected of illegally discharging a gun, and a San Francisco Chronicle report raised concerns that the arrest came after a local fusion center ran the man’s photo through a face-recognition database. The report called into question SFPD’s role in the search, particularly because the city’s Surveillance Technology Ordinance, enacted in 2019, made San Francisco the first city in the country to ban government use of face-recognition technology.

EFF filed a public records request with the SFPD in December 2020 about the investigation and the arrest, but the department released only previously available public statements. EFF filed a complaint with the Sunshine Ordinance Task Force over SFPD’s misleading release of records, after which SFPD produced about 20 pages of relevant documents.

At Wednesday’s hearing, EFF Director of Investigations Dave Maass will ask the task force to uphold EFF’s complaint about the SFPD, arguing that San Francisco’s transparency policies won’t work well unless public agencies are held to account when trying to skirt their responsibilities.

San Francisco Sunshine Ordinance Task Force hearing

Dave Maass
EFF Director of Investigations

Wednesday, June 1
5:00 pm PT

Password: sunshine

For EFF’s original public records demand to SFPD:

For EFF’s complaint to the Sunshine Ordinance Task Force:

For more information on the hearing:


Contact: Dave Maass, Director of Investigations
Josh Richman

Community Activists Reach Settlement With Marin County Sheriff for Unlawfully Sharing Drivers’ Locations with Out-Of-State and Federal Agencies

3 weeks 4 days ago
Activists and Civil Rights Advocates Say Sheriff’s Sharing Practices Threatened Safety of Marginalized Groups

SAN FRANCISCO—Community activists in Northern California today announced a settlement in their lawsuit against the County of Marin and Marin County Sheriff Robert Doyle, whose office illegally made the license plate and location information of local drivers, captured by a network of surveillance cameras, available to hundreds of federal and out-of-state agencies, including Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).

Under the settlement, Sheriff Doyle has agreed to stop sharing license plate and location information with agencies outside of California to comply with state laws S.B. 34 and the California Values Act (S.B. 54). This means that federal and out-of-state agencies will no longer be able to query information collected by the county’s automated license plate reader (ALPR) cameras—a form of mass surveillance technology that is a threat to privacy and civil liberties, particularly for marginalized groups. The settlement is binding for any of Sheriff Doyle’s successors.

ALPR cameras scan tens of thousands of passing cars every month, recording their license plate number, date, time, and location. This information can be used to identify and track people, revealing where they live and work, when they visit friends or drop off their kids at school, and when and where they attend religious services or protests. When shared with ICE and CBP, the data facilitates the tracking, deportation, and incarceration of immigrant communities.

“This settlement is a victory for disfavored and marginalized people, including immigrants, who historically have been subjected to civil rights abuses through invasive surveillance by police,” said Vasudha Talla, Immigrants’ Rights Program Director at the ACLU Foundation of Northern California.

“It comes at an especially important time for civil liberties in California, which stands to become a refuge for marginalized groups, such as people seeking abortions or gender-affirming care, who find that their identities and rights are under attack in other states,” Talla explained. “Invasive and harmful surveillance should have no place in California communities, and this settlement is a step toward eliminating its harms.”

Longtime Marin community members Lisa Bennett, Cesar S. Lagleva, and Tara Evans filed the suit in Marin County Superior Court on Oct. 14, 2021, seeking to end the sheriff’s ALPR data-sharing practices.

“While we are glad to have achieved the core goal of our lawsuit, we remain concerned that Sheriff Doyle violated these state laws for so long and with so little transparency,” said Bennett. “In light of this violation of public trust, we are calling on the Marin County Board of Supervisors to establish an oversight body to ensure continued accountability.”

“We are pleased that Sheriff Doyle has agreed that California law prohibits the sharing of ALPR data with entities outside of California,” said Saira Hussain, Staff Attorney with the Electronic Frontier Foundation. “This logic applies to other agencies throughout the state. They should follow Marin County’s example.”

The plaintiffs are represented by the ACLU Foundations of Northern California, Southern California, and San Diego & Imperial Counties, the Electronic Frontier Foundation, and attorney Michael T. Risher.

For the agreement:

For more on this case:

Karen Gullo

Podcast Episode: Wordle and the Web We Need

3 weeks 5 days ago

Where is the internet we were promised? It feels like we’re dominated by megalithic, siloed platforms where users have little or no say over how their data is used and little recourse if they disagree, where direct interaction with users is seen as a bug to be fixed, and where art and creativity are just “content generation.”

But take a peek beyond those platforms and you can still find a thriving internet of millions who are empowered to control their own technology, art, and lives. Anil Dash, CEO of Glitch and an EFF board member, says this is where we start reclaiming the internet for individual agency, control, creativity, and connection to culture, especially among society’s most vulnerable and marginalized members.

Dash speaks with EFF's Cindy Cohn and Danny O’Brien about building more humane and inclusive technology, and leveraging love of art and culture into grassroots movements for an internet that truly belongs to us all.


This episode is also available on the Internet Archive.

In this episode you’ll learn about:

  • What past and current social justice movements can teach us about reclaiming the internet
  • The importance of clearly understanding and describing what we want—and don’t want—from technology
  • Energizing people in artistic and fandom communities to become activists for better technology
  • Tech workers’ potential power over what their employers do
  • How Wordle might be a window into a healthier web

Anil Dash is CEO of Glitch, the friendly developer community where coders collaborate to create and share millions of web apps, and a longtime entrepreneur and writer focused upon how technology can transform society, media, the arts, government, and culture. In addition to serving on EFF’s board, he serves on the boards of The Markup, a nonprofit investigative newsroom pushing for tech accountability; Data & Society Research Institute, which researches the cutting edge of tech's impact on society; and the Lower East Side Girls Club, which serves girls and families in need in New York City. He was an advisor to the Obama White House’s Office of Digital Strategy, served for a decade on the board of Stack Overflow, the world’s largest community for coders, and today advises key startups and non-profits including Medium, The Human Utility, DonorsChoose and Project Include.


Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. 

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 


Open Source and Free Software

Digital Rights and Individual Empowerment

Anonymity and Pseudonymity

Data Ownership

Apple and Encryption

Digital Surveillance Education

Intellectual Property and Remixes


Anil Dash: So Josh Wardle made a word game for his partner because she wanted to be able to play this word game every day. And at an architectural level, it is radical because it is simple and it is a throwback to an internet that many people have forgotten about.

What happened after that was the Glitch community took the idea and ran with it. They made remixes. 

And so people have made, I mean the last count was well over a thousand remixes of Wordle on Glitch and that's sort of branched off into all these different worlds now. 

And what's been most amazing for me to see is the majority of remixes we've seen on Glitch have been from K-pop fans. So we have a huge community of mostly teenage girls who love K-pop, Korean pop music, which is global pop music now.

And so that's pretty remarkable that we have on an average afternoon, a handful of new apps made by young women, will pop up about the groups that they like and then people play it, they share their scores. The key takeaway here is pop culture tied directly to broad individual creation of independent websites that all run on their own addresses, created by individual people with no surveillance, no tracking, no connection to any of the big silos, completely open source, the ability to take it and actually run it somewhere else. The web that we are told we are fighting for exists every day and millions of people are participating in it.


That's Anil Dash. He's the CEO of Glitch and he's also on the EFF's Board of Directors. Anil thinks a lot about many things, but today we're going to focus on how we can build a grassroots movement to support better technology.


Anil is going to tell us what we need to do to make tech be more relevant and to build involvement beyond the converted.


I'm Cindy Cohn, EFF's executive director.


And I'm Danny O’Brien special advisor to EFF. Welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation.


Hi Anil, we're just delighted that you agreed to join us.


Hello. I'm so glad to be here.


Tell me the problem as you see it. Why is tech not reaching out in the way that it ought to galvanize movements, to get us to a better internet?


That's a big question, but I think about first learning from other movements. So we're in an amazing moment, for example of people fighting for a higher minimum wage in the US.




They call it the Fight for $15 and it's very clear. You know if you got paid 15 bucks an hour or not and you know what you're asking for. And I think about movements like that are so galvanizing because of their clarity, you'll know when you win.  And if somebody says this big tech company did something wrong about privacy. Okay. Yeah, I believe that. That's easy to believe. And they said, "What do we want?"   And if we don't know what it would take to win, then do we have a movement at all? I think that's the challenge we have, because as soon as we have that clarity, we won't have to do anything to galvanize many more people joining in and wanting to be part of it.


It is a hard task and it is something that someone has to explicitly do. I guess the question maybe we should all be talking about is, how do you even begin to think about that? What is better tech? What is a better internet?


I think one of the most important things is about knowing that this is not the beginning of a moment. I don't want to obsess too much about the Fight for $15, but I think it's a great example. There's a couple centuries of prior art there to learn from about what constitutes a fair wage for people. And so if you look at this moment one of the reasons we have as many challenges as we do in tech is there's almost this insistence that nothing came before. We're the beginning of history all the time.




And I have enough gray in my beard and everybody has enough mileage on their sneakers to know this isn't the first time we've had these issues before. In many cases it's not the first time the people who are making these mistakes have made these mistakes. This is not the first time anybody's thought about these problems and what could we learn about what did and didn't work in the past?


I want to flip this around just a little bit. So tell us your vision. What if we get this all right? And then maybe we can back into our slogan a little bit.


That's the hard part, isn't it? It's like I know what I don't like, it's almost the inverse of, I'll know it when I'll see it. There's very obviously this trickle down architecture. First we'll help the people that have the fastest phones, the fastest connections, the most wealth, the most security, the most stability and then everybody else will benefit later. And that part always gets overlooked. So I think there's an orientation to service, an orientation to alignment to who you're serving that is a North star, so that's one part. It sounds nebulous, but it's far more falsifiable. It's far more testable than you might imagine.

I think very often you can say, can this technology be used by somebody who's unhoused? Can this technology be used by somebody who's unbanked? Can this technology be used by somebody with an intermittent connection? Can this technology be used by somebody who, if they revealed some aspect of their identity to their employer, to their government would be in danger? A very short list. We can just rattle off five of them and we can probably come up with five more and cover a couple billion people that we care about. That list lives in my head because I do product work every day, I'm lucky I get to work on a product with a team that cares about that. But I think that's got to be the North star, at least you're making informed decisions there, but I don't think that's the process for much of especially Silicon Valley, but in general.


One of the parts we struggle with as a movement, as much as anything is, what do you call that relationship?




So there's customer. There's user. Or consumer, another one that I feel really uncomfortable with.


That's grim.


That's a really grim one.


Yeah. Because it's so hard to explain because you go, you're not just consuming. You're not just this spigot that receives these things.


Yeah. There's a dehumanizing rhetoric. I think the language reveals the thought. This happened when... I used to say I was a writer, I write on the internet. And now the default stance is that people generate content.




When the phrase user generated content came out, it was such an absurd framing that I think those of us who were users who were generating content were like, this sounds like a parody. It's nothing you would ever say to a person. And now I talk to young people who introduce themselves to me as content creators.





And I'm the old guy talking about the 20th-century idea of don't sell out, you're not going to license your song for a commercial, that stuff. That's very anachronistic; I'm aware that time has passed for that moment for a lot of people. But the idea that you would willingly describe something that comes from your heart and soul as content, that's extremist framing. It's actually very radical framing. And I think we forget how radical the thought processes have gotten for lots of folks. The Overton window has shifted so radically that the language is only evidence of it having happened. And so I keep coming back to that: we concede so much just by how we even talk about things.


So who are we? What should we be?


I struggle with this a lot because I help run Glitch, and it's a company where people make apps, but we talk a lot about having a community. And I'm struck because I think everybody does. Every platform I've read about in the last 20 years says we have a community, a community of users, we have all these things. And it could be the most hostile place in the world. It can be spying on its users. It can be deleting their data. It can be undermining their jobs. And it's like, "We got a community."


Yeah. When Zuckerberg calls Facebook a community, it just [inaudible 00:13:36].


These words don't mean anything, and I struggle because I do mean it. And I take it very seriously, in the same way that I take attending community board meetings in my neighborhood really seriously, because it's a community. And so the city council members roll their eyes when they see me show up, because they know I'm going to be there. And that's what community is: enough to be exasperating to the people in charge. And that's what I tell the team all the time. We're in charge. We should be exasperated. That's the definition of being a community. That's not happening at 99% of the organizations that talk about having communities. So we don't have a word for what's right.


It is strange to have people defining themselves from the perspective of the platform when they're the artist. 

That’s right. 


You're a content creator only from the perspective of the people who built the platform that you're creating the content for. That's whose perspective it is. For many people, they're an artist.


I think of a pop culture example like Zack Snyder. He makes a superhero film, and then they don't put it out the right way, and then he goes to the studios and he's like, "You're going to put out my version," and they spend whatever, they spend a hundred million dollars to do it. And he's not saying, I'm a content creator for HBO. He's not saying that; that's not what he's doing. Steven Spielberg is not a content creator for a movie studio. And I think there's that sense of owning your work and knowing what you are in the world that is denied to people who are sort of born into that framing. You take somebody like Beyoncé, she is not a content creator. She's like, "I know what I'm worth. I'll be on Netflix, I'll be on Spotify. I'll be wherever you need me to be, but you're going to pay me. You're going to give me control. And I'm an artist." Inarguably. And I don't think you're going to have most of Generation Z coming up with that sense of self-agency, control, and ownership, even though that's what the internet promised them. When we were all excited about the web in its early days, it was like everybody's going to own their own work and they're going to have this control.


There are some things where people go, "Oh, we'll start a union, that will develop a relationship between us and the company that we work for." And so we see a renewal in that as people try and work out a way of managing that power relationship.




It's hard to imagine a union of users, but maybe we could. How can people regain or have that control over the technology that seems to dominate their lives?


One, I wouldn't concede that at all. I think absolutely users can organize and say, we want to own and control our data, and we already have examples of this. I think of the Delete Uber hashtag trending on Twitter. That was material enough that it got disclosed in the company's next quarterly reports and in its IPO filing. And that was a couple of folks who started it, and it caught momentum and had real impact on a multi-billion dollar company. There's also been media-based lobbying about the algorithms of Facebook and companies like that.

I think the biggest thing, obviously, is regulation, that's the one people talk about. But we've let go, at least in most of the Western world, of public shame as a mechanism; we're in a post-shame world. But there are still people in Silicon Valley who don't want to be ashamed, and people who work in tech who don't want to be ashamed. I think the workers are a really great place to go. I think telling people, "Do you want to be complicit in what this company is doing? You have other options." Because tech workers are the most in-demand workers in the world. I think those are points of leverage that have not been explored at all in terms of driving change.


We recently got Apple to back down on some of its breaking of end-to-end encryption. We did some high profile things like flying a banner over Apple when they were having a big meeting and things like that.


You got to have fun.


You should have fun. I don't know, if you can't dance, I don't want to be part of your revolution. The people inside Apple, the workers were the ones who were really pressuring to make a change in course. And I feel like we've tried many times to talk to tech workers about things, and I feel like we just need to keep working until we get it right. 



And I feel like we haven't quite figured that out and if you've got ideas, of course I'm all ears.


I don't know the answer, but I know the traits of the answer. I think one of the things is to say a movement has to have a name and an identity, enough that people know they're part of it. If you are a worker at a company that is building a product that is not going to respect people's privacy, but you're a good actor and you'd like to fix that, how do you throw up the bat signal? What hashtag do you use? If you wanted to galvanize everybody in the world who cares about this, there should be a signal you can throw up that is going to galvanize the people who care about it and are aligned with you. And that doesn't mean everybody's going to agree. It just means that you're going to be able to get the eyes of the people who do agree with you. And yet what I find across the board, if I say to somebody, "What would be acceptable to you? What's a technology that would be okay for you?" For me, my list is: I want environmental concerns addressed. I want ethical concerns addressed. I want user privacy concerns addressed. I have a list, right? But that's not where the conversation is.


I think one of the things is that it's very easy to fall into just two responses. One is to be extremely excited about the technology, and the other is to say, I don't want this and I want it to go away. And neither of those things is going to play out that way. It is not going to usher in a new utopia, but it is not going to go away either. And so what you have to do is work out a way of steering it, sort of riding this strange new force in the direction that you want it to go.


We can't evaluate whether it meets our ethical, moral, social, cultural standards, unless we know what our standards are.




And it's so interesting because techies love technical conformance. We love to be like, "This is in compliance with the spec". And then we sort of say, "Okay, but what's our spec for the world?" We don't have one.



So what I hear you saying, which I think is really right, is that we need to ask ourselves: how is the technology that we're building going to impact the most marginal people in society? If that question has been asked and answered in a good way, that's part of our better world. What else? What else in our better world do you see?


It's such a tricky question, I think there's a lot of aspects around agency and ownership. One of the things we can sort of picture very easily is, "Can I take my ball and go home?"




I think that's just one of those fundamentals, and it's a very old-fashioned idea. It seems to be coming back into vogue with a lot of the Web3 enthusiasm. It's funny, because in the early days of Web 2.0 the hype was "data is the new Intel Inside," and there was this sort of trust-based idea that you'd be able to export your data or something. It's like, everybody will be nice to you. And it turns out actually that doesn't work at all, and everybody just got fully locked into every platform that they use. So that, to me, feels like a thing that's wide open to be reinvented.


And Facebook even let you download stuff, but it's a bunch of gibberish, right.


Right. That's exactly the example I was going to say is, I get this giant... Mine is like a gigabyte plus of like my archive. And I'm like, "What am I going to do with this? Take it to some other app?" Like this doesn't have any use in the world. And so it became this sort of fig leaf, just like open source was often used as like, "We're not evil because we're open source." I think a lot of that happened with, "You got your data now, aren't you happy?" And it was the letter, but not the spirit of openness.


“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

One of the backdrops of everything we're talking about is that we have these big, massive postindustrial combines like Facebook and Google, and we have to lobby them, or regulate them, or shame them, or motivate the people within them who have power to change them. But that really wasn't the dream originally with the internet. The internet was going to be this thing that would empower people to actually take control of their own lives, take control of their own technology. Do you think that's still a possibility? Is there still a role that technology can play, rather than political action, to empower people?


The ideal is still possible. The challenge is the society around the ideal. The architecture of the internet is such that we were all supposed to have our own domains and run our own email and our own website on them and the like. And that ain't how it played out. It can never be solely technological, but you can absolutely imagine technology as part of that solution. We can certainly imagine the combination of technology and regulation mandating some degree of interoperability on such things. But I think there actually is one group that is really culpable in this not having more users, which is the people who care deeply about these issues and have the technical skill to build alternatives. They are generally so invested in being gatekeepers of their own technical mastery that they don't make those alternatives usable for normal people. And so there's this interesting thing where it's like, "I look down on having all my data in one of these giant silos from the big companies, but I also look down on users who don't know how to run their own server on the internet."


This comes back to this thing of, you're still a long way from the people who are most vulnerable and most in need of this technology.


That's right.

There's a camp that believes in only the Tech Titans having control, and then a camp that believes in nominally individual users having control. But in real practice, neither of them is very good about caring for the most vulnerable anyway.


You work on Glitch, and Glitch, for those of you who don't know, is a kind of system for helping people build the tools and websites and programs that they can use on the web themselves. It's designed to be an easy-to-use way of building these apps. Do you think about embodying these values when you are designing something like Glitch, or do you see these as separate worlds?


Oh, yeah. Every day. This is why I do it.


Could we have an example?


I'll give you an example from outside, because I'm very proud of what we do, but I want to be clear it's part of a larger movement. So I think the example that comes to mind for me in 2022 is Wordle.


Oh yeah.


So Wordle is, amongst many other things, an act of love. Josh Wardle made a word game for his partner because she wanted to be able to play this word game every day. And at an architectural level, it is radical because it is simple, and it is a throwback to an internet that many people have forgotten about. It is a single page on the web that you play this game on. It doesn't have tons of tracking; you can't log into it.

It doesn't even attempt to finagle your data out of you, and it's fun. And it can't try to steal your attention, because you're limited to playing it only once per day. And if you're spending more than five minutes playing the game, you're really bad at it and probably not having fun. So it's sort of got a ceiling.


Maybe 10 but okay.


Okay, 10, we'll put 10. But the point is, if you're spending hours a day, as much as people are spending on TikTok, then that's not the place for you. This is not the game for you. And so I think the thing is that it is structurally aligned with a healthy web, and it is the biggest phenomenon in the consumer internet this year. And it's made by a guy, by a person. And so I keep calling it the Wordle wide web: that version of the internet we were told is dead and can't succeed. It has to be in an app store, it has to be distributed through Facebook, it has to surveil you, it has to be getting VC money, all this list of things. And it makes obvious that that is a lie. No surprise to me, what happened after that was the Glitch community took the idea and ran with it. They made remixes.

And so people have made, at last count, well over a thousand remixes of Wordle on Glitch, and that's branched off into all these different worlds now. There are, really tellingly, identity-based ones. One of the earliest remixes was called Queerdle, and it's like, this is queer culture.

And then, because of how remix culture works online and because the web is open, it went a whole different direction these last couple weeks. So there's Heardle, H-E-A-R-D-L-E. The originals were about guessing words, but these are about guessing songs; this is name that tune. This is not a new idea. And what's been most amazing for me to see in these last two weeks is that the majority of remixes we've seen on Glitch have been from K-pop fans. So we have a huge community of mostly teenage girls who love K-pop, Korean pop music, which is global pop music now.

And so they pick their favorite band, take their songs, and make a Heardle remix just for the group that they like, or just for the genre that they like. It's pretty remarkable: on an average afternoon, a handful of new apps made by young women will pop up about the groups they like, and then people play them and share their scores. The key takeaway here is pop culture tied directly to broad individual creation of independent websites, all running at their own addresses, created by individual people, with no surveillance, no tracking, no connection to any of the big silos, completely open source, with the ability to take it and actually run it somewhere else. When you download it, it's not something that's tied into the system. The web that we are told we are fighting for exists every day, and millions of people are participating in it. But now it's just becoming visible again.


I love that story.


Does it change that calculus that Wordle itself was bought by the New York Times, and actually does have tracking on it, and you do log in?


Yeah, kind of, but I'm sort of torn. At the end of the day, if somebody makes a great app for their loved one, and then they get a million dollars and sell it to the New York Times, good for them. I think a younger me would've been much more radicalized by that, but I'm like, you know, that's not the worst outcome. Part of it is that my impression of Josh, who made it, he used to work at Reddit and everything else, is that he's probably going to get paid more for that than he did from working at Reddit. So the idea that there's more of a return, as opposed to the venture-backed model or the publicly traded companies model, those don't always reward their workers commensurate to the value that they create.



And so I think that sort of has to be the benchmark, as opposed to keep it pure and don't make any money on this thing, because we tried having that as our aesthetic, maybe as late as the 90s. The Beatles had their song licensed to a Nike commercial, and a bunch of Boomers got mad. And that idea of caring about that level of selling out, at a time when every rapper has their own perfume and clothing line and whatever, is just anachronistic. So that sense of the purity of it being non-commercial is actually not a battle we could win. But the idea of agency, control, creativity, connection to culture, and genuine democratization for a demographic that has been overlooked and disrespected by the tech platforms, that is all doable.


The key there is the openness. The fact that somebody can take this thing and turn it into something that works for themselves, just their three friends, or their whole community. I don't care if the New York Times takes Wordle and turns it into something I don't want; I've got a hundred other choices.

Danny:

The thing that is also really important here is the thing that isn't happening, which is that the New York Times is not suing all of these Wordles out of existence. That's important, because the reason they're not doing it is they know that would be an absolute nightmare. They have the legal capability to do it. But the norm, the norm is that you do not do that on the internet, because the internet will hit back.


I'm wondering if you have thoughts about how we bring people in those communities into this movement.



And how do we get the K-pop kids to recognize that we need them to stand up? How do we mobilize those communities better? Because I think that's something we struggle with in tech a lot.


There are so many natural affinities that I think we can do better at connecting to. One that jumps out at me: in the mobilization against police violence here in New York City after the death of George Floyd, the first thing organizers were doing at marches was essentially teaching digital security to the people, almost all young people, coming out to these protests and marches. They were walking them through what to do with their phones, what to install, and how to run their browser. It was extraordinary, because they saw that the first step of participating in protest or civil disobedience is digital fluency. And now I think it's wonderful that EFF has responded with resources for any kind of protester; I think that's incredibly powerful.

But going even beyond that, I do think everybody cares about intellectual property now. Everybody in the world cares about it; it is such a broad thing. It used to be that a couple of lawyers cared, or a couple of obsessives cared, and now, every day, fan culture and remix culture, which are just culture, they're all of society now, are entirely predicated on a deep level of fluency in sometimes arcane aspects of intellectual property law. And so that, to me, is fertile ground.

I'm a big fan and scholar of Prince's work. And seeing his very zealous lawyers go in and take an overly aggressive stance, if for understandable reasons, about IP was a huge learning experience for that fan community. They were not intrinsically motivated to learn about these things, but at the same time they were also learning about trademark. Prince had changed his name to a symbol and he had a trademark on the symbol, so they learned the difference between copyright and trademark; and when he filed for a patent, they learned about patents.

And I was like, fan communities teach each other a lot. I'm struck that for pop music fans in general now, their level of knowledge about corporate structures and licensing agreements and publishing agreements is off the charts. All those things were somewhat esoteric then, and they're fundamental to a 13-year-old's fandom now.

Anybody who has an artist they love, or a fandom that they're part of, a community like that, a real community, not the tech version of community, but a real community, that's the entry point. Culture and art that moves us is the entry point through which we start to learn about these things, because then it's not abstract, and then it's not esoteric and off-putting. It is about something that's meaningful to us. That's how we bring people in: by saying these things you care about are directly shaped, directly affected by policy, by the practice of these companies, by the way that things are fairly or unfairly enforced. I think we can make these things relevant to a much broader audience.


Anil, thank you so much for taking this time to talk with us. What a fascinating conversation going in a bunch of different directions that were not the ones we anticipated, but always fun.


From Wordle to Prince. I don't think those were the things that I had scribbled on the back of an envelope when we came in.


With K-pop in between.


Yeah. Thanks so much.


Big thanks to Anil for joining us today. The first thing we talked about is something that we've heard consistently across this podcast: the technologies need to serve the people who are affected by them, especially those who are the most marginalized. And this is just so consistent across all the people that we talk to.


And also you can't have an approach to technology unless you know what you are ultimately aiming for. You need to be able to steer the technology. And that means having very clear messages about what you want. When you put up a bat signal for grassroots movements, they need clarity. They need to know what you stand for.


I loved his example of how we've taken the idea of being an artist and turned it into a content creator and how it's become just so embedded that the people who are making things and doing things are a piece of somebody else's story rather than the main story.


In terms of strategy and tactics, I do think this emerging possibility that tech workers can influence positive change at tech companies is real. Again, that's a recurring theme, but I think he put a real point on it. I'm not sure about the term shame, actually; I kind of disagreed with Anil there. I think what he's describing is correct, but I don't think the method is shame. I think it's actually just illustrating, getting people to see the consequences of what they do, and highlighting that they have the power to change that.


It's just basic accountability and that doesn't have to be a personal decision that you're a bad person. It's just making people accountable for the consequences of the things they build and the way that their businesses work.

I also love that the world needs more Wordle and all the various Wordles all around. I love that as an example of how sometimes we get locked into this mentality that everything that happens online, happens on Facebook or one of the other giants and here's this tremendous thing that didn't come from there at all, and is now expanding and growing and being adopted by communities all around the world for their own purposes, quite apart from the tech giants.


Corynne McSherry, the legal director at EFF, is always talking about how everybody pays attention to Facebook and forgets that the web we're defending is still there, still an engine of creativity and empowerment.

And out of left field, while we're talking about political action, it's still true that digital security has become an important first step in protests. And maybe that's the thing that leads us on to other considerations, from intellectual property to the importance of encryption. All of this stuff is already ingrained in people's lives, so all it takes is pointing out that they're already taking it seriously. We just need to work out a way of making all of those things work for them.


I love his emphasis on art, and how art is one of the ways we can bring people into an understanding of how they're impacted by technology, and help them really seize the means of making their art, sharing their art, talking about their art, and appreciating other artists. The fan communities as well: these are all people who are already engaged with their technology. And so for us, as people who are trying to help make a better world, we just have to find a way to reach those communities and help empower them to use their voice to make things better.


Thank you for listening. If you want to get in touch about the show, you can write to us at Check out the EFF website to become a member or donate. 

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.

I’m Danny O’Brien.

And I’m Cindy Cohn.

Josh Richman

Massachusetts' Highest Court Upholds Cell Tower Dump Warrant

4 weeks 2 days ago

This blog post was drafted with help from former EFF Legal Intern Emma Hagemann.

Massachusetts’ highest court has upheld the collection of mass cell tower data, despite recognizing that this data not only provides investigators with “highly personal and private” information but also has the potential to reveal “the locations, identities, and associations of tens of thousands of individuals.”

The case is Commonwealth v. Perry, and in it the Massachusetts Supreme Judicial Court (SJC) addressed the constitutionality of “tower dumps” of cell site location information (CSLI).

 A “tower dump” occurs when a phone company provides law enforcement with data on all devices that connected with a specific cell tower during a specified period of time. Because each cell tower covers a particular geographic area, police can infer from the data that the device owners were in that area at the time. Tower dumps can identify hundreds or thousands of phones—or, in this case, “more than 50,000 individuals . . . without any one of them ever knowing that he or she was the target of police surveillance.”

In Perry, after a series of six store robberies and one homicide, law enforcement sought and obtained two tower dump warrants. Together, the warrants covered seven cell towers on seven different days over the course of a month. Officers cross-referenced the tens of thousands of phone numbers they obtained to identify devices that pinged multiple towers on the days the crimes occurred. Through this process, they were able to identify Mr. Perry as a suspect. Mr. Perry moved to suppress the evidence.

EFF, along with ACLU and the Massachusetts Committee for Public Counsel Services, filed an amicus brief in the case, arguing that a tower dump is a general search that violates the Fourth Amendment and Article 14, Massachusetts’ constitutional equivalent. Like the general warrants reviled by the Constitution’s drafters, tower dumps are irremediably overbroad because they sweep up the information of hundreds or thousands of people that have no connection to the crime under investigation. These searches lack probable cause because the police can’t show a reason to suspect the thousands of innocent people whose information is caught in the dragnet had any link to the crime. They also fail constitutional particularity requirements because the scope of the search is not appropriately limited. We further argued that, even if the court upheld tower dumps, it should impose strict minimization requirements as a safeguard against abuse; the government must demonstrate that the tower dump is necessary and must delete any device data unrelated to the crime as soon as possible. 

Although the court declined to adopt a rule that cell tower dumps are always unconstitutional, it didn't preclude such an argument in a future case. It recognized that these searches not only allow police to track individuals into private, constitutionally-protected areas and, by tracking call data, provide police “significant insight into the individual’s associations,” they also make it possible for police to piece together people’s patterns of behavior. Because the police requested tower dumps in multiple areas over the course of multiple days, the data not only could establish “where an individual was and with whom he or she associated on one occasion, but also where the individual had been and with whom the individual had associated on multiple different occasions.” If a warrant were not sufficiently limited in scope—if it allowed police to select any phone number at random from the 50,000 and determine the identity of that individual, their location, and with whom they had communicated—it would “undoubtedly violate” constitutional particularity requirements.

Nevertheless, the court here held the police had sufficiently limited the scope of the search. Police had reason to believe the crimes were connected and committed by the same people, and police explained in their affidavit supporting the warrant that they had requested multiple tower dumps to look for commonalities among the records—phone numbers that appeared in more than one location. Because one of the warrants also established probable cause to believe the suspect had used a phone in commission of the crime, the court upheld that warrant. The court suppressed the evidence from the other warrant, finding it failed to establish these same facts.

The court did mandate important limitations on these searches going forward. These include requiring a judge to issue the warrant and requiring the warrant to include protocols for the prompt and permanent disposal of any data that is not related to the crime under investigation. However, while these minimization requirements are important, overall, the result in Perry is disappointing. Requiring only that police state that they intend to “identify and/or verify commonalities” in the data on thousands of people is a low bar.

Perry could also have troubling implications for other dragnet search technologies like geofence warrants. The court asserted that the thousands of innocent individuals swept up in a tower dump are not subjected to a “search” in the constitutional sense because, although police collected their data, police didn’t take the further step of analyzing it. Like tower dumps, geofence warrants allow the government to search the location information of many innocent people to try to identify a suspect. Several courts have already recognized the mass privacy violations inherent in geofence data dumps, regardless of whether any police conduct any analysis on the collected data. These courts have ruled geofence warrants are unconstitutional for reasons similar to those we raised in our Perry amicus brief, and we hope that the Supreme Judicial Court would take a fresh look at these arguments if or when it rules on the constitutionality of geofence warrants.

We will continue to challenge cell tower dumps, geofence warrants, and other forms of location surveillance in other cases going forward.

Jennifer Lynch

Patent Troll Uses Ridiculous "People Finder" Patent to Sue Small Dating Companies

4 weeks 2 days ago

Finding people near you with shared interests, and talking to them, has a very long history in human culture. We’re social animals. We need to find other people close to us to work together with, play games with, and build relationships and families with. Modern online social networks are built on top of those basic human needs. 

The technologies we humans use to do these things are ever-changing, but the basic concepts aren’t. Software that promotes new types of social networking is a terrible fit for the patent system, which hands out hundreds of thousands of 20-year monopolies each year on inventions that are supposedly new, but often aren’t. No one should be able to patent an “invention” that simply describes a method of finding like-minded people. 

Unfortunately, that seems to be just what happened with a patent we looked at recently. A patent troll called Wireless Discovery LLC sued eight different social and dating apps for patent infringement, claiming that they infringe U.S. Patent No. 9,264,875, which claims “location-based discovery” based on people’s “personal attributes.” Wireless Discovery, which was created just before its patent was granted in 2016, filed the lawsuits in April—most of the defendants are small apps.

Claiming the “Mobile Social” World 

Wireless Discovery’s lawyers say that a simple combination of basic computing services is enough to infringe their patent. The claim chart makes explicit what is required to infringe the ’875 patent. For its lawsuit against the dating network Zoosk, the claim chart describes how: 

  • Zoosk has a website that mobile devices can connect to.
  • Zoosk’s server collects information from the mobile devices, including location and unique device identifiers. 
  • Zoosk users can send and accept invitations to connect with and send messages to each other. 
  • Zoosk shares profile information of connected users, who are “members of a same social network” (i.e., they’re on Zoosk). 
  • Zoosk can connect users who are in the immediate vicinity of each other, or a particular distance away. 

And that’s it. 
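
To see how basic the claimed functionality is, here is a minimal sketch (invented coordinates, and a hypothetical `nearby` helper we named for illustration) of the kind of proximity matching the claim chart describes, using the textbook haversine distance formula:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby(users, lat, lon, radius_km):
    """Return users within radius_km of (lat, lon)."""
    return [u for u in users if haversine_km(lat, lon, u["lat"], u["lon"]) <= radius_km]

# Invented example data: two users in San Francisco, one far away.
users = [
    {"name": "alice", "lat": 37.7749, "lon": -122.4194},
    {"name": "bob", "lat": 37.7755, "lon": -122.4180},
    {"name": "carol", "lat": 34.0522, "lon": -118.2437},
]
print([u["name"] for u in nearby(users, 37.7750, -122.4190, 1.0)])  # ['alice', 'bob']
```

A few lines of standard geometry and a list filter, in other words—the sort of routine computation that appears in countless apps and textbooks.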

The Wireless Discovery patents were originally filed for a company called Ximoxi, which tried to market a type of “electronic social cards.” Ximoxi founder Ramzi Alharayeri, who is the named inventor on the patent, said in a 2012 press release that his software was “the first social-discoverability application that works on iPhone, Android and Blackberry alike.” 

In 2016, however, the Ximoxi website said that the app was still “under development.” It continued: “After releasing our Beta, we needed to go back to work for bugs fixes and features improvements.” Today, the Ximoxi website is defunct. 

Not the First 

The idea of connecting nearby users of a social network isn’t an idea that should be patentable at all. But it’s also worth noting that—even if Ximoxi executed this concept in some way—it was far from the first. The Ximoxi patent was filed in 2014, but claims that it’s a continuation of a patent that was originally filed in October 2008, the time that Ximoxi was founded. 

Location-based social networking on mobile devices is quite a bit older than that, though. It was conceptualized, and used, well before smartphones were common. The New York City-based app “Dodgeball” dates back to 2000. It was acquired by Google (with plenty of press coverage) in 2005. By then, it was clear that different types of social mobile apps were going to be taking off. A paper presented at the 2006 IEEE engineering conference notes the growth of mobile social networking: “An entire sub-industry of the wireless sector is slowly being created as companies such as Dodgeball, Playtxt, and begin to capitalize on this new phase in the mobile technology platform.” 

None of this earlier technology was presented to the U.S. Patent and Trademark Office in Wireless Discovery’s patent applications. 

Unfortunately, this is all too common. It’s how many software patents get issued—examiners have just 18 hours, on average, to complete the examination, and the applicants can come back with endless revisions. Ultimately, persistent applicants get patents, even when they don’t have a great case for one. 

And those monopolies do serious damage. Most patent lawsuits filed in the past several years aren’t disputes that result from a company trying to defend the market for its product. Rather, they are initiated by patent trolls—companies with no products, that simply use patents to demand payment from others. In 2021, 87 percent of high-tech patent disputes in federal courts were filed by companies or people that make most of their money from patent licensing. 

It’s very difficult, and expensive, to get patents thrown out in court, even when the technology described in the patent existed long before the patent was filed. That’s why 10 years ago, Congress created a more robust review system for already-granted patents, called inter partes review (or IPR). Over the years various patent owners have tried to weaken the IPR system, encouraging the patent office to reject many IPRs on technicalities, or even saying IPRs are unconstitutional. Thankfully, those efforts have all failed. 

There’s a bill in Congress that would strengthen IPR, closing some of the loopholes that patent owners have used over the years to dodge the IPR process. Right now, passing the Restoring America Invents Act, as it was introduced, is the best thing we could do to weed bad patents out of the system. 

Joe Mullin

11th Circuit's Ruling to Uphold Injunction Against Florida’s Social Media Law is a Win Amid a Growing Pack of Bad Online Speech Bills

1 month ago

There’s a lot to like in the 11th Circuit Court of Appeals’ ruling that much of Florida’s social media law—the parts that would prohibit internet platforms from removing or moderating any speech by or about political candidates or by “journalistic enterprises”—likely violates the First Amendment and should remain on hold. The decision is a win for free speech that stands in stark contrast with the 5th Circuit Court of Appeals’ May 12 ruling allowing a similar, constitutionally questionable Texas law to go into effect.

While an emergency application to block the Texas law is pending before the Supreme Court (we filed a brief urging the court to put it back on hold), we are relieved by the 11th Circuit’s ruling in NetChoice v. Florida. The court recognized two crucial First Amendment principles flouted by Florida’s law: that platforms are private actors making editorial decisions, and that those decisions are inherently expressive. When platforms remove or deprioritize posts, they are engaging in First Amendment-protected speech, the court said.

Tornillo Shows the Way

Florida S.B. 7072, signed into law by Gov. Ron DeSantis a year ago, prohibits large online intermediaries from terminating politicians’ accounts or taking steps to deprioritize posts by or about them or posts by “journalistic enterprises” (defined to include entities that have sufficient publication and viewership numbers). The law would override the sites’ own content policies. Florida passed the law to retaliate against platforms for supposedly censoring conservative voices. Interestingly, the 11th Circuit noted that the perceived bias in platforms’ content-moderation decisions “is compelling evidence that those decisions are indeed expressive,” and thus First Amendment-protected conduct.

In ruling that the law likely violates the First Amendment, the 11th Circuit pointed to the Supreme Court’s unanimous 1974 ruling in Miami Herald v. Tornillo, which established that the editorial judgments made by private entities about whether and how to disseminate speech are protected under the Constitution. In Tornillo, the court rejected a Florida law requiring newspapers to print candidates’ replies to editorials criticizing them. Subsequent Supreme Court rulings, protecting decisions by parade organizers and cable operators about what third party-created content they disseminate, further underpinned this free speech principle, the 11th Circuit said.

EFF and Protect Democracy filed an amicus brief with the 11th Circuit arguing that internet users are best served when the First Amendment protects platforms’ rights to curate speech as they see fit, free of government mandates. That right allows for a diverse array of forums for users, with unique editorial views and community norms. The court agreed, recognizing that “by engaging in this content moderation, the platforms develop particular market niches, foster different sorts of online communities, and promote various values and viewpoints.”

We were pleased to see the three-judge panel use examples we cited in our brief to demonstrate the variety of communities platforms seek to appeal to through moderating content: Facebook, which removes or adds warnings to posts it considers hate speech; Roblox, which prohibits bullying and sexual content; Vegan Forum, which allows non-vegans but doesn’t tolerate “members who promote contrary agendas”; and ProAmericaOnly, which promised users “NO BS/NO LIBERALS.”

Platforms Aren’t Mere Hosts or Dumb Pipes

Florida argued that S.B. 7072 doesn’t violate the First Amendment because platforms don’t review most posts before publication and therefore aren’t making expressive decisions as to user content. But the law doesn’t target speech that isn’t reviewed—it is specifically aimed at speech that is removed or deprioritized, the court noted.

The panel also knocked down Florida’s argument that the law doesn’t implicate free speech rights because it only requires platforms to host speech and not necessarily agree with it. The court said that unlike the private entities—shopping centers and law schools—in the cases cited by the state, social media platforms have expression as their core function, which is violated by S.B. 7072.

Finally, the court rejected Florida’s argument that large social media services are common carriers—entities that, in the communications context, provide facilities so anyone and everyone can communicate messages of their own design and choosing. While platforms sometimes say they are open to anyone, in practice they have always required users to accept their terms of service and community standards. So, in reality, social media users are not free to speak on the platform in any way they choose—they can’t post comments that violate the platform rules, the court noted.

The 11th Circuit also cited Supreme Court precedent in Reno v. ACLU, where the high court said internet forums have never been subject to the same regulation and supervision as the broadcast industry. Further, Congress excluded computer services like social media companies from the definition of common carrier in the Telecommunications Act of 1996.

Florida can’t just decide to make social media platforms into common carriers either, the 11th Circuit said, declaring “neither law nor logic recognizes government authority to strip an entity of its First Amendment right merely by labeling it a common carrier.”

Borrowing language from our brief, the court said social media platforms have historically exercised editorial judgment by moderating content, and a state can’t force them to be common carriers without showing there’s a compelling reason to strip them of First Amendment protections. This is important because it recognizes that platforms have curated content since day one.

The court let stand, at least for now, S.B. 7072 provisions requiring platforms to inform users before changing moderation rules, give users who request it the number of people who have viewed their posts, and give deplatformed users an opportunity to retrieve their data. EFF had argued, and the lower court had ruled, that while some of these transparency requirements may be acceptable in another context, they were impermissible here as part of S.B. 7072’s overall unconstitutional retaliation.

The 11th Circuit has drawn important lines in the sand. But it won’t be the last. With lawmakers in Georgia, Ohio, Tennessee, and Michigan considering similar bills, it’s likely more courts will be called on to decide whether platforms have a First Amendment right to moderate content on their sites. We hope they follow the 11th Circuit’s lead.

Karen Gullo

California Bill Would Make New Broadband Networks More Expensive

1 month ago

The state of California is primed to bring 21st-century fiber access at affordable rates to every Californian. Last year’s unanimous passage of S.B. 156, a historic multi-billion-dollar investment in broadband, means every California community has the resources available to chart a long-term course toward building fiber networks. The Department of Commerce’s National Telecommunications and Information Administration (NTIA) recently proposed allocating $48 billion from the bipartisan infrastructure bill for building broadband networks, which supplements California’s efforts by centering affordability and future-proof fiber in its disbursement policy. Lastly, the California Public Utilities Commission’s (CPUC) criteria for accessing federal funding further codify a commitment to affordability and fiber infrastructure for all.

All of these efforts will help bring every Californian affordable fiber internet access. But a bill in the California legislature threatens to undo all of that good work. A.B. 2749, authored by Assemblymember Quirk-Silva, would prohibit the CPUC from requiring providers to offer affordable service to all Californians, and would force the CPUC to wrongly treat fixed wireless offerings as equivalent to fiber infrastructure. It would also place a completely arbitrary 180-day shot clock on the review of applications for federal funding, which will short-circuit public providers’ efforts to deliver fiber.

All these provisions run contrary to both the established goals of the Biden Administration and the Newsom administration to deliver affordable, future-proof fiber to all. A.B. 2749 has passed the Assembly and is now headed to the Senate. If this bill—which is supported by industry providers like AT&T and Frontier Communications—were to pass, areas that currently do not even have basic service, primarily rural and urban poor areas, would suffer most of all.

A.B. 2749 Cuts Off Affordability to Most Californians 

The CPUC is supposed to provide taxpayer-funded grants to companies that build internet infrastructure. The bill prohibits the CPUC from requiring these grantees to offer a service at a fixed price for more than five years. The CPUC is also prohibited from setting a specified rate or a ceiling for rates. The only limited exemption to these bans on affordability is for ‘low-income’ households. This means a family of four making less than $55,000 a year would be protected from broadband price gouging, but the vast majority of Californians would not be. Put another way, at a time of record inflation, the Californians getting broadband for the first time will be subject to uncontrolled monopoly pricing on infrastructure that their own tax dollars built.

To fully appreciate how egregiously anti-affordability this bill is, however, you need to understand one thing. The CPUC’s evaluation criteria for infrastructure grants heavily favor both a 10-year price commitment and the creation of a $40 plan at 50/20 Mbps. (That's 50 megabits per second for downloads, and 20 for uploads.) This means that the infrastructure built with your taxpayer dollars must provide you at least a $40 service and must maintain that commitment for the first 10 years. Additionally, the CPUC will update and increase the 50/20 Mbps standard over time as a response to constantly rising needs and the easy scalability of fiber networks.
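
The two favored criteria boil down to a simple check. Here is a hypothetical sketch (the field names are invented for illustration; this is not the CPUC’s actual scoring rubric):

```python
def meets_favored_criteria(app):
    """Hypothetical check of the two grant criteria described above:
    a 10-year price commitment and a $40 plan at 50/20 Mbps.
    Field names are invented for illustration."""
    return (
        app["price_commitment_years"] >= 10
        and app["plan_price_usd"] <= 40
        and app["down_mbps"] >= 50
        and app["up_mbps"] >= 20
    )

# An application that just meets the criteria.
application = {
    "price_commitment_years": 10,
    "plan_price_usd": 40,
    "down_mbps": 50,
    "up_mbps": 20,
}
print(meets_favored_criteria(application))  # True
```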

A.B. 2749 says the CPUC cannot require internet service providers to provide a basic service tier and regulate its pricing. If it were to pass, your taxpayer dollars will pay an internet service provider’s (ISP’s) construction costs, and that ISP can then still charge you high rates on the networks built with your money. Despite a supermajority of Americans viewing broadband access as being as important as water and electricity, most people have no choice in provider; the market is such that you have to grit your teeth and accept the high prices set by monopolistic ISPs. A.B. 2749 would further entrench this exploitative status quo. 

Wireless Service is Not and Will Never Be Equivalent to Fiber

This bill would also force the state, for the purpose of grants, to treat wireless offerings on equal terms with fiber infrastructure. They are not equal. It is also often asserted that, given lower population density, rural areas can be covered at much lower cost by wireless networks than by putting cables in the ground. They cannot. High-speed wireless depends wholly on excess capacity from underlying fiber wireline infrastructure. In other words, it is impossible to deliver fast wireless without excess multi-gigabit capacity from the wires in the ground.

This is why the Biden Administration’s recent guidance to the states emphasizes that states must deploy fiber into rural areas to ensure long-term economic development. EFF has noted time and again in our research that fiber is the only infrastructure that can be upgraded to achieve the performance needed for decades to come without significant new investment. It is low latency, high-bandwidth, and extremely reliable. Once fiber is built to an area, that area can be cheaply, reliably, and adequately served with future-proofed internet for the next 30-70 years.

It should come as no surprise then that AT&T, one of the nation’s largest wireless providers, is supporting a bill that forces the state to treat wireless as the same as fiber, ignores the fundamental engineering disparities, and takes funding away from building fiber infrastructure to subsidize wireless plans. They’d like nothing more than to pad their profits with taxpayer dollars and hamper competition.

We Need to Build Once and Build Right, Not Create and Impose Arbitrary Deadlines

A.B. 2749’s arbitrary 180-day review deadline for all applications for federal funding is yet another attempt to help out big ISPs. If the CPUC does not affirmatively act on an application within this time period, it would be approved automatically. EFF, in our work with local providers—including public and private providers—and new entrants, has found no need for the state to establish a review shot clock. These providers aren't asking to receive funds more quickly. They are more interested in deploying their networks correctly. They are undergoing extensive feasibility studies and analyses on how to deliver fiber infrastructure to all Californians. They want to build once and build right, so their communities have the affordable, future-proof fiber service they need. 

This arbitrary deadline only benefits those with deep pockets and the resources to flood the CPUC with early applications, and will only serve to send money to companies such as AT&T. Without the opportunity for proper, deliberate vetting, on top of the anti-fiber and anti-affordability provisions, this bill will cause the state to squander taxpayer dollars and do very little for broadband access. In effect, the state would waste our once-in-a-generation opportunity to build affordable fiber to serve all Californians. Future-proof fiber is expensive and takes time, but it will be more expensive and take even longer if we waste valuable resources today on broadband options unsuited for long-term economic development.

If we’re going to do it right, A.B. 2749 cannot be passed into law.

Chao Liu

Platform Liability Trends Around the Globe: Taxonomy and Tools of Intermediary Liability

1 month ago

This is the second installment in a four-part blog series surveying global intermediary liability laws. You can read additional posts here: 

The web of global intermediary liability laws has grown increasingly vast and complex as policymakers around the world move to adopt stricter legal frameworks for platform regulation. To help detangle things, we offer an overview of different approaches to intermediary liability. We also present a version of Daphne Keller’s intermediary liability toolbox, which contains the typical components of an intermediary liability law as well as the various regulatory dials and knobs that enable lawmakers to calibrate its effect.

A Five-Part Continuum of Intermediary Liability 

Liability itself can be distinguished on the basis of remedy: monetary and non-monetary liability. Monetary liability results in awards of compensatory damages to the claimant, while non-monetary liability results in orders that require the intermediary to take steps against wrongful activities undertaken through the use of their services (usually in the form of injunctions to do or refrain from doing something). 

Monetary remedies are obtained after establishing an intermediary’s liability—standards for which range from strict, fault-based, knowledge-based, and court-adjudicated liability to total immunity. Various configurations along this spectrum continue to emerge, as regulators experiment with regulatory dials and knobs to craft legislation tailored toward specific contexts.

The categories introduced in this section should be understood as general concepts, as many regulatory frameworks are not clear-cut or allow for discretion and flexibility in their application.

Under strict liability regimes, online intermediaries are liable for user misconduct, without the need for claimants to prove any fault or knowledge of wrongdoing on the part of the intermediary. As liability can occur even if the intermediary did absolutely nothing wrong, strict liability regimes make intermediaries overly cautious; they tend to conduct general monitoring of user content and minimize their exposure to claims by ‘over-removing’ potentially unlawful material. Thus, strict liability regimes heavily burden online speech by encouraging the intermediaries to censor speech, even that which is not harmful.

Fault-based approaches impose liability when the intermediary fails to meet specified ‘due diligence’ obligations or a particular duty of care. For example, intermediaries may be obligated to remove certain types of content within a specific time frame and/or prevent its (re)appearance. The UK’s draft Online Safety Bill imposes duties of care pegged to certain broad and potentially subjective notions of harm, which are in turn likely to require platforms to engage in general monitoring of user content. Negligence-based liability, as per the UK model, takes a systematic approach to content moderation, rather than addressing individual pieces of content. 

The required standard of care under such liability approaches can vary on a continuum from negligence (as established by the actions of a reasonable person) to recklessness (a substantial deviation from the reasonable action). Fault-based liability systems are very likely to effectively require a certain degree of general user monitoring and can lead to systematic over-removal of content or removal of content that may be undesirable, but which is otherwise perfectly legal. 

Knowledge-based approaches impose liability for infringing content when intermediaries know about illegal content or become aware of illegal behavior. Knowledge-based liability systems usually operate via notice and takedown systems and thus typically do not require pervasive general monitoring. There are different types of notice and takedown systems, which vary in design and provide different answers to the question about what constitutes an effective notice. What constitutes knowledge on the part of the intermediary is also an important question and not straightforward. For example, some jurisdictions require that the illegality must be “manifest”, thereby evident to a layperson. The EU’s e-Commerce Directive is a prominent example of a knowledge-based system: intermediaries are exempt from liability unless they know about illegal content or behavior and fail to act against it. What matters, therefore, is what the intermediary actually knows, rather than what a provider could or should have known as is the case under negligence-based systems. However, the EU Commission’s Proposal for a Digital Services Act moved a step away from the EU’s traditional approach by providing that properly substantiated notices by users automatically give rise to actual knowledge of the notified content, hence establishing a “constructive knowledge” approach: platform providers are irrefutably presumed by law to have knowledge about the notified content, regardless of whether this is actually the case. In the final deal, lawmakers agreed that it should be relevant whether a notice allows a diligent provider to identify the illegality of content without a detailed legal examination.

The Manila Principles, developed by EFF and other NGOs, stress the importance of court adjudication as a minimum global standard for intermediary liability rules. Under this standard, an intermediary cannot be liable unless the material has been fully and finally adjudicated to be illegal and a court has validly ordered its removal. It should be up to an impartial judicial authority to determine that the material at issue is unlawful. Intermediaries should therefore not lose the immunity shield for choosing not to remove content simply because they received a private notification by a user, nor should they be responsible for knowing of the existence of court orders that have not been presented to them or which do not require them to take specific remediative action. Only orders by independent courts should require intermediaries to restrict content, and any liability imposed on an intermediary must be proportionate and directly correlated to the intermediary’s wrongful behavior in failing to appropriately comply with the content restriction order.

Immunity from liability for user-generated content remains rather uncommon, though it results in heightened protections for speech by not calling for, greatly incentivizing, or effectively requiring the monitoring of user content prior to publication. Section 230 provides the clearest example of an immunity-based approach, in which intermediaries are exempt from some liability for user content—though the immunity does not extend to violations of federal criminal law, intellectual property law, or electronic communications privacy law. Section 230 (47 U.S.C. § 230) is one of the most important laws protecting free speech and innovation online. It removes the burden of pre-publication monitoring, which is effectively impossible at scale, and thus allows sites and services that host user-generated content—including controversial and political speech—to exist. Section 230 thus effectively enables users to share their ideas without having to create their own individual sites or services that would likely have much smaller reach.

The Intermediary Liability Toolbox

Typically, intermediary liability laws seek to balance three goals: preventing harm, protecting speech and access to information, and encouraging technical innovation and economic growth. In order to achieve these goals, lawmakers assemble the main components of intermediary laws in different ways. These components typically consist of: safe harbors; notice and takedown systems; and due process obligations and terms of service enforcement. In addition, as Daphne Keller points out, the impact of these laws can be managed by adjusting various ‘regulatory dials and knobs’: the scope of the law, what constitutes knowledge, notice and action processes, and ‘good samaritan’ clauses. Meanwhile, and much to our dismay, recent years have seen some governments expand platform obligations to include monitoring or filtering, despite concerns about resulting threats to human rights raised by civil society groups and human rights officials.

Let’s now take a brief look at the main tools lawmakers have at their disposal when crafting intermediary liability rules. 

Safe harbors offer immunity from liability for user content. They are typically limited and/or conditional. For example, in the United States, Section 230 does not apply to federal criminal liability and intellectual property claims. The EU’s knowledge-based approach to liability is an example of a conditional safe harbor: if the platform becomes aware of illegal content but fails to remove it, immunity is lost.

Most intermediary liability laws refrain from explicitly requiring platforms to proactively monitor for infringing or illegal content. Not requiring platforms to use automated filter systems or police what users say or share online is considered an important safeguard to users’ freedom of expression. However, many bills incentivize the use of filter systems and we have seen some worrying and controversial recent legislative initiatives which require platforms to put in place systematic measures to prevent the dissemination of certain types of content or to proactively act against content that is arguably highly recognizable. Examples of such regulatory moves will be presented in Part Three of this blog series. While in some jurisdictions platforms are required to act against unlawful content when they become aware of its existence, more speech-protective jurisdictions require a court or governmental order for content removals.

Intermediaries often establish their own ‘notice and action’ procedures when the law does not set out full immunity for user-generated content. Under the EU’s eCommerce Directive (as well as Section 512 of the Digital Millennium Copyright Act (DMCA), which even sets out a notice and takedown procedure), service providers are expected to remove allegedly unlawful content upon being notified of its existence, in exchange for protection from some forms of liability. Such legal duties often put users’ freedom of expression at risk. Platforms tend to behave with extra caution under such regimes, often erring on the side of taking down perfectly legal content, in order to avoid liability. It is far easier and less expensive to simply respond to all notices by removing the content than to expend the resources to investigate whether the notice has merit. Adjudicating the legality of content—which is often unclear and requires specific legal competence—has been extremely challenging for platforms.

Intermediaries also exercise moderation by enforcing their own terms of service and community standards, disabling access to content that violates their service agreements. These are seen as “good samaritan” efforts to build healthy civil environments on platforms. This can also enable pluralism and a diversity of approaches to content moderation, provided there is enough market competition for online services. Platforms tend to design their environments according to their perception of user preference. This could entail removing lawful but undesirable content, or content that doesn’t align with a company’s stated moral codes (for example, Facebook’s ban on nudity). As discourse is shifting in many countries, dominant platforms are being recognized for the influential role they play in public life. Consequently, their terms of service are receiving more public and policy attention, with countries like India seeking to exercise control over platforms’ terms of service via ‘due diligence obligations,’ and with the emergence of independent initiatives which aim to gather comprehensive information about how and why social media platforms remove users’ posts.  

Finally, the impact of notice and action systems on users’ human rights can be mitigated or exacerbated by due process provisions. Procedural safeguards and redress mechanisms, when built into platforms’ notice and action systems, can help protect users against erroneous and unfair content removals.

Regulatory Dials and Knobs for Policy Makers

In addition to developing an intermediary liability law with the different pieces described above, regulators also use different “dials and knobs”—legal devices to adjust the effect of the law, as desired. 


Scope

Intermediary liability laws can differ widely in scope. The scope can be narrowed or expanded to include only service providers at the application layer (like social media platforms) or also internet access providers, for example.

Intervention with content 

Does a platform lose immunity when it intervenes in the presentation of content, for example, via algorithmic recommendation? In the US, platforms retain immunity when they choose to curate or moderate content. In the European Union, the situation is less clear. Too much intervention may be deemed to amount to knowledge, and could thus lead to platforms losing immunity.


Definitions of knowledge

In knowledge-based liability models, a safe harbor is tied to intermediaries’ knowledge of infringing content. What constitutes actual knowledge therefore becomes an important regulatory tool. When can a service provider be deemed to know about illegal content? After a notification from a user? After a notification from a trusted source? After an injunction by a court? Broad and narrow definitions of knowledge can have widely different implications for the removal of content and the attribution of liability.

Rules for notice and action

In jurisdictions that imply or mandate notice and action regimes, it is crucial to ask whether the details of such a mechanism are provided for in law. Is the process according to which content may be removed narrowly defined or left up to platforms? Does the law provide for detailed safeguards and redress options? Different approaches to such questions will lead to widely different applications of intermediary rules, and thus widely different effects on users’ freedom of expression. 

In the next installment, we’ll explore recent developments and regulatory proposals from around the globe. The other blogs in this series can be found here:

Part One: From Safe Harbors to Increased Liability
Part Three: Recent Noteworthy Developments
Part Four: Moving Forward
Christoph Schmon

Podcast Episode: Securing the Vote

1 month ago

U.S. democracy is at an inflection point, and how we administer and verify our elections is more important than ever. From hanging chads to glitchy touchscreens to partisan disinformation, too many Americans worry that their votes won’t count and that election results aren’t trustworthy. It’s crucial that citizens have well-justified confidence in this pillar of our republic.

Technology can provide answers - but that doesn’t mean moving elections online. As president and CEO of the nonpartisan nonprofit Verified Voting, Pamela Smith helps lead the national fight to balance ballot accessibility with ballot security by advocating for paper trails, audits, and transparency wherever and however Americans cast votes.

On this episode of How to Fix the Internet, Pamela Smith joins EFF’s Cindy Cohn and Danny O’Brien to discuss hope for the future of democracy and the technology and best practices that will get us there.


This episode is also available on the Internet Archive.

In this episode you’ll learn about:

  • Why voting online can never be like banking or shopping online
  • What a “risk-limiting audit” is, and why no election should lack it 
  • Whether open-source software could be part of securing our votes
  • Where to find reliable information about how your elections are conducted

Pamela Smith, President & CEO of Verified Voting, plays a national leadership role in safeguarding elections and building working alliances between advocates, election officials, and other stakeholders. Pam joined Verified Voting in 2004, and previously served as President from 2007-2017. She is a member of the National Task Force on Election Crises, a diverse cross-partisan group of more than 50 experts whose mission is to prevent and mitigate election crises by urging critical reforms. She provides information and public testimony on election security issues across the nation, including to Congress. Before her work in elections, she was a nonprofit executive for a Hispanic educational organization working on first language literacy and adult learning, and a small business and marketing consultant.


Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower. 

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 


Voting Security:

Security Through Obscurity


Pam: It's not like banking and shopping online, and other things that don't require secrecy, that don't require disassociating the identity of the person doing the transaction from the content of the transaction. And that's why internet voting is so challenging. If you were to send in your ballot remotely and then call the election official and say, "Hey, it's Pam. I sent my ballot, I voted for candidate A, is that what you've got?" That's not how elections work, first of all. But if it were, why not just do that and skip the sending? Just say, "Hey, I want to vote for candidate A, could you mark that down for me?" That would actually be safer. It wouldn't be private, but neither is internet voting.

Cindy:  That's our guest, Pam Smith. She's the CEO of Verified Voting, and today she's joining us to explain how digital technologies can help secure elections, but we're also going to talk about why we need to keep a clear separation between our actual votes and the internet.

Danny:  Pam's going to shed some light and tell us how we can protect the entire process, from voter registration to vote verification through to a risk-limiting audit. She'll tell us how to build a system that lets everyone feel comfortable that the candidate with the most votes was actually the one chosen.

Cindy:  I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

Danny:  And I'm Danny O'Brien, special advisor to the EFF. Welcome to How to Fix the Internet, the podcast where we explain some of the biggest problems in tech policy and examine the solutions that'll make our lives better.

Cindy:  Hi, Pam. You and I go way back and I currently serve on the board of advisors of Verified Voting.  And I'm so excited to have you here today so we can dig into these things. 

Pam:  It's great to be with you again.

Cindy:  So we find ourselves in a very strange situation, you and me and others who care about election integrity, where some of the arguments that we have been using for many, many years to try to make our elections more secure are being picked up and used by people who I would say don't have that same goal. 

Pam:  Well I think people legitimately want to know that elections are righteous, why wouldn't they? But I think the undermining of the public's ability to trust and to know how to trust in elections is really one of the more severe dangers to democracy today. As long as there have been elections, there have been problems, issues, challenges, and even tampering with elections, that's not new. Those issues are different at different points in history. Starts out with who gets the vote and who doesn't. But also back in the day, communities used hand count votes with the whole public watching. And it was very transparent, it was low tech, no problems, but it was also not private, not secret, and there were very few voters. 

Now elections are carried out with software and computerized systems in most aspects of elections and things can be hacked and tampered with and can have failures and bugs and glitches. People need to understand technology touches their elections in many places. How do we know that it's secure? So what we do is look at what are the basics in securing elections. It’s the same as securing anything computerized, it's keeping systems up and running, it's protecting data from both malfeasance and malfunction, and it's being able to recover when something goes wrong, having that resilience.

Cindy:  Could you give us an example of one of the things that people were very worried about, that election officials could easily explain? 

Pam:  Well, probably the biggest one, and this was anticipated, was the fact that not all the votes are going to be done being counted on election night, they're just not. And especially in 2020, where you add one more layer of complexity called a pandemic. So it made a lot of things different. When the ballots come in, if they came in before election day, my county prepares them for counting and runs a tally. First thing after the polls are closed, they can report out those absentee ballots. But those are just the ones they've already gotten in, that's not the polling place ballots, that's not the ones we allow to arrive late as long as they were postmarked on time.

So there's many more ballots to be added into that count, that's just the initial count. I think people don't know that the initial count is not the official count, and that's important to know. It takes a while for all of the ballots to be processed and counted, even to make sure that they were legitimate ballots and included properly in the count. And that end part is called certification of the election. When we certify in each jurisdiction, that's the final count.

Cindy:  And this is the difference between elections in the United States in elections in a lot of places around the world, we vote on a lot of things.

Danny:  It's true.

Cindy:  And we have complicated ballots that might change across the street depending on what precinct or whatever that you're in. Even in a place where people live very close together, there are different kinds of ballots because people are voting for their very local representative as well as all the way up to the federal level. And elections are generally governed as a legal matter locally as well. So the US constitution guarantees your right to vote, but how that happens varies a lot. One of the things that Verified Voting created a long time ago, but which I still think is a tremendously useful tool, is something called The Verifier, which is a website that you can go to and type in where you live and it will tell you exactly what counting technologies are used. 

Danny:  And I think this touches on the key point here, how technology can complicate or even undermine people's trust in what is already a very complicated system. Again, a lot of the conversations in the last election were about, has this been hacked? And how do we prove whether it has or it hasn't been hacked? I know Verified Voting and EFF were very involved in the early effort to require paper records, a paper trail of digital voting technology, what we call voter verified paper records back in the 2000s. So can you just talk a little bit about where the role paper, of all things, plays in a more high tech voting system?

Pam:  It's interesting to note when we got started back in 2004, there were only about eight states with a requirement to use paper and only about three had a requirement to check the paper later with an audit.

Danny:  And when you say paper here, it's literally a printout. You vote and then there's a paper record somewhere that you voted in a certain way.

Pam:  It's a physical record that you get to check to make sure it was marked the way you intended it.

Danny:  Got it.

Pam:  You may be using an interface, a machine that prints that out, but you may be marking a physical ballot by hand as well. And it's that physical record of your intent that is the evidence for the election. 

So here's the thing about paper, you need to know that you can cast an effective ballot and that means you're getting the right ballot, that it's complete, there's no missing candidates or contests on it, it's feasible to mark. If you have to use an interface, that that interface is working, up and running, and that you have a way to check that physical ballot and cast it safely and privately. Then that ballot gets counted along with all the other ballots and you need a way to know it was counted correctly.

And that you can demonstrate that fact to the public to the satisfaction of those who are on the side of the losing candidate or issue, and that's the key. If you have that... This is what was said about the 2020 elections: Chris Krebs, who led the cybersecurity agency at DHS that works on elections, called the 2020 election the most secure in American history. The leg he had to stand on was the fact that almost all jurisdictions were using paper, and almost all jurisdictions were doing some audit to check after the fact. And that's why it matters, you have to have that record.

Danny:  I know that some of the work that's come out of what you've been doing then has been this idea of risk limiting audits.  I'm addressing this to both of you, because I know you both worked on this, but the risk limiting audits and how they work.

Pam:  Audits get done in a variety of industries, there are audits in banking, there's all kinds of audits, the IRS might audit you. It's not always seen as such an attractive word. But in elections, it's really important. What it means is you are counting, you're doing a hand to eye count, you're visually looking at those paper ballots and doing a comparison of a count of a portion of those ballots with the machine count. So software can go wrong, it can be badly programmed, it could have been tampered with. But if you have that physical record that you can then count a portion of and check and make sure it's matching up, and if it's not figure out where the problem is. That's what makes the system resilient.

A risk limiting audit is one that relies on the margin of victory to determine how much you have to count in order to have a strong sense of confidence that you're not seating the wrong person in office. So it's a little bit like polling. If you poll on a particular topic, you want to know how the public feels about something, you don't have to ask every single person, you just ask a percentage of them. You make sure it's a good cross section, you make sure it's a well randomized sample. And all other things being equal, you're going to know how people feel about that topic without having to ask every single person. And with risk limiting audits, it's the same kind of science, it's using a statistical method to determine how much to count.
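Pam's polling analogy can be made concrete with a small sketch. The following is an illustrative, toy version of a ballot-polling risk-limiting audit in the style of the BRAVO method: ballots are drawn at random one at a time, and a sequential likelihood ratio accumulates evidence for or against the reported outcome. The function name, the two-candidate encoding, and the fixed 5% risk limit are all assumptions for illustration, not how any real audit software is specified:

```python
import random

def ballot_polling_audit(reported_winner_share, ballots, risk_limit=0.05):
    """Toy BRAVO-style ballot-polling audit for a two-candidate race.

    reported_winner_share: the winner's reported vote share (must be > 0.5).
    ballots: the physical ballots, encoded True (winner) / False (loser).

    Draw ballots in random order, updating a sequential likelihood ratio.
    Stop and confirm the reported outcome once the ratio exceeds
    1 / risk_limit; if the sample never gets there, escalate to a
    full hand count.
    """
    s = reported_winner_share
    ratio = 1.0
    for n, i in enumerate(random.sample(range(len(ballots)), len(ballots)), 1):
        # Each sampled ballot nudges the evidence toward or away from
        # the reported outcome, weighted by the reported margin.
        ratio *= (s / 0.5) if ballots[i] else ((1 - s) / 0.5)
        if ratio >= 1 / risk_limit:
            return "outcome confirmed", n  # ballots examined so far
    return "full hand count needed", len(ballots)

# A wide reported margin (70/30) is typically confirmed after examining
# only a small fraction of the ballots.
random.seed(1)
ballots = [True] * 7_000 + [False] * 3_000
result, examined = ballot_polling_audit(0.70, ballots)
```

Note how the sketch mirrors what Pam describes: the sample needed grows as the margin shrinks, and in a near-tie the audit escalates toward the full hand count you would want anyway.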

Cindy:  We worked really hard to try to make sure that there was paper. And then we realized that we had to work really hard to make sure that the paper played the role that it needed to play when there are concerns. If you only do this when you're worried that there's a problem, you're really not fixing the situation. It needs to be something that happens every time so people can build their trust in the things.

But also it needed to be lightweight enough that you could do it every single time and you don't end up with these crazy debacles, like we saw in Arizona.  Can you give us an update? How's it going trying to get risk limiting audits regularized in the law? I know this is an area where you guys do a lot of work.

Pam:  Well, this extremely geeky term, risk limiting audits, is actually getting wide traction. So it's good news.

Danny:  Good.

Pam:  People I think are understanding it. And one of the things that we do is support election officials through the process. So maybe their state passes a law that says you'll do risk limiting audits, we help them understand how to do it and answer all the questions that might come up when they're doing it. They then use that to demonstrate to the public that it's working right and it's a tool that they are really adapting to and adopting well. There's more to do. And I think what's important to know is that really any audit is going to have some utility in telling you how your equipment's working. Risk limiting audits are a more robust form of auditing. And they will let you not do as much work if the margin is wide and they will call for more work if the margin is very narrow, but you want that anyway. You might go to a full recount in a very tight margin, talking about Florida 2000, that margin would probably necessitate that full hand recount anyway. But doing a risk limiting audit, you can get to that kind of confidence.

Danny:  “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Cindy:  Let me flip to where I love to go, which is: what does it look like if we get this right? What are the values? What does it look like if we have a world in which we have technology in the right places in our systems, but also one that we can trust?

Pam:  I think that getting it right means that voters know the election was fair because it was conducted securely. And they know how to know that. That they know where the ground truth is and how to figure it out, that they're participating actively in watching, that they're not being hindered by failed technology at whatever point it intersects with the election. Whether it's registration or checking into the polling place or actually using a device to mark your ballot or the counting process, nowhere along the path are they being hindered. And that means more people can participate who want to participate. This doesn't address things like voter suppression; that's a separate issue. And it is an issue about security, because elections really are only secure if everybody who wants to participate gets to, and can cast an effective ballot.

Cindy:  Could you explain why we want to fix the internet, we want to make the world better, and why voting over the internet isn't on the list of things that we think would make a better world?

Pam:  One of the things that we talked about is the importance of the paper, that the voter gets to check at the time they're voting and make sure it represents what they meant to vote for. When you use the internet to transmit votes, you lose that. What arrives at the elections office, if it arrives at the elections office, may or may not represent what the voter thought they intended to vote for. And there's no real way to control for that right now. Maybe in some future, on a different internet that was designed for security and not just for open communication, it's possible to do. But you have all kinds of issues with internet voting that include things like voter authentication attacks, malware on voters' devices, not just in the elections office, denial of service attacks, server penetration, spoofing; all kinds of things can go wrong.

Cindy:  And ballot privacy is tremendously important if you really want to make sure that people can vote freely for who they want. You don't want them subjected to either their boss or the other people who live with them or their community, being able to see how they vote. That's not a free vote, that can often be a coerced vote. So a secret ballot is just a piece of how elections work, not just in the US but in most places of the world for really good reason.

Pam:  The internet has other ways in which it's hazardous to elections health. It can be used for attacks on election officials, which we're seeing a lot these days, attacks on votes, attacks on voters’ registration. We saw in 2016 state databases being tampered with from afar. And other kinds of information hacks. Just really by way of disinformation, attacks on democracy and understanding how to know what you need to know. If we're thinking of about what would the world look like if we got it right, election officials are protected, votes are secure, and voter registration is secure and there's ways for people to check and make sure of that. And fail safes in case something happens last minute. So all of those kinds of things are really important. Fighting disinformation is probably as important as the rest.

Danny:  I thought it was very fascinating in the last couple of elections in the US, talking to people on the cybersecurity side of all of this, how difficult it is to get to the bottom of these things. But one thing really stuck with me, which is that the officials I was talking to said, "Well, look, most people's model of this is someone hacking to change the results to favor a particular person. But in fact, if you want to introduce instability into a country, the best thing you can do is just undermine faith in the system itself. You don't actually have to achieve a result, you just have to inject a sufficient amount of ambiguity into the result. Because once that trust is gone, then it doesn't matter what the result is, because the other side is going to assume something happened behind the scenes." So is part of this to make the whole system transparent in a way that the average person can understand what's going on?

Pam:  We don't expect voters to have confidence, our mission has never been make voters feel confident, it's not about that. It's about giving them justified confidence that the outcome was right. And that's different. 

Cindy:  But let's just say I hear that there's a problem in a critical place. What do I ask myself? And what do I look for to be able to tell whether this is a real problem or perhaps not a real problem that's being overblown or just misunderstood?

Pam:  Well, I think you want to know what the election official says. There are rare exceptions, but nearly all the election officials I know they're simply heroes frankly. They're working with minimal budgets and doing very long hours on very tight deadlines that are unforgiving. But what they do is really to address problems, anticipate problems, avoid them, and if they come up, address them. So you need to know what the election official is saying. If it's observable, go observe. If there's a count happening that you can watch, go watch that count. But you can't get your information, from somebody's cousin on Facebook.

Cindy:  Give us an example of where there was a concern and we were able to put it to rest or there was a concern and it went forward.

Pam:  One of the things we'd hear on election day at election protection was we'd get a call from somewhere and they'd say, "I've marked my ballot and I wanted to go cast it in this scanner like I usually do. But they told me not to and they put it in a separate bin." Why did they do that? Are they taking those ballots away? Are those not going to be counted? What's happening there? And we were able to tell them that there is actually a legit reason for that. What happens sometimes in a ballot scanner is that the bin gets full, the ballots don't fall in a straight line, and the scanner jams. And if it's jammed, you don't want the ballots to get destroyed by trying to keep feeding more and more in. That bin actually has a name, the auxiliary bin; it's the extra bin for when this happens. And it is attached to the ballot box. And what happens once they clear that jam, which they may not be able to do in the middle of the busiest time of voting, is that they feed those ballots through.

Danny:  All right.

Pam:  That actually is a real simple problem with a simple resolution. But when you can tell people, "This is how that works" it puts their mind at rest.

Danny:  Which brings me, I think, to something else that people often, both on the left and right, worry about, which is the companies behind these machines. How can we reassure people that there isn't something being underhand in the very design of the technologies?

Pam:  We used to say that it shouldn't matter if the devil himself designed your voting system; as long as there's paper and you're doing a robust check on the paper, you should be able to solve for that. That's what makes it resilient, and that's why we want to make sure every voter, not just 90% or more, but all of the voters, are living in a jurisdiction where that paper record is there for them to check.

Cindy:  I just think overall, this is technology, it needs to be subjected to the same things we do in other technology to try to continue to make it better. And that includes a funding stream so that when new technology is available, local election officials can actually get it.

Pam:  Elections are woefully underfunded. And there's a conference that happens in California every year called New Laws. This is a conference that election officials hold so that they can examine all the new laws that have been passed that affect how they run elections. It happens every year. So they are constantly and continuously having to update what they do and make changes to what they do. Oftentimes there are unfunded mandates attached to what they do. Asking them to do additional things is hard, especially if you're not going to pay for it. So it's really important that there is federal funding for elections that gets down through to the states and to the counties to support good technology. Meanwhile, internet voting, the most dangerous form of voting, doesn't have to go through any certification, because no one has quite yet been able to write standards for how you would do it securely.

Cindy:  Because you can't right now.

Pam:  Because you can't.

Cindy:  With our current internet.

Pam:  Not that we don't want to, you just can't.

Danny:  I have one more thing to throw in which people often, often say, "Oh, we should do it like this." I'd love to know your opinion on it because our community is often like, "Well, we need an open source voting machine or a voting system. And that would fix a set of problems." Certainly the idea is that would be more transparent and you would feel more confident about it. Do you think that's an answer or part of the answer?

Pam:  I think it's a very good thing. It's what some people might call necessary but not sufficient. You still are going to need to do audits, you're still are going to need paper, you still need a resilient system. But open source helps make sure that you can anticipate some of the issues right away because there are lots of eyes on the problem. With voting technology though, it gets tricky. It's not quite the same as other kinds of open source because who's responsible for what's the most current iteration? This isn't something that people can just keep applying fixes to randomly, there has to be a known version that's being used in a particular election. So there has to be an organization or entity that governs how that's being used.

Cindy:  Understanding how this technology works is tremendously important for all of our security. And it's the classic point that security through obscurity doesn't work, which our friend Adam Savage just reminded us of. This is a whole other wing of secure elections, but the only way you know something is secure is that a bunch of smart people have tried to break it and they can't.

Pam:  Don't leave weak spots if you can help it because if somebody's looking to tamper, they're going to find the weakest point. So it really is crucial to try and secure all parts of our elections. 

Danny:  What's the end game here? You're clearly deeply in the trenches trying to incrementally improve these systems. But do you ever have a dream where you envisage a world where maybe we do have a solution to voting on the internet or we do use a new technology to make things better?

Pam:  Moving towards those options includes things like if you need to vote by mail, you can vote by mail. If you want to vote in person in a polling place, that's available to you. If you need an accessible device, one that's really, really accessible and usable, it's available to you. And it works and it was set up before you got there so it's readily available. I think knowing that every jurisdiction is using a system that's resilient to any kind of failure, hurricane, power outage, anything, that there's a physical ballot to mark, that it's easy to check, it's a usable ballot not confusing, so that you end up missing contest or anything like that. It's designed well, ballot design is really important. All of those small pieces are only possible if there's enough funding for elections. If we believe in our democracy and we believe in having good elections, then that means having good voting systems, good practices, and the resources to carry those out.

Right now, election officials really struggle to recruit enough poll workers for every election. Of course, that got a little harder with the pandemic going on. Many poll workers are of an older age cohort, so we need younger poll workers. And a lot of really smart programs have led to recruiting high school students to be poll workers and it's been magical. So I think really getting everyone engaged, getting everyone to understand where they can find the ground truth about elections, and feeling the confidence that they need to really happily participate and celebrate being part of this democracy, that's the most important thing. And that's what I envision for our future.

Cindy:  Thank you so much for taking the time to talk to us. This has been a fascinating conversation. There's so much talk about elections and election integrity right now. And it's great to have a sane, stable voice that's been here for a long time, which is you and Verified Voting on the case. So thanks.

Pam:  Thank you, Cindy. And thank you, Danny. Thanks for doing this.

Danny: It's always good to talk to somebody like Pam, who has years of experience, especially when a topic is suddenly as controversial or in the public eye as election integrity. I did think given how controversial it is these days, Pam was reassuringly genial. She established that we need to get to a ground truth that everyone can agree on and we need to find ways, technological or not, to reassure voters that whatever the result, the rules were followed.

Cindy:  I especially appreciated the conversation about risk limiting audits as one of the tools that help us feel assured that the right person won the election, the right issue won the election. Especially that these need to be regularized. EFF is audited, lots of organizations are audited. That this is just somewhat built into the way we do elections so that the trust comes from the idea that we're not doing anything special here, we always do audits and we scale them up depending on how close the election is. And that's just one of the pieces of building trust that I think Verified Voting has really spearheaded.

The other thing I really liked was the ways that she helped us think about what we need to do when we hear those rumors of terrible things happening in elections far away. I appreciated that you start with the people who are there. Look for the election officials and the organizations who are actually on the ground in the place where you hear the rumors about looking to them first, but also looking to the election protection orgs, of which Verified Voting is one but not nearly the only one, that are really working year round and working in a nonpartisan way around election integrity.

Danny:  And another leg of the stool is transparency throughout all of this process. It's key for resolving the ambiguity of it. I do appreciate that she pointed out that while open source code is great for giving some element of transparency, it's necessary but not sufficient. You have to wrap it around a trusted system. You can't just solve this by waving the free software license wand all over it.

Cindy:  I also appreciate Pam lifting up the two sides of thinking about the Internet's involvement in our elections. First of all, the things that it's good at, delivering information, making sure ballots get to people. But also what it's not good for, which is actual voting and the fact that we can't get ground truth in internet voting right now. And that part of the reason we can't and what makes this different than doing your banking online is the need for ballot secrecy that has a tremendously long and important role in our elections.

Danny:  But that said, I do think that ultimately there was a positive thread going through all of this. Many things in this area in the United States have gotten better. We have better machines, we have newer machines, we have less secrecy and fewer proprietary barriers around those machines. Often when we ask people about their vision of the future, they get a little bit thrown, because it is hard to describe the positive side. But Pam was pretty specific, and also pointed out perhaps why it's such a challenge: she highlighted that what we want in our future is a diversity of solutions. And of course, you need the correct financial and social support in the rest of society to make that vision happen.

Cindy:  Thanks so much to Pam Smith for joining us and giving us so much honestly hope for the future of our democracy and our voting systems.

Danny: If you like what you heard, follow us on your favorite podcast player and check out our back catalog for more conversations on how to fix the internet. Music for the show was created for us by Reed Mathis and Nat Keefe of BeatMower. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed under Creative Commons Attribution 3.0 Unported licenses by their creators. You can find those creators' names and links to the music in our episode notes or on our website. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. I'm Danny O'Brien.

Cindy:  And I'm Cindy Cohn. Thank you so much for joining us.



Josh Richman

Data Brokers and True the Vote are the Real Villains of "2000 Mules" Movie

1 month ago

2000 Mules is a movie that claims to expose election fraud using phone app location data. While these claims have already been thoroughly debunked, the movie also deserves condemnation for performing wildly invasive research on thousands of people’s location data without their consent or even knowledge. It is a reminder of our need to stop the industry of shady data brokers that enabled this massive privacy invasion.

In its attempt to demonstrate widespread fraud in the 2020 presidential election, 2000 Mules presents the research of True the Vote (TTV). TTV reportedly purchased 10 trillion geolocation data points from an unnamed data broker with the goal of finding a pattern of so-called “mules” that stuffed ballot boxes. The researchers claim that of the hundreds of thousands of people described in the location data, they found thousands of people who were physically present near two kinds of places – ballot boxes and unnamed nonprofits – and that this shows they were “mules.” (The actual number of people whose data was purchased may be much larger—a report by TTV claims the organization collected data from over 500,000 phones near ballot boxes in Atlanta, which is just a fraction of the total data they acquired.)

Putting aside the logical flaws of TTV’s voter fraud claims, the very fact that they were able to buy this much personal location data, covering hundreds of thousands of people’s lives over a span of many months leading up to election day, is appalling. But this is the data broker business model working as intended: by vacuuming up geolocation data from thousands of smartphone apps, data brokers package and sell huge quantities of highly revealing location data to anyone willing to buy it. And TTV is hardly the only customer: the U.S. military, federal agencies, and federal law enforcement all buy from geolocation data brokers. Recently, one data broker was even found selling the location data of people seeking reproductive healthcare, which could soon give states with draconian anti-abortion legislation new digital evidence to identify and prosecute people who seek or provide abortions.

While data brokers often claim that geolocation data is “anonymized,” location data is never anonymous. If a phone’s location data shows where its owner sleeps at night or works during the day, it is very easy to find that owner’s name and address. Even TTV admits as much in a report describing their methodology. Yet, despite claiming that “TTV does not ‘unmask’ or ‘de-anonymize’ owner identities of the devices it tracks,” they handed over device data to the Georgia Bureau of Investigation in an (unsuccessful) attempt to spark a criminal investigation, released a partially-redacted list of device IDs in the same report, and have recently announced that they plan to “release it all” (possibly referring to location data). Further, an entire industry exists for the purpose of de-anonymizing phones based only on device ID.
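To illustrate how little "anonymization" protects anyone, here is a minimal sketch of home-location inference (this uses entirely hypothetical data and is not TTV's actual methodology): a device's most frequent nighttime location is almost always its owner's home, and a street address resolves to a name with a single records lookup.

```python
from collections import Counter
from datetime import datetime

def infer_home(pings):
    """Guess a device's home as its most frequent nighttime location.

    pings: list of (iso_timestamp, lat, lon) tuples for one device ID.
    Coordinates are rounded to 4 decimal places (~11 m), enough to
    resolve a single building.
    """
    night = [
        (round(lat, 4), round(lon, 4))
        for ts, lat, lon in pings
        if 0 <= datetime.fromisoformat(ts).hour < 5  # midnight-5am pings
    ]
    if not night:
        return None
    # The modal nighttime coordinate is the likely home address.
    return Counter(night).most_common(1)[0][0]

# Hypothetical pings: the nighttime points cluster at one address.
pings = [
    ("2020-10-01T01:12:00", 33.74901, -84.38802),
    ("2020-10-01T03:40:00", 33.74903, -84.38799),
    ("2020-10-01T14:05:00", 33.75500, -84.39000),  # daytime, elsewhere
    ("2020-10-02T02:30:00", 33.74899, -84.38801),
]
print(infer_home(pings))  # → (33.749, -84.388)
```

A handful of nighttime pings is all it takes; no name or device owner information is needed in the input, which is exactly why "we don't unmask identities" is an empty reassurance.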

Although the location data acquired by TTV is extremely invasive, it can also be inaccurate. TTV has claimed that it can “pinpoint” devices, and implied that it can show that people were interacting with individual ballot boxes using GPS data alone. But cell-phone GPS data is only accurate to within about 5 meters (roughly 16 feet) under ideal conditions, meaning there is no real way of knowing if a person actually engaged with a specific object within a given time window. Despite this lack of precision, the technology is still precise enough to be dangerous by revealing where a person sleeps at night, if they visit a lawyer's office or doctor's office, or if they've stopped commuting to work in the morning. And commercially-available location data is often marred with much more dramatic inaccuracies, like “teleporting” devices that appear to travel miles in a matter of seconds. Police use of this kind of data, through techniques like “geofencing,” frequently casts false suspicion on innocent people. Relying on commercial location data alone to allege ballot box stuffing is folly.
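The geometry makes the precision problem concrete. A rough sketch with hypothetical coordinates: a reported ping only a few meters from a ballot box is inside the GPS error budget, so it is equally consistent with someone depositing a ballot and with someone walking past on the sidewalk.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical ballot box location vs. a nearby reported ping.
box = (33.749000, -84.388000)
ping = (33.749040, -84.388000)  # a short distance north of the box

d = haversine_m(*box, *ping)
print(f"{d:.1f} m")  # prints about 4.4 m — within typical GPS error
```

Since the measured separation is smaller than the ~5-meter error under ideal conditions, the data cannot distinguish "touched the box" from "stood several meters away," which is the whole weight the movie's claims rest on.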

This is not even the first time an organization has used location data to make public allegations against other people. Just last year, a Catholic priest lost his position after an organization tracked his location and use of the app Grindr through commercially available data. TTV’s privacy-violating research is yet another demonstration that this data is easy to acquire, powerfully invasive, and can be used to harm real people.

This business model of making extremely sensitive location data about the general public readily available for purchase must stop, and stopping it will require both regulatory efforts as well as preventative measures on the side of mobile operating system developers. But there is something you can do right now to protect your data from being useful to data brokers and organizations like TTV: disable Ad ID tracking on your phone.

Will Greenberg

EFF to Court: California Law Does Not Bar Content Moderation on Social Media

1 month ago

Moderated platforms often struggle to draw workable lines between content that is permitted and content that's not. Every online forum for user speech struggles with this problem, not just the dominant social media platforms. Laws protecting companies’ ability to moderate their platforms free from legal mandates benefit the public, and help to create a diverse array of online spaces, with varied editorial views and community norms.

In April, EFF told California’s Sixth District Court of Appeal that the Santa Clara County Superior Court was correct to dismiss a lawsuit by Prager University against YouTube and its parent company, Google. The lawsuit claimed that Google’s content moderation was illegal censorship. Prager University is an educational and media nonprofit with a conservative perspective, which sued under California state law after its arguments were rejected by the U.S. Court of Appeals for the Ninth Circuit in 2020. The Ninth Circuit correctly held that, contrary to Prager’s arguments, YouTube is not a government actor bound by First Amendment limits simply because it hosts a forum for public speech.

Under the California Supreme Court’s decision in Robins v. Pruneyard Shopping Center, there is a narrow test for when a privately owned space must be treated as a public forum and host others’ speech. In our brief, we emphasize that even if that law were applied to non-physical spaces, it would not transform YouTube’s curation of Prager’s videos into prohibited censorship. YouTube and other social media platforms that moderate content are primarily, if not exclusively, expressive venues. Unlike a shopping center or grocery store, an online platform’s editorial vision is often at the core of its business. Additionally, social media platforms are not functionally public forums: they are not open to the public to come and go as they please. YouTube’s action against Prager is one of millions of decisions it has made and continues to make. Those decisions are part of the editorial discretion that platforms have over which users and what content they allow. Prager’s broad interpretation of the law would upend those legal protections, to everyone’s detriment.

Jason Kelley

EFF Opposes Anti-Fiber, Anti-Affordability Legislation in California That Will Raise Prices on Middle Income Users

1 month ago
California taxpayers are funding the creation of rural fiber infrastructure and should be guaranteed affordable access to 21st century broadband infrastructure.

SACRAMENTO, CA – The Electronic Frontier Foundation (EFF) opposes legislation sponsored by AT&T, AB 2749 (Quirk-Silva), that would undermine California’s historic broadband infrastructure law signed by Gov. Gavin Newsom last July.

The bill would amend the newly created grant program for funding broadband access in unserved areas by prohibiting the California Public Utilities Commission from requiring providers to offer affordable services to all residents, as well as by forcing the state to treat AT&T's inferior wireless offerings on equal terms with 21st-century-ready fiber infrastructure. Such provisions run contrary to the established goals of the Biden Administration’s infrastructure effort, which center on delivering affordable fiber broadband to rural Americans.

“At a time when everyone is suffering from record inflation, legislation that will raise people’s prices for broadband infrastructure must be flatly rejected,” said Ernesto Falcon, EFF Senior Legislative Counsel. “California made a historic investment to deliver 21st-century fiber infrastructure to all residents with passage of the state’s infrastructure law last year. Local county governments have already started charting out their infrastructure plans to connect everyone to fiber while committing to affordable prices. AT&T, which opposed the law from the beginning, is now trying to convince legislators to unwind that promise while padding their profits with taxpayer dollars by setting monopoly prices in rural markets.”

Additional concerns EFF has with AB 2749 include:

  1. The anti-rural-fiber and anti-affordability provisions have received no hearing prior to consideration on the Assembly floor. These new provisions were recently added to the bill and only made public on May 19, 2022; the original legislation simply had expedited process requirements for grant review but included no programmatic changes.
  2. The legislation’s “low-income” exemption for affordability is woefully insufficient. If enacted as written, it would mean a rural family of four making more than $55,000 a year will be subject to uncontrolled monopoly pricing with infrastructure that the taxpayer already paid to build.
  3. The legislation undermines the Department of Commerce NTIA’s prioritization of fiber infrastructure by requiring that wireless plans, such as the ones AT&T offers, be included for grants. Current state policy would give preference to fiber infrastructure in rural areas while federal policy explicitly states that only fiber infrastructure will deliver future-proof access.
Tags: Broadband, infrastructure, California
Contact: Ernesto Falcon, Senior Legislative Counsel
Josh Richman