Decoding the California DMV's Mobile Driver's License

6 hours 2 minutes ago

The State of California is currently rolling out a “mobile driver’s license” (mDL), a form of digital identification that raises significant privacy and equity concerns. This post explains the new smartphone application, explores the risks, and calls on the state and its vendor to focus more on protecting the people who use it.

What is the California DMV Wallet? 

The California DMV Wallet app came out in app stores last year as a pilot, offering the ability to store and display your mDL on your smartphone without needing to carry and present a traditional physical document. Several features in the app replicate how we currently present a physical license, displaying key information about our identity—like address, age, date of birth, and driver class.

However, other features in the app provide new ways to present the data on your driver’s license. Right now, most of us take out our driver’s license only occasionally throughout the week. With the app’s QR Code and “add-on” features, however, the incentive to present it more often may grow. This concerns us, given the rise of age verification laws that burden everyone’s access to the internet, and the lack of comprehensive consumer data privacy laws to keep businesses from harvesting and selling identifying and sensitive personal information.

For now, you can use the California DMV Wallet app with TSA at airports, and with select stores that have opted in to an age verification feature called TruAge. That feature generates a separate QR Code for age verification on age-restricted items in stores, like alcohol and tobacco. This is not simply a one-to-one exchange of a physical document for an mDL. Rather, it presents a wider scope of possible mDL usage that calls for expanded protections for the people who use them. While California is not the first state to do this, we will use its app as an example to explain the current landscape.

What’s the QR Code? 

There are two ways to present the information on your mDL: 1) a human-readable presentation, or 2) a QR code.

Scanned with a normal QR code reader, the code displays an alphanumeric string of text that starts with “mdoc:”. For example:

 “mdoc:owBjMS4wAY..." [shortened for brevity]

This “mobile document” (mdoc) format is defined by the International Organization for Standardization’s ISO/IEC 18013-5. The string that follows contains driver’s license data that has been signed by the issuer (i.e., the California DMV), encrypted, and encoded. The data sequence draws on several technical specifications and standards, some open and some closed.
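
For the curious, the first decoding step might look like the minimal Python sketch below. It assumes the text after the “mdoc:” scheme is base64-encoded CBOR, as ISO/IEC 18013-5 suggests; the function name is our own illustration, and cbor2 is a third-party package. Because the license data inside remains encrypted and issuer-signed, this step reveals only the outer envelope, not anyone’s personal information:

    import base64

    import cbor2  # third-party: pip install cbor2

    def peek_mdoc_envelope(qr_text: str):
        """Decode the outer CBOR envelope of an "mdoc:" QR string (sketch)."""
        scheme = "mdoc:"
        if not qr_text.startswith(scheme):
            raise ValueError("not an mdoc URI")
        payload = qr_text[len(scheme):]
        # QR payloads often strip base64 padding; restore it before decoding.
        payload += "=" * (-len(payload) % 4)
        raw = base64.urlsafe_b64decode(payload)
        # The bytes decode as CBOR, but the license data inside is still
        # encrypted and issuer-signed, so only the outer structure is visible.
        return cbor2.loads(raw)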

In the digital identity space, including mDLs, the most referenced and utilized standards are the ISO standard above, the American Association of Motor Vehicle Administrators (AAMVA) standard, and the W3C’s Verifiable Credentials (VCs). These standards are often not siloed, but used together, since each offers directions on data formats, security, and methods of presentation that no single one completely covers. However, ISO and AAMVA are not open standards; they are decided internally. VCs were created for digital credentials generally, not just for mDLs. All of these standards are relatively new and still need time to mature and address potential gaps.

The decrypted data could possibly look like this JSON blob:

         {"family_name":"Doe",
          "given_name":"John",
          "birth_date":"1980-10-10",
          "issue_date":"2020-08-10",
          "expiry_date":"2030-10-30",
          "issuing_country":"US",
          "issuing_authority":"CA DMV",
          "document_number":"I12345678",
          "portrait":"../../../../test/issuance/portrait.b64",
          "driving_privileges":[
            {
               "vehicle_category_code":"A",
               "issue_date":"2022-08-09",
               "expiry_date":"2030-10-20"
            },
            {
               "vehicle_category_code":"B",
               "issue_date":"2022-08-09",
               "expiry_date":"2030-10-20"
            }
          ],
          "un_distinguishing_sign":"USA",
          {
          "weight":70,
          "eye_colour":"hazel",
          "hair_colour":"red",
          "birth_place":"California",
          "resident_address":"2415 1st Avenue",
          "portrait_capture_date":"2020-08-10T12:00:00Z",
          "age_in_years":42,
          "age_birth_year":1980,
          "age_over_18":true,
          "age_over_21":true,
          "issuing_jurisdiction":"US-CA",
          "nationality":"US",
          "resident_city":"Sacramento",
          "resident_state":"California",
          "resident_postal_code":"95818",
          "resident_country": "US"}
}

Application Approach and Scope Problems 

California decided to contract a vendor to build a wallet app rather than use Google Wallet or Apple Wallet (not to be conflated with Google Pay and Apple Pay). A handful of other states use Google and Apple, perhaps because many people already have one or the other. There are concerns about states contracting with large companies to deliver mDLs to the public, including those companies’ control over the public image of digital identity and over device compatibility.

This isn’t the first time a state has contracted with a vendor to build a digital credential application without much public input or consensus. For example, New York State contracted with IBM to roll out the Excelsior app at the beginning of COVID-19 vaccination availability. At the time, EFF raised privacy and other concerns about this form of digital proof of vaccination. The state ultimately paid the vendor a staggering $64 million. While initially proprietary, the application later opened up to the SMART Health Card standard, which is based on the W3C’s VCs. The app was sunset last year. It’s not clear what effect it had on public health, but it’s good that it wound down as social distancing measures relaxed. The infrastructure should be dismantled and the persistent data discarded. If another health crisis emerges, at least a New York law now partially protects the privacy of this kind of data. The New York State Legislature is currently working on an mDL bill after a round-table on the state’s potential pilot. However, the New York DMV has already entered into a $1.75 million contract with the digital identity vendor IDEMIA. It will be a race to see whether protections are established before the pilot deploys.

Scope is also a concern with California’s mDL. The state contracted with Spruce ID to build this app. The company states that its purpose is to empower “organizations to manage the entire lifecycle of digital credentials, such as mobile driver’s licenses, software audit statements, professional certifications, and more.” In the “add-ons” section of the app, TruAge’s age verification QR code is available.  

Another issue is selective disclosure: the technical ability of the credential holder to choose which information to disclose to a person or entity asking for information from their credential. This is a long-time promise from enthusiasts of digital identity. The most-cited example is verifying that the credential holder is over 21 without showing anything else about them, such as the name and address that appear on the face of a traditional driver’s license. But the California DMV Wallet app currently offers few options for selective disclosure:

  • The holder has to agree to TruAge’s terms of service and generate a separate TruAge QR Code.  
  • mDL readers already offer an age verification option for an mDL’s own QR Code. 
  • There is currently no option for the holder to use selective disclosure with their mDL, though the California DMV told us via email that one is planned for a future release. 
  • Lastly, if selective disclosure is coming, it will make the TruAge add-on redundant. 

The over-21 example is only as meaningful as its implementation, including the convenience, privacy, and choice given to the mDL holder. 
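
To make the over-21 example concrete, here is a minimal sketch of one widely used selective disclosure design: salted hash commitments, similar in spirit to the digests in ISO/IEC 18013-5’s mobile security object. Everything here—the function names, the use of SHA-256 via Python’s hashlib—is our own illustration, not the DMV app’s actual implementation. The issuer signs only the digests; the holder later reveals just the claim and salt they choose, and the verifier learns nothing about the rest:

    import hashlib
    import json
    import secrets

    def commit_claims(claims):
        """Issuer side: salt and hash every claim; only the digests get signed."""
        salts = {name: secrets.token_hex(16) for name in claims}
        digests = {
            name: hashlib.sha256((salts[name] + json.dumps(value)).encode()).hexdigest()
            for name, value in claims.items()
        }
        return digests, salts

    def present_claim(name, claims, salts):
        """Holder side: disclose one claim plus its salt, and nothing else."""
        return {"name": name, "value": claims[name], "salt": salts[name]}

    def verify_claim(disclosure, signed_digests):
        """Verifier side: recompute the digest and check it against the signed set."""
        recomputed = hashlib.sha256(
            (disclosure["salt"] + json.dumps(disclosure["value"])).encode()
        ).hexdigest()
        return signed_digests.get(disclosure["name"]) == recomputed

    claims = {"age_over_21": True, "family_name": "Doe", "resident_city": "Sacramento"}
    digests, salts = commit_claims(claims)        # in practice, issuer-signed
    disclosure = present_claim("age_over_21", claims, salts)
    assert verify_claim(disclosure, digests)      # verifier learns only this claim

The random salts keep a verifier from guessing the undisclosed values behind the other digests; a real deployment also needs the issuer’s signature over the digest set.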

TruAge appears to be piloting its product in at least 6 states. With “add-ons”, the wallet app’s scope looks set to expand beyond simply presenting your driver’s license. According to the California DMV’s Office of Public Affairs via email: 

“The DMV is exploring the possibility of offering additional services including disabled person parking placard ID, registration card, vehicle ownership and occupational license in the add-ons in the coming months.” 

This clearly displays how the scope of this pilot may expand and how the mDL could eventually sit within an entire ecosystem of identity documentation. There are privacy-preserving ways to present mDLs, like unlinkable proofs. These mechanisms help prevent a colluding verifier and issuer from establishing whether the holder presented their mDL in different places.
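
To see why unlinkability matters, continue the hypothetical sketch above: the issuer-signed digest set is identical at every presentation, so it acts as a stable identifier that colluding verifiers (or a verifier and the issuer) can join their logs on. Unlinkable proofs let the holder re-randomize each presentation so no such constant value exists:

    # Continuing the sketch above: the signed digest set never changes,
    # so two colluding verifiers can link their records of the same holder.
    fingerprint = json.dumps(digests, sort_keys=True)
    liquor_store_log = {fingerprint: "age checked at liquor store, 9pm"}
    pharmacy_log = {fingerprint: "age checked at pharmacy, 11pm"}
    print(liquor_store_log.keys() & pharmacy_log.keys())  # non-empty: holder tracked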

Privacy and Equity First 

At the time of this post, about 325,000 California residents have the pilot app. We urge states to take their time with creating mDLs, and even to wait for more privacy-protective verification methods to mature. Deploying mDLs should prioritize holder control, privacy, and transparency. The speed of these pilots is possibly influenced by other factors, like the push for mDLs from the U.S. Department of Homeland Security. 

Digital wallet initiatives like eIDAS in the European Union are forging conversations on what user control mechanisms might look like. These might include, for example, “bringing your own wallet” and using an “open wallet” that is secure, private, interoperable, and portable. 

We also need governance that properly limits law enforcement access to information collected by mDLs, and to other information in the smartphones where holders place their mDLs. Further, we need safeguards against these state-created wallets being wedged into problematic realms like age verification mandates as a condition of accessing the internet. 

We should be speed-running privacy and providing better access for all to public services and government-issued documentation. That includes a right to stick with traditional paper or plastic identification, and accommodation for cases where a phone may not be accessible.  

We urge the state to implement selective disclosure and other privacy preserving tools. The app is not required anywhere. It should remain that way no matter how cryptographically secure the system purports to be, or how robust the privacy policies. We also urge all governments to remain transparent and cautious about how they sign on vendors during pilot programs. If a contract takes away the public’s input on future protections, then that is a bad start. If a state builds a pilot without much patience for privacy and public input, then that is also turbulent ground for protecting users going forward.  

Just because digital identity may feel inevitable doesn’t mean the dangers have to be. 

Alexis Hancock

EFF to California Appellate Court: Reject Trial Judge’s Ruling That Would Penalize Beneficial Features and Tools on Social Media

7 hours 55 minutes ago

EFF legal intern Jack Beck contributed to this post.

A California trial court recently departed from wide-ranging precedent and held that Snap, Inc., the maker of Snapchat, the popular social media app, had created a “defective” product by including features like disappearing messages, the ability to connect with people through mutual friends, and even the well-known “Stories” feature. We filed an amicus brief in the appeal, Neville v. Snap, Inc., at the California Court of Appeal, and are calling for the reversal of the earlier decision, which jeopardizes protections for online intermediaries and thus the free speech of all internet users.

At issue in the case is Section 230, without which the free and open internet as we know it would not exist. Section 230 provides that online intermediaries are generally not responsible for harmful user-generated content. Rather, responsibility for what a speaker says online falls on the person who spoke.

The plaintiffs are a group of parents whose children overdosed on fentanyl-laced drugs obtained through communications enabled by Snapchat. Even though the harm they suffered was premised on user-generated content—messages between the drug dealers and their children—the plaintiffs argued that Snapchat is a “defective product.” They highlighted various features available to all users on Snapchat, including disappearing messages, arguing that the features facilitate illegal drug deals.

Snap sought to have the case dismissed, arguing that the plaintiffs’ claims were barred by Section 230. The trial court disagreed, narrowly interpreting Section 230 and erroneously holding that the plaintiffs were merely trying to hold the company responsible for its own “independent tortious conduct—independent, that is, of the drug sellers’ posted content.” In so doing, the trial court departed from congressional intent and wide-ranging California and federal court precedent.

In a petition for a writ of mandate, Snap urged the appellate court to correct the lower court’s distortion of Section 230. The petition rightfully contends that the plaintiffs are trying to sidestep Section 230 through creative pleading. The petition argues that Section 230 protects online intermediaries from liability not only for hosting third-party content, but also for crucial editorial decisions like what features and tools to offer content creators and how to display their content.

We made two arguments in our brief supporting Snap’s appeal.

First, we explained that the features the plaintiffs targeted—and which the trial court gave no detailed analysis of—are regular parts of Snapchat’s functionality with numerous legitimate uses. Take Snapchat’s option to have messages disappear after a certain period of time. There are times when the option to make messages disappear can be crucial for protecting someone’s safety—for example, for dissidents and journalists operating in repressive regimes, or for domestic violence victims reaching out for support. It’s also an important privacy feature for everyday use. Simply put: the ability of users to control who can see their messages and for how long advances internet users’ privacy and security under legitimate circumstances.

Second, we highlighted in our brief that this case is about more than concerned families challenging a big tech company. Our modern communications are mediated by private companies, and so any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate. Should the trial court’s ruling stand, Snapchat and similar platforms will be incentivized to remove features from their online services, resulting in bland and sanitized—and potentially more privacy invasive and less secure—communications platforms. User experience will be degraded as internet platforms are discouraged from creating new features and tools that facilitate speech. Companies seeking to minimize their legal exposure for harmful user-generated content will also drastically increase censorship of their users, and smaller platforms trying to get off the ground will fail to get funding or will be forced to shut down.

There’s no question that what happened in this case was tragic, and people are right to be upset about some elements of how big tech companies operate. But Section 230 is the wrong target. We strongly advocate for Section 230, yet when a tech company does something legitimately irresponsible, the statute still allows for them to be liable—as Snap knows from a lawsuit that put an end to its speed filter.

If the trial court’s decision is upheld, internet platforms would not have a reliable way to limit liability for the services they provide and the content they host. They would face too many lawsuits that cost too much money to defend. They would be unable to operate in their current capacity, and ultimately the internet would cease to exist in its current form. Billions of internet users would lose.

Sophia Cope

Lawmakers: Ban TikTok to Stop Election Misinformation! Same Lawmakers: Restrict How Government Addresses Election Misinformation!

3 days 5 hours ago

In a case being heard Monday at the Supreme Court, 45 Washington lawmakers have argued that government communications with social media sites about possible election interference misinformation are illegal.

They assert that agencies can't even pass on information about websites that state election officials have identified as disinformation, even if they don't request that any action be taken.

Yet just this week the vast majority of those same lawmakers said the government's interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.

On Monday, the Supreme Court will hear oral arguments in Murthy v. Missouri, a case that raises the issue of whether the federal government violates the First Amendment by asking social media platforms to remove or negatively moderate user posts or accounts. In Murthy, the government contends that it can strongly urge social media sites to remove posts without violating the First Amendment, as long as it does not coerce them into doing so under the threat of penalty or other official sanction.

We recognize both the hazards of government involvement in content moderation and the proper role in some situations for the government to share its expertise with the platforms. In our brief in Murthy, we urge the court to adopt a view of coercion that includes indirectly coercive communications designed and reasonably perceived as efforts to replace the platform’s editorial decision-making with the government’s.

And we argue that close cases should go against the government. We also urge the court to recognize that the government may and, in some cases, should appropriately inform platforms of problematic user posts. But it’s the government’s responsibility to make sure that its communications with the platforms are reasonably perceived as being merely informative and not coercive.

In contrast, the Members of Congress signed an amicus brief in Murthy supporting strict limitations on the government’s interactions with social media companies. They argued that the government may hardly communicate at all with social media platforms when it detects problematic posts.

Notably, the specific posts they discuss in their brief include, among other things, posts the U.S. government suspects are foreign election interference. For example, the case includes allegations about the FBI and CISA improperly communicating with social media sites that boil down to the agencies passing on pertinent information, such as websites that had already been identified by state and local election officials as disinformation. The FBI did not request that any specific action be taken and sought only to understand how the sites' terms of service would apply.

As we argued in our amicus brief, these communications don't add up to the government dictating specific editorial changes it wanted. It was providing information useful for sites seeking to combat misinformation. But, following an injunction in Murthy, the government has ceased sharing intelligence about foreign election interference. Without the information, Meta reports its platforms could lack insight into the bigger threat picture needed to enforce its own rules.

The problem of election misinformation on social media also played a prominent role this past week when the U.S. House of Representatives approved a bill that would bar app stores from distributing TikTok as long as it is owned by its current parent company, ByteDance, which is headquartered in Beijing. The bill also empowers the executive branch to identify and similarly ban other apps that are owned by foreign adversaries.

As stated in the House Report that accompanied the so-called "Protecting Americans from Foreign Adversary Controlled Applications Act," the law is needed in part because members of Congress fear the Chinese government “push[es] misinformation, disinformation, and propaganda on the American public” through the platform. Those who supported the bill thus believe that the U.S. can take the drastic step of banning an app for the purposes of preventing the spread of “misinformation and propaganda” to U.S. users. A public report from the Office of the Director of National Intelligence was more specific about the threat, indicating a special concern for information meant to interfere with the November elections and foment societal divisions in the U.S.

Over 30 members of the House who signed the amicus brief in Murthy voted for the TikTok ban. So, many of the same people who supported the U.S. government’s efforts to rid a social media platform of foreign misinformation also argued that the government’s ability to address the very same content on other social media platforms should be sharply limited.

Admittedly, there are significant differences between the two positions. The government does have greater limits on how it regulates the speech of domestic companies than it does the speech of foreign companies.

But if the true purpose of the bill is to get foreign election misinformation off of social media, the inconsistency in the positions is clear. If ByteDance sells TikTok to domestic owners so that TikTok can stay in business in the U.S., and the same propaganda appears on the site, is the U.S. now powerless to do anything about it? If so, that would seem to undercut the importance of keeping the information away from U.S. users, which is one of the chief purposes of the TikTok ban.

We believe there is an appropriate role for the government to play, within the bounds of the First Amendment, when it truly believes that there are posts designed to interfere with U.S. elections or undermine U.S. security on any social media platform. It is a far more appropriate role than banning a platform altogether.

David Greene

The SAFE Act to Reauthorize Section 702 is Two Steps Forward, One Step Back

3 days 10 hours ago

Section 702 of the Foreign Intelligence Surveillance Act (FISA) is one of the most insidious and secretive mass surveillance authorities still in operation today. The Security and Freedom Enhancement (SAFE) Act would make some much-needed and long fought-for reforms, but it also does not go nearly far enough to rein in a surveillance law that the federal government has abused time and time again.

You can read the full text of the bill here.

While Section 702 was first sold as a tool necessary to stop foreign terrorists, it has since become clear that the government uses the communications it collects under this law as a domestic intelligence source. The program was intended to collect communications of people outside of the United States, but because we live in an increasingly globalized world, the government retains a massive trove of communications between people overseas and U.S. persons. Now, it is this U.S. side of digital conversations that is routinely sifted through by domestic law enforcement agencies—all without a warrant.

The SAFE Act, like other reform bills introduced this Congress, attempts to roll back some of this warrantless surveillance. Despite its glaring flaws and omissions, in a Congress as dysfunctional as this one it might be the best bill that privacy-conscious people and organizations can hope for. For instance, it does not go as far as the Government Surveillance Reform Act, which EFF supported in November 2023. But imposing meaningful checks on the Intelligence Community (IC) is an urgent priority, especially because the IC has been trying to sneak a "clean" reauthorization of Section 702 into government funding bills, and has even sought to have the renewal happen in secret in the hopes of keeping its favorite mass surveillance law intact. The administration is also reportedly planning to seek another year-long extension of the law without any congressional action. All the while, those advocating for renewing Section 702 have toyed with as many talking points as they can—from cybercrime or human trafficking to drug smuggling, terrorism, or even solidarity activism in the United States—to see which issue would scare people enough to allow for a clean reauthorization of mass surveillance.

So let’s break down the SAFE Act: what’s good, what’s bad, and what aspects of it might actually cause more harm in the future. 

What’s Good about the SAFE Act

The SAFE Act would do at least two things that reform advocates have pressured Congress to include in any proposed bill to reauthorize Section 702. This speaks to the growing consensus that some reforms are absolutely necessary if this power is to remain operational.

The first and most important reform the bill would make is to require the government to obtain a warrant before accessing the content of communications for people in the United States. Currently, relying on Section 702, the government vacuums up communications from all over the world, and a huge number of those intercepted communications are to or from U.S. persons. Those communications sit in a massive database. Both intelligence agencies and law enforcement have conducted millions of queries of this database for U.S.-based communications—all without a warrant—in pursuit of both national security concerns and run-of-the-mill criminal investigations. The SAFE Act would prohibit “warrantless access to the communications and other information of United States persons and persons located in the United States.” While this is the bare minimum a reform bill should do, it’s an important step. It is crucial to note, however, that this does not stop the IC or law enforcement from querying to see if the government has collected communications from specific individuals under Section 702—it merely stops them from reading those communications without a warrant.

The second major reform the SAFE Act provides is to close the “data broker loophole,” which EFF has been calling attention to for years. As one example, mobile apps often collect user data to sell it to advertisers on the open market. The problem is law enforcement and intelligence agencies increasingly buy this private user data, rather than obtain a warrant for it. This bill would largely prohibit the government from purchasing personal data it would otherwise need a warrant to collect. This provision does include a potentially significant exception for situations where the government cannot exclude Americans’ data from larger “compilations” that include foreigners’ data. This speaks not only to the unfair bifurcation of rights between Americans and everyone else under much of our surveillance law, but also to the risks of allowing any large-scale acquisition from data brokers at all. The SAFE Act would require the government to minimize collection, search, and use of any Americans’ data in these compilations, but it remains to be seen how effective these prohibitions will be. 

What’s Missing from the SAFE Act

The SAFE Act is missing a number of important reforms that we’ve called for—and which the Government Surveillance Reform Act would have addressed. These reforms include ensuring that individuals harmed by warrantless surveillance are able to challenge it in court, both in civil lawsuits like those brought by EFF in the past, and in criminal cases where the government may seek to shield its use of Section 702 from defendants. After nearly 14 years of Section 702 and countless court rulings slamming the courthouse door on such legal challenges, it’s well past time to ensure that those harmed by Section 702 surveillance can have the opportunity to challenge it.

New Problems Potentially Created by the SAFE Act

While there may often be good reason to protect the secrecy of FISA proceedings, unofficial disclosures about these proceedings have, from the very beginning, played an indispensable role in reforming otherwise uncontested abuses of surveillance authorities. From the Bush administration’s warrantless wiretapping program, through the Snowden disclosures, up to present-day front-page reporting in the New York Times about FISA applications, oversight of the intelligence community would have been extremely difficult, if not impossible, without these disclosures.

Unfortunately, the SAFE Act contains at least one truly nasty addition to current law: an entirely new crime that makes it a felony to disclose “the existence of an application” for foreign intelligence surveillance or any of the application’s contents. In addition to explicitly adding to the existing penalties in the Espionage Act—itself highly controversial—this new provision seems aimed at discouraging leaks by increasing the potential sentence to eight years in prison. There is no requirement that prosecutors show that the disclosure harmed national security, nor any consideration of the public interest. Under the present climate, there’s simply no reason to give prosecutors even more tools like this one to punish whistleblowers who are seen as going through improper channels.

EFF always aims to tell it like it is. This bill has some real improvements, but it’s nowhere near the surveillance reform we all deserve. On the other hand, the IC and its allies in Congress continue to have significant leverage to push fake reform bills, so the SAFE Act may well be the best we’re going to get. Either way, we’re not giving up the fight.  

Matthew Guariglia

Thousands of Young People Told Us Why the Kids Online Safety Act Will Be Harmful to Minors

3 days 11 hours ago

With KOSA passed, the information i can access as a minor will be limited and censored, under the guise of "protecting me", which is the responsibility of my parents, NOT the government. I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for! - Alan, 15  

___________________

If information is put through a filter, that’s bad. Any and all points of view should be accessible, even if harmful so everyone can get an understanding of all situations. Not to mention, as a young neurodivergent and queer person, I’m sure the information I’d be able to acquire and use to help myself would be severely impacted. I want to be free like anyone else. - Sunny, 15 

 ___________________

How young people feel about the Kids Online Safety Act (KOSA) matters. It will primarily affect them, and many, many teenagers oppose the bill. Some have been calling and emailing legislators to tell them how they feel. Others have been posting their concerns about the bill on social media. These teenagers have been baring their souls to explain how important social media access is to them, but lawmakers and civil liberties advocates, including us, have mostly been the ones talking about the bill and about what’s best for kids, and often we’re not hearing from minors in these debates at all. We should be: these young voices are essential to any conversation about KOSA.

So, a few weeks ago, we asked some of the young advocates fighting to stop the Kids Online Safety Act a few questions:  

- How has access to social media improved your life? What do you gain from it? 

- What would you lose if KOSA passed? How would your life be different if it was already law? 

Within a week we received over 3,000 responses. As of today, we have received over 5,000.

These answers are critical for legislators to hear. Below, you can read some of these comments, sorted into the following themes (though they often overlap):  

These comments show that thoughtful young people are deeply concerned about the proposed law's fallout, and that many who would be affected think it will harm them, not help them. Over 700 of those who responded reported that they were currently sixteen or under—the age group to which KOSA’s liability provisions apply. The average age of those who answered the survey was 20 (of those who gave their age—the question was optional, and about 60% of people responded). In addition to these two questions, we also asked those taking the survey if they were comfortable sharing their email address for any journalist who might want to speak with them; unfortunately, coverage usually mentions only one or two of the young people who would be most affected. So, journalists: we have contact info for over 300 young people who would be happy to speak to you about why social media matters to them, and why they oppose KOSA.

Individually, these answers show that social media, despite its current problems, offers an overall positive experience for many, many young people. It helps people living in remote areas find connection; it helps those in abusive situations find solace and escape; it offers education in history, art, health, and world events for those who wouldn’t otherwise have it; it helps people learn about themselves and the world around them. (Research also suggests that social media is more helpful than harmful for young people.) 

And as a whole, these answers tell a story that is 180° different from that which is regularly told by politicians and the media. In those stories, it is accepted as fact that the majority of young people’s experiences on social media platforms are harmful. But from these responses, it is clear that many, many young people also experience help, education, friendship, and a sense of belonging there—precisely because social media allows them to explore, something KOSA is likely to hinder. These kids are deeply engaged in the world around them through these platforms, and genuinely concerned that a law like KOSA could take that away from them and from other young people.  

Here are just a few of the thousands of reasons they’re worried.  

Note: We are sharing individuals’ opinions, without editing. We do not necessarily endorse them or their interpretation of KOSA.

KOSA Will Harm Rights That Young People Know They Ought to Have 

One of the most important things that would be lost is the freedom of speech - a given right that is crucial to a healthy, functioning environment. Not every speech is morally okay, but regulating what speech is deemed "acceptable" constricts people's rights; a clear violation of the First Amendment. Those who need or want to access certain information are not allowed to - not because the information will harm them or others, but for the reason that a certain portion of people disagree with the information. If the country only ran on what select people believed, we would be a bland, monotonous place. This country thrives on diversity, whether it be race, gender, sex, or any other personal belief. If KOSA was passed, I would lose my safe spaces, places where I can go to for mental health, places that make me feel more like a human than just some girl. No more would I be able to fight for ideas and beliefs I hold, nor enjoy my time on the internet either. - Anonymous, 16 

 ___________________

I, and many of my friends, grew up in an Internet where remaining anonymous was common sense, and where revealing your identity was foolish and dangerous, something only to be done sparingly, with a trusted ally at your side, meeting at a common, crowded public space like a convention or a college cafeteria. This bill spits in the face of these very practical instincts, forces you to dox yourself, and if you don’t want to be outed, you must be forced to withdraw from your communities. From your friends and allies. From the space you have made for yourself, somewhere you can truly be yourself with little judgment, where you can find out who you really are, alongside people who might be wildly different from you in some ways, and exactly like you in others. I am fortunate to have parents who are kind and accepting of who I am. I know many people are nowhere near as lucky as me. - Maeve, 25 

 ___________________ 

I couldn't do activism through social media and I couldn't connect with other queer individuals due to censorship and that would lead to loneliness, depression other mental health issues, and even suicide for some individuals such as myself. For some of us the internet is the only way to the world outside of our hateful environments, our only hope. Representation matters, and by KOSA passing queer children would see less of age appropriate representation and they would feel more alone. Not to mention that KOSA passing would lead to people being uninformed about things and it would start an era of censorship on the internet and by looking at the past censorship is never good, its a gateway to genocide and a way for the government to control. – Sage, 15 

  ___________________

Privacy, censorship, and freedom of speech are not just theoretical concepts to young people. Their rights are often already restricted, and they see the internet as a place where they can begin to learn about, understand, and exercise those freedoms. They know why censorship is dangerous; they understand why forcing people to identify themselves online is dangerous; they know the value of free speech and privacy, and they know what they’ve gained from an internet that doesn’t have guardrails put up by various government censors.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Could Impact Young People’s Artistic Education and Opportunities 

I found so many friends and new interests from social media. Inspirations for my art I find online, like others who have an art style I admire, or models who do poses I want to draw. I can connect with my friends, send them funny videos and pictures. I use social media to keep up with my favorite YouTubers, content creators, shows, books. When my dad gets drunk and hard to be around or my parents are arguing, I can go on YouTube or Instagram and watch something funny to laugh instead. It gives me a lot of comfort, being able to distract myself from my sometimes upsetting home life. I get to see what life is like for the billions of other people on this planet, in different cities, states, countries. I get to share my life with my friends too, freely speaking my thoughts, sharing pictures, videos, etc.  
I have found my favorite YouTubers from other social media platforms like tiktok, this happened maybe about a year ago, and since then I think this is the happiest I have been in a while. Since joining social media I have become a much more open minded person, it made me interested in what others lives are like. It also brought awareness and educated me about others who are suffering in the world like hunger, poor quality of life, etc. Posting on social media also made me more confident in my art, in the past year my drawing skills have immensely improved and I’m shocked at myself. Because I wanted to make better fan art, inspire others, and make them happy with my art. I have been introduce to many styles of clothing that have helped develop my own fun clothing style. It powers my dreams and makes me want to try hard when I see videos shared by people who have worked hard and made it. - Anonymous, 15 

  ___________________

As a kid I was able to interact in queer and disabled and fandom spaces, so even as a disabled introverted child who wasn’t popular with my peers I still didn’t feel lonely. The internet is arguably a safer way to interact with other fans of media than going to cons with strangers, as long as internet safety is really taught to kids. I also get inspiration for my art and writing from things I’ve only discovered online, and as an artist I can’t make money without the internet and even minors do commissions. The issue isn’t that the internet is unsafe, it’s that internet safety isn’t taught anymore. - Rachel, 19 

  ___________________

i am an artist, and sharing my things online makes me feel happy and good about myself. i love seeing other people online and knowing that they like what i make. when i make art, im always nervous to show other people. but when i post it online i feel like im a part of something, and that im in a community where i feel that i belong. – Anonymous, 15 

 ___________________ 

Social media has saved my life, just like it has for many young people. I have found safe spaces and motivation because of social media, and I have never encountered anything negative or harmful to me. With social media I have been able to share my creativity (writing, art, and music) and thoughts safely without feeling like I'm being held back or oppressed. My creations have been able to inspire and reach so many people, just like how other people's work have reached me. Recently, I have also been able to help the library I volunteer at through the help of social media. 
What I do in life and all my future plans (career, school, volunteer projects, etc.) surrounds social media, and without it I wouldn't be able to share what I do and learn more to improve my works and life. I wouldn't be able to connect with wonderful artists, musicians, and writers like I do now. I would be lost and feel like I don't have a reason to do what I do. If KOSA is passed, I wouldn't be able to get the help I need in order to survive. I've made so many friends who have been saved because of social media, and if this bill gets passed they will also be affected. Guess what? They wouldn't be able to get the help they need either. 
If KOSA was already a law when I was just a bit younger, I wouldn't even be alive. I wouldn't have been able to reach help when I needed it. I wouldn't have been able to share my mind with the world. Social media was the reason I was able to receive help when I was undergoing abuse and almost died. If KOSA was already a law, I would've taken my life, or my abuser would have done it before I could. If KOSA becomes a law now, I'm certain that the likeliness of that happening to kids of any age will increase. – Anonymous, 15 

  ___________________

A huge number of young artists say they use social media to improve their skills, and that in many cases it was the avenue by which they discovered their interest in a type of art or music. Young people are rightfully worried that the magic moment when you first stumble upon an artist or a style that changes your entire life will become less and less common for future generations if KOSA passes. We agree: KOSA would likely lead platforms to limit that opportunity for young people to experience unexpected things, forcing their online experiences into a much smaller box under the guise of protecting them.  

Also, a lot of young people told us they wanted to start, or were already developing, an online business—often an art business. Under KOSA, young people could have fewer opportunities in the online communities where artists share their work and build a customer base, and a harder time navigating the various communities where they can share their art.  

KOSA Will Hurt Young People’s Ability to Find Community Online 

Social media has allowed me to connect with some of my closest friends ever, probably deeper than some people in real life. i get to talk about anything i want unimpeded and people accept me for who i am. in my deepest and darkest moments, knowing that i had somewhere to go was truly more relieving than anything else. i've never had the courage to commit suicide, but still, if it weren't for social media, i probably wouldn't be here, mentally & emotionally at least. 
i'd lose the space that accepts me. i'd lose the only place where i can be me. in life, i put up a mask to appease my parents and in some cases, my friends. with how extreme the u.s. is becoming these days, i could even lose my life. i would live my days in fear. i'm terrified of how fast this country is changing and if this bill passes, saying i would fall into despair would be an understatement. people say to "be yourself", but they don't understand that if i were to be my true self tomorrow, i could be killed. – march, 14 

 ___________________ 

Without the internet, and especially the rhythm gaming community which I found through Discord, I would've most likely killed myself at 13. My time on here has not been perfect, as has anyone's but without the internet I wouldn't have been the person I am today. I wouldn't have gotten help recognizing that what my biological parents were doing to me was abuse, the support I've received for my identity (as queer youth) and the way I view things, with ways to help people all around the world and be a more mindful ally, activist, and thinker, and I wouldn't have met my mom. 
I love my chosen mom. We met at a Dance Dance Revolution tournament in April of last year and have been friends ever since. When I told her that she was the first person I saw as a mother figure in my life back in November, I was bawling my eyes out. I'm her mije, and she's my mom. love her so much that saying that doesn't even begin to express exactly how much I love her.  
I love all my chosen family from the rhythm gaming community, my older sisters and siblings, I love them all. I have a few, some I talk with more regularly than others. Even if they and I may not talk as much as we used to, I still love them. They mean so much to me. – X86, 15 

  ___________________

i spent my time in public school from ages 9-13 getting physically and emotionally abused by special ed aides, i remember a few months after i left public school for good, i saw a post online that made me realize that what i went through wasn’t normal. if it wasn’t for the internet, i wouldn’t have come to terms with my autism, i would have still hated myself due to not knowing that i was genderqueer, my mental health would be significantly worse, and i would probably still be self harming, which is something i stopped doing at 13. besides the trauma and mental health side of things, something important to know is that spaces for teenagers to hang out have been eradicated years ago, minors can’t go to malls unless they’re with their parents, anti loitering laws are everywhere, and schools aren’t exactly the best place for teenagers to hang out, especially considering queer teens who were murdered by bullies (such as brianna ghey or nex benedict), the internet has become the third space that teenagers have flocked to as a result. – Anonymous, 17 

  ___________________

KOSA is anti-community. People online don’t only connect over shared interests in art and music—they also connect over the difficult parts of their lives. Over and over again, young people told us that one of the most valuable parts of social media was learning that they were not alone in their troubles. Finding others in similar circumstances gave them a community, as well as ideas to improve their situations, and even opportunities to escape dangerous situations.  

KOSA will make this harder. As platforms limit the types of recommendations and public content they feel safe sharing with young people, those who would otherwise find communities or potential friends will not be as likely to do so. A number of young people explained that they simply would never have been able to overcome some of the worst parts of their lives alone, and they are concerned that KOSA’s passage would stop others from ever finding the help they did. 

KOSA Could Seriously Hinder People’s Self-Discovery  

I am a transgender person, and when I was a preteen, looking down the barrel of the gun of puberty, I was miserable. I didn't know what was wrong I just knew I'd rather do anything else but go through puberty. The internet taught me what that was. They told me it was okay. There were things like haircuts and binders that I could use now and medical treatment I could use when I grew up to fix things. The internet was there for me too when I was questioning my sexuality and again when my mental health was crashing and even again when I was realizing I'm not neurotypical. The internet is a crucial source of information for preteens and beyond and you cannot take it away. You cannot take away their only realistically reachable source of information for what the close-minded or undereducated adults around them don't know. - Jay, 17 

   ___________________

Social media has improved my life so much and led to how I met my best friend, I’ve known them for 6+ years now and they mean so much to me. Access to social media really helps me connect with people similar to me and that make me feel like less of an outcast among my peers, being able to communicate with other neurodivergent queer kids who like similar interests to me. Social media makes me feel like I’m actually apart of a community that won’t judge me for who I am. I feel like I can actually be myself and find others like me without being harassed or bullied, I can share my art with others and find people like me in a way I can’t in other spaces. The internet & social media raised me when my parents were busy and unavailable and genuinely shaped the way I am today and the person I’ve become. – Anonymous, 14 

   ___________________

The censorship likely to come from this bill would mean I would not see others who have similar struggles to me. The vagueness of KOSA allows for state attorney generals to decide what is and is not appropriate for children to see, a power that should never be placed in the hands of one person. If issues like LGBT rights and mental health were censored by KOSA, I would have never realized that I AM NOT ALONE. There are problems with children and the internet but KOSA is not the solution. I urge the senate to rethink this bill, and come up with solutions that actually protect children, not put them in more danger, and make them feel ever more alone. - Rae, 16 

  ___________________ 

KOSA would effectively censor anything the government deems "harmful," which could be anything from queerness and fandom spaces to anything else that deviates from "the norm." People would lose support systems, education, and in some cases, any way to find out about who they are. I'll stop beating around the bush, if it wasn't for places online, I would never have discovered my own queerness. My parents and the small circle of adults I know would be my only connection to "grown-up" opinions, exposing me to a narrow range of beliefs I would likely be forced to adopt. Any kids in positions like mine would have no place to speak out or ask questions, and anything they bring up would put them at risk. Schools and families can only teach so much, and in this age of information, why can't kids be trusted to learn things on their own? - Anonymous, 15 

   ___________________

Social media helped me escape a very traumatic childhood and helped me connect with others. quite frankly, it saved me from being brainwashed. – Milo, 16 

   ___________________

Social media introduced me to lifelong friends and communities of like-minded people; in an abusive home, online social media in the 2010s provided a haven of privacy, safety, and information. I honed my creativity, nurtured my interests and developed my identity through relating and talking to people to whom I would otherwise have been totally isolated from. Also, unrestricted internet access actually taught me how to spot shady websites and inappropriate content FAR more effectively than if censorship had been at play like it is today. 
A couple of the friends I made online, as young as thirteen, were adults; and being friends with adults who knew I was a child, who practiced safe boundaries with me yet treated me with respect, helped me recognise unhealthy patterns in predatory adults. I have befriended mothers and fathers online through games and forums, and they were instrumental in preventing me being groomed by actual pedophiles. Had it not been for them, I would have wound up terribly abused by an "in real life" adult "friend". Instead, I recognised the differences in how he was treating me (infantilising yet praising) vs how my adult friends had treated me (like a human being), and slowly tapered off the friendship and safely cut contact. 
As I grew older, I found a wealth of resources on safe sex and sexual health education online. Again, if not for these discoveries, I would most certainly have wound up abused and/or pregnant as a teenager. I was never taught about consent, safe sex, menstruation, cervical health, breast health, my own anatomy, puberty, etc. as a child or teenager. What I found online-- typically on Tumblr and written with an alarming degree of normalcy-- helped me understand my body and my boundaries far more effectively than "the talk" or in-school sex ed ever did. I learned that the things that made me panic were actually normal; the ins and outs of puberty and development, and, crucially, that my comfort mattered most. I was comfortable and unashamed of being a virgin my entire teen years because I knew it was okay that I wasn't ready. When I was ready, at twenty-one, I knew how to communicate with my partner and establish safe boundaries, and knew to check in and talk afterwards to make sure we both felt safe and happy. I knew there was no judgement for crying after sex and that it didn't necessarily mean I wasn't okay. I also knew about physical post-sex care; e.g. going to the bathroom and cleaning oneself safely. 
AGAIN, I would NOT have known any of this if not for social media. AT ALL. And seeing these topics did NOT turn me into a dreaded teenage whore; if anything, they prevented it by teaching me safety and self-care. 
I also found help with depression, anxiety, and eating disorders-- learning to define them enabled me to seek help. I would not have had this without online spaces and social media. As aforementioned too, learning, sometimes through trial of fire, to safely navigate the web and differentiate between safe and unsafe sites was far more effective without censored content. Censorship only hurts children; it has never, ever helped them. How else was I to know what I was experiencing at home was wrong? To call it "abuse"? I never would have found that out. I also would never have discovered how to establish safe sexual AND social boundaries, or how to stand up for myself, or how to handle harassment, or how to discover my own interests and identity through media. The list goes on and on and on. – June, 21 

   ___________________

One of the claims that KOSA’s proponents make is that it won’t stop young people from finding the things they already want to search for. But we read dozens and dozens of comments from people who didn’t know something about themselves until they heard others discussing it—a mental health diagnosis, their sexuality, that they were being abused, that they had an eating disorder, and much, much more.  

Censorship that stops you from looking through a library is still dangerous even if it doesn’t stop you from checking out the books you already know. It’s still a problem to stop young people in particular from finding new things that they didn’t know they were looking for.   

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Could Stop Young People from Getting Accurate News and Valuable Information 

Social media taught me to be curious. It taught me caution and trust and faith and that simply being me is enough. It brought me up where my parents failed, it allowed me to look into stories that assured me I am not alone where I am now. I would be fucking dead right now if it weren't for the stories of my fellow transgender folk out there, assuring me that it gets better.  
I'm young and I'm not smart but I know without social media, myself and plenty of the people I hold dear in person and online would not be alive. We wouldn't have news of the atrocities happening overseas that the news doesn't report on, we wouldn't have mentors to help teach us where our parents failed. - Anonymous, 16 

  ___________________ 

Through social media, I've learned about news and current events that weren't taught at school or home, things like politics or controversial topics that taught me nuance and solidified my concept of ethics. I learned about my identity and found numerous communities filled with people I could socialize with and relate to. I could talk about my interests with people who loved them just as much as I did. I found out about numerous different perspectives and cultures and experienced art and film like I never had before. My empathy and media literacy greatly improved with experience. I was also able to gain skills in gathering information and proper defences against misinformation. More technically, I learned how to organize my computer and work with files, programs, applications, etc; I could find guides on how to pursue my hobbies and improve my skills (I'm a self-taught artist, and I learned almost everything I know from things like YouTube or Tumblr for free). - Anonymous, 15 

  ___________________ 

A huge portion of my political identity has been shaped by news and information I could only find on social media because the mainstream news outlets wouldn’t cover it. (Climate Change, International Crisis, Corrupt Systems, etc.) KOSA seems to be intentionally working to stunt all of this. It’s horrifying. So much of modern life takes place on the internet, and to strip that away from kids is just another way to prevent them from formulating their own thoughts and ideas that the people in power are afraid of. Deeply sinister. I probably would have never learned about KOSA if it were in place! That’s terrifying! - Sarge, 17 

  ___________________

I’ve met many of my friends from [social media] and it has improved my mental health by giving me resources. I used to have an eating disorder and didn’t even realize it until I saw others on social media talking about it in a nuanced way and from personal experience. - Anonymous, 15 

   ___________________

Many young people told us that they’re worried KOSA will result in more biased news online, and a less diverse information ecosystem. This seems inevitable—we’ve written before that almost any content could fit into the categories that politicians believe will cause minors anxiety or depression, and so carrying that content could be legally dangerous for a platform. That could include truthful news about what’s going on in the world, including wars, gun violence, and climate change. 

“Preventing and mitigating” depression and anxiety isn’t a goal we demand of newspapers, broadcasters, or any other outlet, and it shouldn’t be required of social media platforms. People have a right to access information—both news and opinion—in an open and democratic society, and sometimes that information is depressing or anxiety-inducing. To truly “prevent and mitigate” self-destructive behaviors, we must look beyond the media to systems that allow all humans to have self-respect, a healthy environment, and healthy relationships—not hide truthful information simply because it is disappointing. 

Young People’s Voices Matter 

While KOSA’s sponsors intend to help these young people, those who responded to the survey don’t see it that way. You may have noticed that it’s impossible to sort these complex and detailed responses into single categories—many childhood abuse victims found both help and arts education on social media; many children connected to communities that they otherwise couldn’t have reached and learned something essential about themselves in doing so. Many understand that KOSA would endanger their privacy, and also know it could harm marginalized kids the most. 

In reading thousands of these comments, it becomes clear that social media was not in itself a solution to the issues these young people experienced. What helped them was other people. Social media was where they were able to find and stay connected with those friends, communities, artists, activists, and educators. When you look at it this way, of course KOSA seems absurd: social media has become an essential element of young people’s lives, and they are scared to death that if the law passes, that part of their lives will disappear. Older teens and twenty-somethings, meanwhile, worry that if the law had passed a decade ago, they never would have become the people they are today. And all of these fears are reasonable. 

There were thousands more comments like those above. We hope this helps balance the conversation, because if young people’s voices are suppressed now—and if KOSA becomes law—it will be much more difficult for them to elevate their voices in the future.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Jason Kelley

Analyzing KOSA’s Constitutional Problems In Depth 

3 days 11 hours ago
Why EFF Does Not Think Recent Changes Ameliorate KOSA’s Censorship 

The latest version of the Kids Online Safety Act (KOSA) did not change our critical view of the legislation. The changes have led some organizations to drop their opposition to the bill, but we still believe it is a dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like. We respect that different groups can come to their own conclusions about how KOSA will affect everyone’s ability to access lawful speech online. EFF, however, remains steadfast in our long-held view that imposing a vague duty of care on a broad swath of online services to mitigate specific harms based on the content of online speech will result in those services imposing age verification and content restrictions. At least one group has characterized EFF’s concerns as spreading “disinformation.” We are not. But to ensure that everyone understands why EFF continues to oppose KOSA, we wanted to break down our interpretation of the bill in more detail and compare our views to those of others—both advocates and critics.  

Below, we walk through some of the most common criticisms of our position—and of the bill itself—to help explain our view of its likely impacts. 

KOSA’s Effectiveness  

First, and most importantly: We have serious disagreements with KOSA’s advocates on whether it will prevent future harm to children online. We are deeply saddened by the stories so many supporters and parents have shared about how their children were harmed online. And we want to keep talking to those parents, supporters, and lawmakers about ways in which EFF can work with them to prevent harm to children online, just as we will continue to talk with people who advocate for the benefits of social media. We believe comprehensive privacy protections are a better way to begin addressing the harms done to young people (and adults) who have been targeted by platforms’ predatory business practices, and we have long advocated for them. 

EFF does not think KOSA is the right approach to protecting children online, however. As we’ve said before, we think that in practice, KOSA is likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech about addiction, eating disorders, bullying, and other important topics. We also think those restrictions will stifle minors who are trying to find their own communities online. We do not think the language added to KOSA to address that censorship concern solves the problem, nor does focusing KOSA’s regulation on the design elements of online services address the bill’s First Amendment problems. 

Our views of KOSA’s harmful consequences are grounded in EFF’s 34-year history of both making policy for the internet and seeing how legislation plays out once it’s passed. This is also not our first time seeing the vast difference between how a piece of legislation is promoted and what it does in practice. Recently we saw this same dynamic with FOSTA/SESTA, which was promoted by politicians and the parents of child sex trafficking victims as the way to prevent future harms. Sadly, even the politicians who initially championed it now agree that the law not only failed to reduce sex trafficking online, but also created additional dangers for those same victims, as well as others. 

KOSA’s Duty of Care  

KOSA’s core component requires an online platform or service that is likely to be accessed by young people to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” various harms to minors. These enumerated harms include: 

  • mental health disorders (anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors) 
  • patterns of use that indicate or encourage addiction-like behaviors  
  • physical violence, online bullying, and harassment 

Based on our understanding of the First Amendment and how all online platforms and services regulated by KOSA will navigate their legal risk, we believe that KOSA will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms KOSA identifies.  

A line of U.S. Supreme Court cases involving efforts to prevent book sellers from disseminating certain speech, which resulted in broad, unconstitutional censorship, shows why KOSA is unconstitutional. 

In Smith v. California, the Supreme Court struck down an ordinance that made it a crime for a book seller to possess obscene material. The court ruled that even though obscene material is not protected by the First Amendment, the ordinance’s imposition of liability based on the mere presence of that material had a broader censorious effect because a book seller “will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature.” The court recognized that the “ordinance tends to impose a severe limitation on the public’s access to constitutionally protected material” because a distributor of others’ speech will react by limiting access to any borderline content that could get it into legal trouble.  

In Bantam Books, Inc. v. Sullivan, the Supreme Court struck down a government effort to limit the distribution of material that a state commission had deemed objectionable to minors. The commission would send notices to book distributors that identified various books and magazines they believed were objectionable and sent copies of their lists to local and state law enforcement. Book distributors reacted to these notices by stopping the circulation of the materials identified by the commission. The Supreme Court held that the commission’s efforts violated the First Amendment and once more recognized that by targeting a distributor of others’ speech, the commission’s “capacity for suppression of constitutionally protected publications” was vast.  

KOSA’s duty of care creates a more far-reaching censorship threat than those that the Supreme Court struck down in Smith and Bantam Books. KOSA makes online services that host our digital speech liable should they fail to exercise reasonable care in removing or restricting minors’ access to lawful content on the topics KOSA identifies. KOSA is worse than the ordinance in Smith because the First Amendment generally protects speech about addiction, suicide, eating disorders, and the other topics KOSA singles out.  

We think that online services will react to KOSA’s new liability in much the same way as the bookseller in Smith and the book distributor in Bantam Books: They will limit minors’ access to, or simply remove, any speech that might touch on the topics KOSA identifies, even when much of that speech is protected by the First Amendment. Worse, online services have even less ability to read through the millions (or sometimes billions) of pieces of content on their services than a bookseller or distributor who had to review hundreds or thousands of books. To comply, we expect that platforms will deploy blunt tools, either by gating off entire portions of their site to prevent minors from accessing them (more on this below) or by deploying automated filters that will over-censor speech, including speech that may be beneficial to minors seeking help with addictions or other problems KOSA identifies. (Regardless of their claims, it is not possible for a service to accurately pinpoint the content KOSA describes with automated tools.) 

But as the Supreme Court ruled in Smith and Bantam Books, the First Amendment prohibits Congress from enacting a law that results in such broad censorship precisely because it limits the distribution of, and access to, lawful speech.  

Moreover, the fact that KOSA singles out certain legal content—for example, speech concerning bullying—means that the bill creates content-based restrictions that are presumptively unconstitutional. The government bears the burden of showing that KOSA’s content restrictions advance a compelling government interest, are narrowly tailored to that interest, and are the least speech-restrictive means of advancing that interest. KOSA cannot satisfy this exacting standard.  

EFF agrees that the government has a compelling interest in protecting children from being harmed online. But KOSA’s broad requirement that platforms and services face liability for showing speech concerning particular topics to minors is not narrowly tailored to that interest. As discussed above, the broad censorship that will result will effectively limit access to a wide range of lawful speech on topics such as addiction, bullying, and eating disorders. The fact that KOSA will sweep up so much speech shows that it is far from the least speech-restrictive alternative, too. 

Why the Rule of Construction Doesn’t Solve the Censorship Concern 

In response to censorship concerns about the duty of care, KOSA’s authors added a rule of construction stating that nothing in the duty of care “shall be construed to require a covered platform to prevent or preclude:”  

  • minors from deliberately or independently searching for content, or 
  • the platforms or services from providing resources that prevent or mitigate the harms KOSA identifies, “including evidence-based information and clinical resources." 

We understand that some interpret this language as a safeguard for online services that limits their liability if a minor happens across information on topics that KOSA identifies. On that reading, platforms hosting content aimed at mitigating addiction, bullying, or other identified harms can take comfort that they will not be sued under KOSA. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

But EFF does not believe the rule of construction will limit KOSA’s censorship, in either a practical or constitutional sense. As a practical matter, it’s not clear how an online service will be able to rely on the rule of construction’s safeguards given the sheer variety of content it likely hosts. 

Take, for example, an online forum in which users discuss drug and alcohol abuse. It is likely to contain a range of content and views, some of which might describe addiction, drug use, and treatment, including both negative and positive views on those points. KOSA’s rule of construction might shield the forum from liability for a minor’s initial search for content that leads them to the forum. But once that minor starts interacting with the forum, they are likely to encounter the types of content KOSA identifies, and the service may face liability if there is a later claim that the minor was harmed. In short, KOSA does not clarify that the initial search for the forum precludes any liability should the minor interact with the forum and experience harm later. It is also not clear how a service would prove that the minor found the forum via a search. 

Further, the rule of construction’s protection for the forum (available only if the forum provides resources for preventing or mitigating drug and alcohol abuse based on evidence-based information and clinical resources) is unlikely to be helpful. That provision assumes the forum has the resources to review all of its existing content and to screen all future content so that only user-generated content concerning the mitigation or prevention of substance abuse is permitted. The rule of construction also requires the forum to have the subject-matter expertise necessary to judge what content is or isn’t clinically correct and evidence-based. And even that assumes there is broad scientific consensus about all aspects of substance abuse, including its causes (there is not). 

Given that practical uncertainty and the potential hazard of getting anything wrong when it comes to minors’ access to that content, we think that the substance abuse forum will react much as the bookseller and distributor in the Supreme Court cases did: It will simply take steps to limit minors’ ability to access the content, a far easier and safer alternative than making case-by-case expert decisions regarding every piece of content on the forum. 

EFF also does not believe that the Supreme Court’s decisions in Smith and Bantam Books would have been different if there had been similar KOSA-like safeguards incorporated into the regulations at issue. For example, even if the obscenity ordinance at issue in Smith had made an exception letting bookstores sell scientific books with detailed pictures of human anatomy, the bookstore still would have had to exhaustively review every book it sold and separate the obscene books from the scientific ones. The Supreme Court rejected such burdens as offensive to the First Amendment: “It would be altogether unreasonable to demand so near an approach to omniscience.” 

The near-impossible standard required to review such a large volume of content, coupled with liability for letting any harmful content through, is precisely the scenario that the Supreme Court feared. “The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered,” the court wrote in Smith. “Through it, the distribution of all books, both obscene and not obscene, would be impeded.” 

Those same First Amendment concerns are exponentially greater for online services hosting everyone’s speech. That is why we do not believe that KOSA’s rule of construction will prevent the broader censorship that results from the bill’s duty of care. 

Finally, we do not believe the rule of construction helps the government overcome its burden on strict scrutiny to show that KOSA is narrowly tailored or restricts less speech than necessary. Instead, the rule of construction actually heightens KOSA’s violation of the First Amendment by preferencing certain viewpoints over others. The rule of construction creates a legal preference for viewpoints that seek to mitigate the various identified harms, and punishes viewpoints that are neutral toward, or even mildly positive about, those harms. While EFF agrees that some such speech may be awful, the First Amendment does not permit the government to make these viewpoint-based distinctions without satisfying strict scrutiny. It cannot meet that heavy burden with KOSA. 

KOSA's Focus on Design Features Doesn’t Change Our First Amendment Concerns 

KOSA supporters argue that because the duty of care and other provisions of KOSA concern an online service or platforms’ design features, the bill raises no First Amendment issues. We disagree.  

It’s true enough that KOSA creates liability for services that fail to “exercise reasonable care in the creation and implementation of any design feature” to prevent the bill’s enumerated harms. But the features themselves are not what KOSA's duty of care deems harmful. Rather, the provision specifically links the design features to minors’ access to the enumerated content that KOSA deems harmful. In that way, the design features serve as little more than a distraction. The duty of care provision is not concerned per se with any design choice generally, but only those design choices that fail to mitigate minors’ access to information about depression, eating disorders, and the other identified content. 

Once again, the Supreme Court’s decision in Smith shows why it’s incorrect to argue that KOSA’s regulation of design features avoids the First Amendment concerns. If the ordinance at issue in Smith had regulated the way in which bookstores were designed, and had imposed liability based on where booksellers placed certain offending books in their stores—for example, in the front window—we suspect that the Supreme Court would have recognized, rightly, that the design restriction was little more than an indirect effort to unconstitutionally regulate the content. The same holds true for KOSA. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Doesn’t “Mandate” Age-Gating, But It Heavily Pushes Platforms to Do So and Provides Few Other Avenues to Comply 

KOSA was amended in May 2023 to ease concerns about age verification; in particular, the “Privacy Protections” section of the bill now includes explicit language that age verification is not required. The bill states that a covered platform is not required to implement age gating or age verification functionality to comply with KOSA. 

EFF acknowledges the text of the bill and has been clear in our messaging that nothing in the proposal explicitly requires services to implement age verification. Yet it's hard to see this change as anything other than a technical dodge that will be contradicted in practice.  

KOSA creates liability for any regulated platform or service that presents minors with certain content the bill deems harmful to them. To comply with that new liability, those platforms and services have limited options. As we see them, the options are either to filter content for known minors or to gate content so only adults can access it. In either scenario, the linchpin is the platform knowing every user’s age so it can identify its minor users and either filter the content they see or exclude them from any content that could be deemed harmful under the law. 

There’s really no way to do that without implementing age verification. Regardless of what this section of the bill says, there’s no way for platforms to block either categories of content or design features for minors without knowing the minors are minors.  

We also don’t think KOSA lets platforms claim ignorance if they take steps to never learn the ages of their users. If a 16-year-old user misidentifies herself as an adult and the platform does not use age verification, it could still be held liable because it should have “reasonably known” her age. The platform’s ignorance thus could work against it later, perversely incentivizing the services to implement age verification at the outset. 

EFF Remains Concerned About State Attorneys General Enforcing KOSA 

Another change that KOSA’s sponsors made this year was to remove the ability of state attorneys general to enforce KOSA’s duty of care standard. We respect that some groups believe this addresses concerns that some states would misuse KOSA to target minors’ access to any information that state officials dislike, including LGBTQIA+ or sex education information. We disagree that this modest change prevents this harm. KOSA still lets state attorneys general enforce other provisions, including a section requiring certain “safeguards for minors.” Among the safeguards is a requirement that platforms “limit design features” that lead to minors spending more time on a service, including the ability to scroll through content, notifications of other content or messages, and autoplaying content. 

But letting an attorney general enforce KOSA’s design safeguards requirement could serve as a proxy for targeting services that host content certain officials dislike. An attorney general would simply target the same content or service they disfavored, but instead of claiming that it violated KOSA’s duty of care, the official would argue that the service failed to prevent harmful design features that minors in their state used, such as notifications or endless scrolling. We think the outcome will be the same: states are likely to use KOSA to target speech about sexual health, abortion, LGBTQIA+ topics, and a variety of other information. 

KOSA Applies to Broad Swaths of the Internet, Not Just the Big Social Media Platforms 

Many sites, platforms, apps, and games would have to follow KOSA’s requirements. It applies to “an online platform, online video game, messaging application, or video streaming service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”  

There are some important exceptions—it doesn’t apply to services that provide only direct or group messaging, such as Signal, or to schools, libraries, or nonprofits, or to ISPs like Comcast generally. This is good—some critics of KOSA have been concerned that it would apply to websites like Archive of Our Own (AO3), a fanfiction site that allows users to read and share their work, but AO3 is a nonprofit, so it would not be covered. 

But a wide variety of for-profit niche online services would still be regulated by KOSA. Ravelry, for example, is an online platform focused on knitters, but it is a business. 

And it is an open question whether the comment and community portions of major mainstream news and sports websites are subject to KOSA. The bill exempts news and sports websites, with the huge caveat that they are exempt only so long as they are “not otherwise an online platform.” KOSA defines “online platform” as “any public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user generated content.” It’s easily arguable that the New York Times’ or ESPN’s comment and forum sections are predominantly designed as places for user-generated content. Would KOSA apply only to those interactive spaces or does the exception to the exception mean the entire sites are subject to the law? The language of the bill is unclear. 

Not All of KOSA’s Critics Are Right, Either 

Just as we don’t agree on KOSA’s likely outcomes with many of its supporters, we also don’t agree with every critic regarding KOSA’s consequences. This isn’t surprising—the law is broad, and a major complaint is that it remains unclear how its vague language would be interpreted. So let’s address some of the more common misconceptions about the bill. 

Large Social Media May Not Entirely Block Young People, But Smaller Services Might 

Some people have concerns that KOSA will result in minors not being able to use social media at all. We believe a more likely scenario is that the major platforms would offer different experiences to different age groups.  

They already do this in some ways—Meta currently places teens into the most restrictive content control setting on Instagram and Facebook. The company specifically updated these settings for many of the categories included in KOSA, including suicide, self-harm, and eating disorder content. Meta’s update describes precisely what we worry KOSA would require by law: “While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find.” TikTok has also blocked some videos for users under 18. To be clear, content filtering of this kind, if carried out in response to KOSA, would be harmful and would violate the First Amendment. 

Though large platforms will likely react this way, many smaller platforms will not be capable of this kind of content filtering. They very well may decide blocking young people entirely is the easiest way to protect themselves from liability. We cannot know how every platform will react if KOSA is enacted, but smaller platforms that do not already use complex automated content moderation tools will likely find it financially burdensome to implement both age verification tools and content moderation tools.  

KOSA Won’t Necessarily Make Your Real Name Public by Default 

One recurring fear that critics of KOSA have shared is that they will no longer be able to use platforms anonymously. We believe this is true, but there is some nuance to it. No one should have to hand over their driver's license—or, worse, provide biometric information—just to access lawful speech on websites. But there's nothing in KOSA that would require online platforms to publicly tie your real name to your username. 

Still, once someone shares information to verify their age, there’s no way for them to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. As we’ve said, KOSA doesn’t technically require age verification, but we think it’s the most likely outcome. Users still will be forced to trust that the website they visit, or its third-party verification service, won’t misuse their private data, including their name, age, or biometric information. Given the numerous data privacy blunders we’ve seen from companies like Meta in the past, and the general concern with data privacy that Congress seems to share with the general public (and with EFF), we believe this outcome to be extremely dangerous. Simply put: Sharing your private info with a company doesn’t necessarily make it public, but it makes it far more likely to become public than if you hadn’t shared it in the first place. 

We Agree With Supporters: Government Should Study Social Media’s Effects on Minors 

We know tensions are high; this is an incredibly important topic, and an emotional one. EFF does not have all the right answers regarding how to address the ways in which young people can be harmed online. That is why we agree with KOSA’s supporters that the government should conduct much greater research on these issues. We believe that comprehensive fact-finding is the first step to identifying both the problems and the legislative solutions. A provision of KOSA does require the National Academy of Sciences to research these issues and issue reports to the public. But KOSA gets this process backwards. It creates solutions to general concerns about young people being harmed without first doing the work necessary to show that the bill’s provisions address those problems. As we have said repeatedly, we do not think KOSA will address harms to young people online. We think it will exacerbate them. 

Even if your stance on KOSA is different from ours, we hope we are all working toward the same goal: an internet that supports freedom, justice, and innovation for all people of the world. We don’t believe KOSA will get us there, but neither will ad hominem attacks. To that end, we look forward to more detailed analyses of the bill from its supporters, and to continuing thoughtful engagement from anyone interested in working on this critical issue. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Aaron Mackey

San Diego City Council Breaks TRUST

3 days 12 hours ago

In a stunning reversal, the San Diego City Council voted earlier this year to cut many of the provisions of the popular Transparent & Responsible Use of Surveillance Technology (TRUST) ordinance that sought to ensure public transparency for law enforcement surveillance technologies. 

Similar to other Community Control Of Police Surveillance (CCOPS) ordinances, the TRUST ordinance was intended to ensure that each police surveillance technology would be subject to basic democratic oversight in the form of public disclosures and city council votes. The TRUST ordinance was fought for by a coalition of community organizations, including several members of the Electronic Frontier Alliance, responding to the surprise deployment of smart streetlight surveillance that had not been put under public or city council review. 

The TRUST ordinance was passed one and a half years ago, but law enforcement advocates immediately set up roadblocks to implementation. Police unions, for example, insisted that some of the provisions around accountability for misuse of surveillance needed to be halted after passage to ensure they didn’t conflict with union contracts. The city kept the ordinance unapplied and untested, and then in the late summer of 2023, a little over a year after passage, the mayor proposed a package of changes that would gut the ordinance, including exemptions for a long list of technologies, such as ARJIS databases and record management system data storage. These changes were approved this past January. 

But use of these databases should require, for example, auditing to protect data security for city residents. There also should be limits on how police share data with federal agencies and other law enforcement agencies, which might use that data to criminalize San Diego residents for immigration status, gender-affirming health care, or the exercise of reproductive rights that are not criminalized in the city or state. The overall TRUST ordinance still stands, but it has been partly defanged, with many carve-outs for technologies that the San Diego police will not need to bring before democratically elected lawmakers and the public. 

Now, opponents of the TRUST ordinance are emboldened by their recent victory, and are vowing to introduce even more amendments to further erode the gains of this ordinance, so that San Diegans won’t have a chance to know how their local law enforcement surveils them, and no democratic body will be required to consent to the technologies, new or old. The members of the TRUST Coalition are not standing down, however, and will continue to fight to defend the standing portions of the TRUST ordinance, and to regain the wins for public oversight that were lost. 

As Lilly Irani, from Electronic Frontier Alliance member and TRUST Coalition member Tech Workers Coalition San Diego, has said:

“City Council members and the mayor still have time to make this right. And we, the people, should hold our elected representatives accountable to make sure they maintain the oversight powers we currently enjoy — powers the mayor’s current proposal erodes.” 

If you live or work in San Diego, it’s important to make it clear to city officials that San Diegans don’t want to give police a blank check to harass and surveil them. Such dangerous technology needs basic transparency and democratic oversight to preserve our privacy, our speech, and our personal safety. 

José Martinez

5 Questions to Ask Before Backing the TikTok Ban

3 days 12 hours ago

With strong bipartisan support, the U.S. House voted 352 to 65 to pass HR 7521 this week, a bill that would ban TikTok nationwide if its Chinese owner doesn’t sell the popular video app. The TikTok bill’s future in the U.S. Senate isn’t yet clear, but President Joe Biden has said he would sign it into law if it reaches his desk. 

The speed at which lawmakers have moved to advance a bill with such a significant impact on speech is alarming. It has given many of us — including, seemingly, lawmakers themselves — little time to consider the actual justifications for such a law. In isolation, parts of the argument might sound somewhat reasonable, but lawmakers still need to clear up their confused case for banning TikTok. Before throwing their support behind the TikTok bill, Americans should be able to understand it fully, something that they can start doing by considering these five questions. 

1. Is the TikTok bill about privacy or content?

Something that has made HR 7521 hard to talk about is the inconsistent way its supporters have described the bill’s goals. Is this bill supposed to address data privacy and security concerns? Or is it about the content TikTok serves to its American users? 

From what lawmakers have said, however, it seems clear that this bill is strongly motivated by content on TikTok that they don’t like. When describing the "clear threat" posed by foreign-owned apps, the House report on the bill cites the ability of adversary countries to "collect vast amounts of data on Americans, conduct espionage campaigns, and push misinformation, disinformation, and propaganda on the American public."

This week, the bill’s Republican sponsor Rep. Mike Gallagher told PBS Newshour that the “broader” of the two concerns TikTok raises is “the potential for this platform to be used for the propaganda purposes of the Chinese Communist Party." On that same program, Representative Raja Krishnamoorthi, a Democratic co-sponsor of the bill, similarly voiced content concerns, claiming that TikTok promotes “drug paraphernalia, oversexualization of teenagers” and “constant content about suicidal ideation.”

2. If the TikTok bill is about privacy, why aren’t lawmakers passing comprehensive privacy laws? 

It is indeed alarming how much information TikTok and other social media platforms suck up from their users, information that is then collected not just by governments but also by private companies and data brokers. This is why EFF strongly supports comprehensive data privacy legislation, a solution that directly addresses privacy concerns. This is also why it is hard to take lawmakers at their word about their privacy concerns with TikTok, given that Congress has consistently failed to enact comprehensive data privacy legislation and this bill would do little to stop the many other ways adversaries (foreign and domestic) collect, buy, and sell our data. Indeed, the TikTok bill has no specific privacy provisions in it at all.

It has been suggested that what makes TikTok different from other social media companies is how its data can be accessed by a foreign government. Here, too, TikTok is not special. China is not unique in requiring companies in the country to provide information to the government upon request. In the United States, Section 702 of the FISA Amendments Act, which is up for renewal, authorizes the mass collection of communication data. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches through Section 702. The U.S. government can also demand user information from online providers through National Security Letters, which can both require providers to turn over user information and gag them from speaking about it. While the U.S. cannot control what other countries do, if this is a problem lawmakers are sincerely concerned about, they could start by fighting it at home.

3. If the TikTok bill is about content, how will it avoid violating the First Amendment? 

Whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. Indeed, one of the given reasons to force the sale is so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation.

The First Amendment to the U.S. Constitution rightly makes it very difficult for the government to force such a change legally. To restrict content, U.S. laws must be the least speech-restrictive way of addressing serious harms. The TikTok bill’s supporters have vaguely suggested that the platform poses national security risks. So far, however, there has been little public justification that the extreme measure of banning TikTok (rather than addressing specific harms) is properly tailored to prevent these risks. And it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. People in the U.S. deserve an explicit explanation of the immediate risks posed by TikTok — something the government will have to do in court if this bill becomes law and is challenged.

4. Is the TikTok bill a ban or something else? 

Some have argued that the TikTok bill is not a ban because it would only ban TikTok if owner ByteDance does not sell the company. However, as we noted in the coalition letter we signed with the American Civil Liberties Union, the government generally cannot “accomplish indirectly what it is barred from doing directly, and a forced sale is the kind of speech punishment that receives exacting scrutiny from the courts.” 

Furthermore, a forced sale based on objections to content acts as a backdoor attempt to control speech. Indeed, one of the very reasons Congress wants a new owner is because it doesn’t like China’s editorial control. And any new ownership will likely bring changes to TikTok. In the case of Twitter, it has been very clear how a change of ownership can affect the editorial policies of a social media company. Private businesses are free to decide what information users see and how they communicate on their platforms, but when the U.S. government wants to do so, it must contend with the First Amendment. 

5. Does the U.S. support the free flow of information as a fundamental democratic principle? 

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic.

In 2021, the U.S. State Department formally condemned a ban on Twitter by the government of Nigeria. “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy,” a department spokesperson wrote. “Freedom of expression and access to information both online and offline are foundational to prosperous and secure democratic societies.”

Whether it’s in Nigeria, China, or the United States, we couldn’t agree more. Unfortunately, if the TikTok bill becomes law, the U.S. will lose much of its moral authority on this vital principle.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Hudson Hongo

Location Data Tracks Abortion Clinic Visits. Here’s What to Know

3 days 13 hours ago

Our concerns about the selling and misuse of location data for those seeking reproductive and gender healthcare are escalating amid a recent wave of cases and incidents demonstrating that the digital trail we leave is being used by anti-abortion activists.

The good news is some states and tech companies are taking steps to better protect location data privacy, including information that endangers people needing or seeking information about reproductive and gender-affirming healthcare. But we know more must be done—by pharmacies, our email providers, and lawmakers—to plug gaping holes in location data protection.

Location data is highly sensitive, as it paints a picture of our daily lives—where we go, who we visit, when we seek medical care, or what clinics we visit. That’s what makes it so attractive to data brokers and law enforcement in states outlawing abortion and gender-affirming healthcare and those seeking to exploit such data for ideological or commercial purposes.

What we’re seeing is deeply troubling. Sen. Ron Wyden recently disclosed that vendor Near Intelligence allegedly gathered location data on people’s visits to nearly 600 Planned Parenthood locations across 48 states, without consent. It sold that data to an anti-abortion group, which used it in a massive anti-abortion ad campaign. The Wisconsin-based group used the geofenced data to send mobile ads to people who visited the clinics.

It’s hardly a leap to imagine that law enforcement and bounty hunters in anti-abortion states would gladly buy the same data to find out who is visiting Planned Parenthood clinics and try to charge and imprison women, their families, doctors, and caregivers. That’s the real danger of an unregulated data broker industry; anyone can buy what’s gathered from warrantless surveillance, for whatever nefarious purpose they choose.

For example, police in Idaho, where abortion is illegal, used cell phone data in an investigation against an Idaho woman and her son charged with kidnapping. The data showed that they had taken the son’s minor girlfriend to Oregon, where abortion is legal, to obtain an abortion.

The exploitation of location data is not the only problem. Information about the prescription medicines we take is not protected against law enforcement requests. The nation’s eight largest pharmacy chains, including CVS, Walgreens, and Rite Aid, have routinely and secretly turned over the prescription records of thousands of Americans to law enforcement agencies or other government entities without a warrant, according to a congressional inquiry.

Many people may not know that their prescription records can be obtained by law enforcement without too much trouble. There’s not much standing between someone’s self-managed abortion medication and a law enforcement records demand. In April the U.S. Health and Human Services Department proposed a rule that would prevent healthcare providers and insurers from giving information to state officials trying to prosecute those seeking or providing a legal abortion. A final rule has not yet been published.

Exploitation of location and healthcare data to target communities could easily expand to other groups working to protect bodily autonomy, especially those most likely to suffer targeted harassment and bigotry. With states passing and proposing bills restricting gender-affirming care and state law enforcement officials pursuing medical records of transgender youth across state lines, it’s not hard to imagine them buying or using location data to find people to prosecute.

To better protect people against police access to sensitive health information, lawmakers in a few states have taken action. In 2022, California enacted two laws protecting abortion data privacy and preventing California companies from sharing abortion data with out-of-state entities.

Then, last September the state enacted a shield law prohibiting California-based companies, including social media and tech companies, from disclosing patients’ private communications regarding healthcare that is legally protected in the state.

Massachusetts lawmakers have proposed the Location Shield Act, which would prohibit the sale of cellphone location information to data brokers. The act would make it harder to trace the path of those traveling to Massachusetts for abortion services.

Of course, tech companies have a huge role to play in location data privacy. EFF was glad when Google said in 2022 it would delete users’ location history for visits to medical facilities, including abortion clinics and counseling and fertility centers. Google pledged that when the location history setting on a device was turned on, it would delete entries for particularly personal places like reproductive health clinics soon after such a visit.

But a study by Accountable Tech testing Google’s pledge found the company wasn’t living up to its promises and continued to collect and retain location data from individuals visiting abortion clinics. Accountable Tech reran the study in late 2023 and the results were again troubling—Google still retained location search query data for some visits to Planned Parenthood clinics. It appears users will have to manually delete their location search history to remove information about the routes they take to sensitive locations; it doesn’t happen automatically.

Late last year, Google announced plans to move saved Timeline entries in Google Maps to users’ devices. Users who want to keep the entries could choose to back up the data to the cloud, where it would be automatically encrypted and out of reach even to Google.

These changes would appear to make it much more difficult—if not impossible—for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years. But when these features are coming is uncertain—though Google said in December they’re “coming soon.”

Google should implement these changes sooner rather than later. In the meantime, those seeking reproductive and gender information and healthcare can find tips on how to protect themselves in our Surveillance Self-Defense guide. 

Karen Gullo

How to Figure Out What Your Car Knows About You (and Opt Out of Sharing When You Can)

3 days 14 hours ago

Cars collect a lot of our personal data, and car companies disclose a lot of that data to third parties. It’s often unclear what’s being collected, and what's being shared and with whom. A recent New York Times article highlighted how data is shared by G.M. with insurance companies, sometimes without clear knowledge from the driver. If you're curious about what your car knows about you, you might be able to find out. In some cases, you may even be able to opt out of some of that sharing of data.

Why Your Car Collects and Shares Data

A car (and its app, if you installed one on your phone) can collect all sorts of data in the background with and without you realizing it. This in turn may be shared for a wide variety of purposes, including advertising and risk-assessment for insurance companies. The list of data collected is long and dependent on the car’s make, model, and trim.  But if you look through any car maker’s privacy policy, you'll see some trends:

  • Diagnostics data, sometimes referred to as “vehicle health data,” may be used internally for quality assurance, research, recall tracking, service issues, and similar unsurprising car-related purposes. This type of data may also be shared with dealers or repair companies for service.
  • Location information may be collected for emergency services, mapping, and to catalog other environmental information about where a car is operated. Some cars may give you access to the vehicle’s location in the app.
  • Some usage data may be shared or used internally for advertising. Your daily driving or car maintenance habits, alongside location data, is a valuable asset to the targeted advertising ecosystem. 
  • All of this data could be shared with law enforcement.
  • Information about your driving habits, sometimes referred to as “Driving data” or “Driver behavior information,” may be shared with insurance companies and used to alter your premiums. This can range from odometer readings to braking and acceleration statistics, and even data about what time of day you drive. 

Surprise insurance sharing is the thrust of The New York Times article, and certainly not the only problem with car data. We've written previously about how insurance companies offer discounts for customers who opt into a usage-based insurance program. Every state except California currently allows the use of telematics data for insurance rating, but privacy protections for this data vary widely across states.

When you sign up directly through an insurer, these opt-in insurance programs have a pretty clear tradeoff and sign-up process, and the insurer will likely send you a physical device that you plug into your car's OBD port, which then collects and transmits data back to the insurer.

But some cars have their own internal systems for sharing information with insurance companies that can piggyback off an app you may have installed, or off the car’s own internet connection. Many of these programs operate behind dense legalese. You may have accidentally “agreed” to such sharing without realizing it while buying a new car—likely in a state of exhaustion and excitement after finally completing a gauntlet of finance and legal forms.

This gets more confusing: car-makers use different terms for their insurance sharing programs. Some, like Toyota's “Insure Connect,” are pretty obviously named. But others, like Honda, tuck information about sharing with a data broker (that then shares with insurance companies) inside a privacy policy after you enable its “Driver Feedback” feature. Others might include the insurance sharing opt-in alongside broader services you might associate more with safety or theft, like G.M.’s OnStar, Subaru’s Starlink, and Volkswagen’s Car-Net.

The amount of data shared differs by company, too. Some car makers might share only small amounts of data, like an odometer reading, while others might share specific details about driving habits.

That's just the insurance data sharing. There's little doubt that many cars sell other data for behavioral advertising, and like the rest of that industry, it's nearly impossible to track exactly where your data goes and how it's used.

See What Data Your Car Has (and Stop the Sharing)

This is a general guide to see what your car collects and who it shares it with. It does not include information about specific scenarios—like intimate partner violence—that may raise distinctive driver privacy issues.

See How Your Car Handles (Data)
Start by seeing what your car is equipped to collect using Privacy4Cars’ Vehicle Privacy Report. Once you enter your car’s VIN, the site provides a rough idea of what sorts of data your car collects. It's also worth reading about your car manufacturer’s more general practices on Mozilla's Privacy Not Included site.

Check the Privacy Options In Your Car’s Apps and Infotainment System
If you use an app for your car, head into the app’s settings, and look for any sort of data sharing options. Look for settings like “Data Privacy” or “Data Usage.” When possible, opt out of sharing any data with third-parties, or for behavioral advertising. As annoying as it may be, it’s important to read carefully here so you don’t accidentally disable something you want, like a car’s SOS feature. Be mindful that, at least according to Mozilla’s report on Tesla, opting out of certain data sharing might someday make the car undriveable. Now’s also a good time to disable ad tracking on your phone.

When it comes to sharing with insurance companies, you’re looking for an option that may be something obvious, like Toyota’s “Insure Connect,” or less obvious, like Kia’s “Driving Score.” If your car’s app has any sort of driver scoring or feedback option—some other names include GM’s “Smart Driver,” Honda’s “Driver Feedback,” or Mitsubishi’s “Driving Score”—there’s a chance it’s sharing that data with an insurance company. Check for these options in both the app and the car’s infotainment system.

If you did accidentally sign up for sharing data with insurance companies, you may want to call your insurance company to see how doing so may affect your premiums. Depending on your driving habits, your premiums might go up or down, and in either case you don’t want a surprise bill.

File a Privacy Request with the Car Maker
Next, file a privacy request with the car manufacturer so you can see exactly what data the company has collected about you. Some car makers will provide this to anyone who asks. Others might only respond to requests from residents of states with a consumer data privacy law that requires their response. The International Association of Privacy Professionals has published this list of states with such laws.

In these states, you have a “right to know” or “right to access” your data, which requires the company to send you a copy of what personal information it collected about you. Some of these states also guarantee “data portability,” meaning the right to access your data in a machine-readable format. File one of these requests, and you should receive a copy of your data. In some states, you can also file a request for the car maker to not sell or share your information, or to delete it. While the car maker might not be legally required to respond to your request if you're not from a state with these privacy rights, it doesn’t hurt to ask anyway.

Every company tends to word these requests a little differently, but you’re looking for options to get a copy of your data, and ask them to stop sharing it. This typically requires filling out a separate request form for each type of request.

Here are the privacy request pages for the major car brands:

Sometimes, you will need to confirm the request in an email, so be sure to keep an eye on your inbox.

Check for Data On Popular Data Brokers Known to Share with Insurers
Finally, request your data from data brokers known to hand car data to insurers. For example, do so with the two companies mentioned in The New York Times’ article: 

Now, you wait. In most states, within 45 to 90 days you should receive an email from the car maker, and another from the data brokers, which will often include a link to your data. You will typically get a CSV file, though it may also be a PDF, XLS, or even a folder with a whole webpage and an HTML file. If you don't have any sort of spreadsheet software on your computer, you might struggle to open it up, but most of the files you get can be opened in free programs, like Google Sheets or LibreOffice.
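
If you’d rather not install spreadsheet software, a few lines of Python are enough to take a first look at one of these exports. This is a minimal sketch, assuming the export arrived as a CSV; the filename and the columns it prints are hypothetical and will vary by company.

    import csv

    # Hypothetical filename; use whatever the company actually sent you.
    with open("car_data_export.csv", newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        print("Columns:", reader.fieldnames)  # the categories of data collected
        for i, row in enumerate(reader):
            print(row)                        # one record the company kept about you
            if i >= 9:                        # peek at the first 10 rows only
                break

The same approach works for the broker exports; a PDF or HTML export, though, will need to be opened in a PDF reader or browser instead.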

Without a national law that puts privacy first, there is little that most people can do to stop this sort of data sharing. Moreover, the steps above clearly require far too much effort for most people to take. That’s why we need much more than these consumer rights to know, to delete, and to opt out of disclosure: we also need laws that automatically require corporations to minimize the data they process about us, and to get our opt-in consent before processing our data. As for car insurers, we’ve outlined exactly what sort of guardrails we’d like to see here.

As The New York Times' reporting revealed, many people were surprised to learn how their data is collected, disclosed, and used, even if there was an opt-in consent screen. This is a clear indication that car makers need to do better. 

Thorin Klosowski

Making the Law Accessible in Europe and the USA

4 days 9 hours ago

Special thanks to EFF legal intern Alissa Johnson, who was the lead author of this post.

Earlier this month, the Court of Justice of the European Union ruled that harmonized standards are part of EU law, and thus must be accessible to EU citizens and residents free of charge.

While it might seem like common sense that the laws that govern us should be freely accessible, this question has been in dispute in the EU for the past five years, and in the U.S. for over a decade. At the center of this debate are technical standards, developed by private organizations and later incorporated into law. Before they were challenged in court, standards-development organizations were able to limit access to these incorporated standards through assertions of copyright. Regulated parties or concerned citizens checking compliance with technical or safety standards had to do so by purchasing these standards, often at significant expense, from private organizations. While free alternatives, like proprietary online “reading rooms,” were sometimes available, these options had their own significant downsides, including limited functionality and privacy concerns.

In 2018, two nonprofits, Public.Resource.Org and Right to Know, made a request to the European Commission for access to four harmonized standards—that is, standards that apply across the European Union—pertaining to the safety of toys. The Commission refused to grant them access on the grounds that the standards were copyrighted.   

The nonprofits then brought an action before the General Court of the European Union seeking annulment of the Commission’s decision. They made two main arguments. First, that copyright couldn’t be applicable to the harmonized standards, and that open access to the standards would not harm the commercial interests of the European Committee for Standardization or other standard setting bodies. Second, they argued that the public interest in open access to the law should override whatever copyright interests might exist. The General Court rejected both arguments, finding that the threshold for originality that makes a work eligible for copyright protection had been met, the sale of standards was a vital part of standards bodies’ business model, and the public’s interest in ensuring the proper functioning of the European standardization system outweighed their interest in free access to harmonized standards.

Last week, the EU Court of Justice overturned the General Court decision, holding that EU citizens and residents have an overriding interest in free access to the laws that govern them. Article 15(3) of the Treaty on the Functioning of the EU and Article 42 of the Charter of Fundamental Rights of the EU guarantee a right of access to documents of Union institutions, bodies, offices, and agencies. These bodies can refuse access to a document where its disclosure would undermine the protection of commercial interests, including intellectual property, unless there is an overriding public interest in disclosure.

Under the ECJ’s ruling, standards written by private companies, but incorporated into legislation, now form part of EU law. People need access to these standards to determine their own compliance. While compliance with harmonized standards is not generally mandatory, it is in the case of the toy safety standards in question here. Even when compliance is not mandatory, products that meet technical standards benefit from a “presumption of conformity,” and failure to conform can impose significant administrative difficulties and additional costs.

Given that harmonized standards are a part of EU law, citizens and residents of member states have an interest in free access that overrides potential copyright concerns. Free access is necessary for economic actors “to ascertain unequivocally what their rights and obligations are,” and to allow concerned citizens to examine compliance. As the U.S. Supreme Court noted in 2020, “[e]very citizen is presumed to know the law, and it needs no argument to show that all should have free access” to it.

The Court of Justice’s decision has far-reaching effects beyond the four toy safety standards under dispute. Its reasoning classifying these standards as EU law applies more broadly to standards incorporated into law. We’re pleased that under this precedent, EU standards-development organizations will be required to disclose standards on request without locking these important parts of the law behind a paywall.

Mitch Stoltz

Why U.S. House Members Opposed the TikTok Ban Bill

4 days 15 hours ago

What do House Democrats like Alexandria Ocasio-Cortez and Barbara Lee have in common with House Republicans like Thomas Massie and Andy Biggs? Not a lot. But they do know an unconstitutional bill when they see one.

These and others on both sides of the aisle were among the 65 House Members who voted "no" yesterday on the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521, which would effectively ban TikTok. The bill now goes to the Senate, where we hope cooler heads will prevail in demanding comprehensive data privacy legislation instead of this attack on Americans' First Amendment rights.

We're saying plenty about this misguided, unfounded bill, and we want you to speak out about it too, but we thought you should see what some of the House Members who opposed it said, in their own words.

 

I am voting NO on the TikTok ban.

Rather than target one company in a rushed and secretive process, Congress should pass comprehensive data privacy protections and do a better job of informing the public of the threats these companies may pose to national security.

— Rep. Barbara Lee (@RepBarbaraLee) March 13, 2024

   ___________________ 

Today, I voted against the so-called “TikTok Bill.”

Here’s why: pic.twitter.com/Kbyh6hEhhj


— Rep Andy Biggs (@RepAndyBiggsAZ) March 13, 2024

   ___________________

Today, I voted against H.R. 7521. My full statement: pic.twitter.com/9QCFQ2yj5Q


— Rep. Nadler (@RepJerryNadler) March 13, 2024

   ___________________ 

Today I claimed 20 minutes in opposition to the TikTok ban bill, and yielded time to several likeminded colleagues.

This bill gives the President far too much authority to determine what Americans can see and do on the internet.

This is my closing statement, before I voted No. pic.twitter.com/xMxp9bU18t


— Thomas Massie (@RepThomasMassie) March 13, 2024

   ___________________ 

Why I voted no on the bill to potentially ban tik tok: pic.twitter.com/OGkfdxY8CR


— Jim Himes 🇺🇸🇺🇦 (@jahimes) March 13, 2024

   ___________________ 

I don’t use TikTok. I find it unwise to do so. But after careful review, I’m a no on this legislation.

This bill infringes on the First Amendment and grants undue power to the administrative state. pic.twitter.com/oSpmYhCrV8


— Rep. Dan Bishop (@RepDanBishop) March 13, 2024

   ___________________ 

I’m voting NO on the TikTok forced sale bill.

This bill was incredibly rushed, from committee to vote in 4 days, with little explanation.

There are serious antitrust and privacy questions here, and any national security concerns should be laid out to the public prior to a vote.

— Alexandria Ocasio-Cortez (@AOC) March 13, 2024

   ___________________ 

We should defend the free & open debate that our First Amendment protects. We should not take that power AWAY from the people & give it to the government. The answer to authoritarianism is NOT more authoritarianism. The answer to CCP-style propaganda is NOT CCP-style oppression. pic.twitter.com/z9HWgUSMpw

— Tom McClintock (@RepMcClintock) March 13, 2024

   ___________________ 

I'm voting no on the TikTok bill. Here's why:
1) It was rushed.
2) There's major free speech issues.
3) It would hurt small businesses.
4) America should be doing way more to protect data privacy & combatting misinformation online. Singling out one app isn't the answer.

— Rep. Jim McGovern (@RepMcGovern) March 13, 2024

    ___________________

Solve the correct problem.
Privacy.
Surveillance.
Content moderation.

Who owns #TikTok?
60% investors - including Americans
20% +7,000 employees - including Americans
20% founders
CEO & HQ Singapore
Data in Texas held by Oracle

What changes with ownership? I’ll be voting NO. pic.twitter.com/MrfROe02IS


— Warren Davidson 🇺🇸 (@WarrenDavidson) March 13, 2024

   ___________________ 

I voted no on the bill to force the sale of TikTok. Unlike our adversaries, we believe in freedom of speech and don’t ban social media platforms. Instead of this rushed bill, we need comprehensive data security legislation that protects all Americans.

— Val Hoyle (@RepValHoyle) March 13, 2024

    ___________________

Please tell the Senate to reject this bill and instead give Americans the comprehensive data privacy protections we so desperately need.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Josh Richman

SXSW Tried to Silence Critics with Bogus Trademark and Copyright Claims. EFF Fought Back.

5 days 8 hours ago

Special thanks to EFF legal intern Jack Beck, who was the lead author of this post.

Amid heavy criticism for its ties to weapons manufacturers supplying Israel, South by Southwest—the organizer of an annual conference and music festival in Austin—has been on the defensive. One tool in their arsenal: bogus trademark and copyright claims against local advocacy group Austin for Palestine Coalition.

The Austin for Palestine Coalition has been a major source of momentum behind recent anti-SXSW protests. Their efforts have included organizing rallies outside festival stages and hosting an alternative music festival in solidarity with Palestine. They have also created social media posts explaining the controversy, criticizing SXSW, and calling on readers to email SXSW with demands for action. The group’s posts include graphics that modify SXSW’s arrow logo to add blood-stained fighter jets. Other images incorporate patterns evoking SXSW marketing materials overlaid with imagery like a bomb or a bleeding dove.

One of Austin for Palestine's graphics

Days after the posts went up, SXSW sent a cease-and-desist letter to Austin for Palestine, accusing them of trademark and copyright infringement and demanding they take down the posts. Austin for Palestine later received an email from Instagram indicating that SXSW had reported the post for violating their trademark rights.

We responded to SXSW on Austin for Palestine’s behalf, explaining that their claims are completely unsupported by the law and demanding they retract them.

The law is clear on this point. The First Amendment protects your right to make a political statement using trademark parodies, whether or not the trademark owner likes it. That’s why trademark law applies a different standard (the “Rogers test”) to infringement claims involving expressive works. The Rogers test is a crucial defense against takedowns like these, and it clearly applies here. Even without Rogers’ extra protections, SXSW’s trademark claim would be bogus: Trademark law is about preventing consumer confusion, and no reasonable consumer would see Austin for Palestine’s posts and infer they were created or endorsed by SXSW.

SXSW’s copyright claims are just as groundless. Basic symbols like their arrow logo are not copyrightable. Moreover, even if SXSW meant to challenge Austin for Palestine’s mimicking of their promotional material—and it’s questionable whether that is copyrightable as well—the posts are a clear example of non-infringing fair use. Courts weigh four factors in a fair use analysis, and each of those factors here either favors Austin for Palestine or is at worst neutral. Most importantly, the critical message conveyed by Austin for Palestine’s use is entirely different from the original purpose of these marketing materials, and the only injury to SXSW is reputational—which is not a cognizable copyright injury.

SXSW has yet to respond to our letter. EFF has defended against bogus copyright and trademark claims in the past, and SXSW’s attempted takedown feels especially egregious considering the nature of Austin for Palestine’s advocacy. Austin for Palestine used SXSW’s iconography to make a political point about the festival itself, and neither trademark nor copyright is a free pass to shut down criticism. As an organization that “dedicates itself to helping creative people achieve their goals,” SXSW should know better.

Cara Gagliano

Protect Yourself from Election Misinformation

5 days 12 hours ago

Welcome to your U.S. presidential election year, when all kinds of bad actors will flood the internet with election-related disinformation and misinformation aimed at swaying or suppressing your vote in November. 

So… what’re you going to do about it? 

As EFF’s Corynne McSherry wrote in 2020, online election disinformation is a problem that has had real consequences in the U.S. and all over the world—it has been correlated with ethnic violence in Myanmar and India, and with Kenya’s 2017 elections, among other events. Still, election misinformation and disinformation continue to proliferate online and off.

That being said, regulation is not typically an effective or human rights-respecting way to address election misinformation. Even well-meaning efforts to control election misinformation through regulation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression. Indeed, any content regulation must be scrutinized to avoid inadvertently affecting meaningful expression: Is the approach narrowly tailored or a categorical ban? Does it empower users? Is it transparent? Is it consistent with human rights principles? 

While platforms and regulators struggle to get it right, internet users must be vigilant about checking the election information they receive for accuracy. There is help. Nonprofit journalism organization ProPublica published a handy guide about how to tell if what you’re reading is accurate or “fake news.” The International Federation of Library Associations and Institutions’ infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends:

[Infographic: How to Spot Fake News]

To make sure you’re getting good information about how your election is being conducted, check in with trusted sources including your state’s Secretary of State, Common Cause, and other nonpartisan voter protection groups, or call or text 866-OUR-VOTE (866-687-8683) to speak with a trained election protection volunteer. 

And if you see something, say something: You can report election disinformation at https://reportdisinfo.org/, a project of the Common Cause Education Fund. 

EFF also offers some election-year food for thought:

  • On EFF’s “How to Fix the Internet” podcast, Pamela Smith—president and CEO of Verified Voting—in 2022 talked with EFF’s Cindy Cohn and Jason Kelley about finding reliable information on how your elections are conducted, as part of ensuring ballot accessibility and election transparency.
  • Also on “How to Fix the Internet”, Alice Marwick—cofounder and principal researcher at the University of North Carolina, Chapel Hill’s Center for Information, Technology and Public Life—in 2023 talked about finding ways to identify and leverage people’s commonalities to stem the flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out. She discussed why seemingly ludicrous conspiracy theories get so many views and followers; how disinformation is tied to personal identity and feelings of marginalization and disenfranchisement; and when fact-checking does and doesn’t work.
  • EFF’s Cory Doctorow wrote in 2020 about how big tech monopolies distort our public discourse: “By gathering a lot of data about us, and by applying self-modifying machine-learning algorithms to that data, Big Tech can target us with messages that slip past our critical faculties, changing our minds not with reason, but with a kind of technological mesmerism.” 

An effective democracy requires an informed public, and participating in a democracy is a responsibility that requires work. Online platforms have a long way to go in providing the tools users need to discern legitimate sources from fake news. In the meantime, it’s on each of us. Don’t let anyone lie, cheat, or scare you away from making the most informed decision for your community at the ballot box.

Josh Richman

Congress Should Give Up on Unconstitutional TikTok Bans

6 days 7 hours ago

Congress’ unfounded plan to ban TikTok under the guise of protecting our data is back, this time in the form of a new bill—the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521—which has gained a dangerous amount of momentum in Congress. This bipartisan legislation was introduced in the House just a week ago and is expected to be sent to the Senate after a vote later this week.

A year ago, supporters of digital rights across the country successfully stopped the federal RESTRICT Act, commonly known as the “TikTok Ban” bill (it was that and a whole lot more). And now we must do the same with this bill. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

As a first step, H.R. 7521 would force TikTok to find a new owner that is not based in a foreign adversarial country within the next 180 days or be banned until it does so. It would also give the President the power to designate other applications under the control of a country considered adversarial to the U.S. to be a national security threat. If deemed a national security threat, the application would be banned from app stores and web hosting services unless it cuts all ties with the foreign adversarial country within 180 days. The bill would criminalize the distribution of the application through app stores or other web services, as well as the maintenance of such an app by the company. Ultimately, the result of the bill would be either a nationwide ban on TikTok or a forced sale of the application to a different company.

The only solution to this pervasive ecosystem is prohibiting the collection of our data in the first place.

Make no mistake—though this law starts with TikTok specifically, it could have an impact elsewhere. Tencent’s WeChat app is one of the world’s largest standalone messenger platforms, with over a billion users, and is a key vehicle for the Chinese diaspora generally. It would likely also be a target. 

The bill’s sponsors have argued that the amount of private data available to and collected by the companies behind these applications — and in theory, shared with a foreign government — makes them a national security threat. But like the RESTRICT Act, this bill won’t stop this data sharing, and will instead reduce our rights online. User data will still be collected by numerous platforms—possibly even TikTok after a forced sale—and it will still be sold to data brokers who can then sell it elsewhere, just as they do now. 

The only solution to this pervasive ecosystem is prohibiting the collection of our data in the first place. Ultimately, foreign adversaries will still be able to obtain our data from social media companies unless those companies are forbidden from collecting, retaining, and selling it, full stop. And to be clear, under our current data privacy laws, there are many domestic adversaries engaged in manipulative and invasive data collection as well. That’s why EFF supports such consumer data privacy legislation.

Congress has also argued that this bill is necessary to tackle the anti-American propaganda that young people are seeing due to TikTok’s algorithm. Both this justification and the national security justification raise serious First Amendment concerns, and last week EFF, the ACLU, CDT, and Fight for the Future wrote to the House Energy and Commerce Committee urging them to oppose this bill due to its First Amendment violations—specifically for those across the country who rely on TikTok for information, advocacy, entertainment, and communication. The US has rightfully condemned other countries when they have banned, or sought to ban, specific social media platforms.

Montana’s ban was as unprecedented as it was unconstitutional

And it’s not just civil society saying this. Late last year, the courts blocked Montana’s TikTok ban, SB 419, from going into effect on January 1, 2024, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. EFF and the ACLU had filed a friend-of-the-court brief in support of a challenge to the law brought by TikTok and a group of the app’s users who live in Montana. 

Our brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court upheld our free speech rights and blocked the law from going into effect. As with that state ban, the US government cannot show that a federal ban is narrowly tailored, and thus cannot use the threat of unlawful censorship as a cudgel to coerce a business to sell its property. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Instead of passing this overreaching and misguided bill, Congress should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries, China included. We shouldn’t waste time arguing over a law that will get thrown out for silencing the speech of millions of Americans. Instead, Congress should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation.

Jason Kelley

Congress Must Stop Pushing Bills That Will Benefit Patent Trolls

6 days 8 hours ago

The U.S. Senate is moving forward with two bills that would enrich patent trolls, patent system insiders, and a few large companies that rely on flimsy patents, at the expense of everyone else. 

One bill, the Patent Eligibility Restoration Act (PERA), would bring back some of the worst software patents we’ve seen, and even reintroduce types of patents on human genes that were banned years ago. Meanwhile, a similar group of senators is trying to push forward the PREVAIL Act (S. 2220), which would shut out most of the public from even petitioning the government to reconsider wrongly granted patents.

Take Action

Tell Congress: No New Bills For Patent Trolls

Patent trolls are companies that don’t focus on making products or selling services. Instead, they collect patents, then use them to threaten or sue other companies and individuals. They’re not a niche problem; patent trolls filed the majority of patent lawsuits last year and for all the years in which we have good data. In the tech sector, they file more than 80% of the lawsuits. These do-nothing companies continue to be vigorous users of the patent system, and they’ll be the big winners under the two bills the U.S. Senate is considering pushing forward. 

Don’t Bring Back “Do It On A Computer” Patents 

The Patent Eligibility Restoration Act, or PERA, would overturn key legal precedents that we all rely on to kick the worst-of-the-worst patents out of the system. PERA would throw out a landmark Supreme Court ruling called the Alice v. CLS Bank case, which made it clear that patents can’t just claim basic business or cultural processes by adding generic computer language. 

The Alice rules are what—finally—allowed courts to throw out the most ridiculous “do it on a computer” software patents at an early stage. Under the Alice test, courts threw out patents on “matchmaking,” online picture menus, scavenger hunts, and online photo contests.

The rules under Alice are clear and fair, and they work. Alice isn’t perfect, and it hasn’t ended patent trolling: there are still many patent owners willing to settle for nuisance-value amounts that fall far below the cost of a legal defense. But Alice has done a good job of saving everyday internet users from some of the worst patent claims.

PERA would allow patents like the outrageous one brought forward in the Alice v. CLS Bank case, which claimed the idea of having a third party clear financial transactions—but on a computer. A patent on ordering restaurant food through a mobile phone, which was used to sue more than 100 restaurants, hotels, and fast-food chains before it was finally thrown out under the Alice rules, could survive if PERA becomes law. 

Don’t Bring Back Patents On Human Genes 

PERA goes further than software. It would also overturn a Supreme Court rule that prevents patents from being granted on naturally occurring human genes. For almost 30 years, some biotech and pharmaceutical companies used a cynical argument to patent genes and monopolize the diagnostic tests that analyzed them. That let the patent owners run up the cost of tests like those for the BRCA genes, which are predictive of ovarian and breast cancers. When the Supreme Court disallowed patents on human genes found in nature, the prices of those tests plummeted.

Patenting naturally occurring human genes is a horrific practice and the Supreme Court was right to ban it. The fact that PERA sponsors want to bring back these patents is unconscionable. 

Allowing extensive patenting of genetic information will also harm future health innovation by blocking competition from those who may offer more affordable tests and treatments. It could affect our response to future pandemics. Imagine if the first lab to sequence the COVID-19 genome had filed for patent protection, then gone on to threaten other labs seeking to create tests with infringement suits. As an ACLU attorney who litigated against the BRCA gene patents has pointed out, this scenario is not fantastical if a bill like PERA were to advance.

Take Action

Tell Congress To Reject PERA and PREVAIL

Don’t Shut Down The Public’s Right To Challenge Patents

The PREVAIL Act would bar most people from petitioning the U.S. Patent and Trademark Office (USPTO) to revoke patents that never should have been granted in the first place. 

The USPTO issues hundreds of thousands of patents every year, with less than 20 hours, on average, devoted to examining each one. Mistakes happen.

That’s why Congress created a process for the public to ask the USPTO to double-check certain patents, to make sure they were not wrongly granted. This process, called inter partes review or IPR, is still expensive and difficult, but faster and cheaper than federal courts, where litigating a patent through a jury trial can cost millions of dollars. IPR has allowed the cancellation of thousands of patent claims that never should have been issued in the first place. 

The PREVAIL Act would limit access to the IPR process to only those people and companies that have been directly threatened or sued over a patent. No one else would have standing to even file a petition. That means that EFF, other nonprofits, and membership-based patent defense companies won’t be able to use the IPR process to protect the public.

EFF used the IPR process back in 2013, when thousands of our supporters chipped in to raise more than $80,000 to fight against a patent that claimed to cover all podcasts. We won’t be able to do that if PREVAIL passes. 

And EFF isn’t the only non-profit to use IPRs to protect users and developers. The Linux Foundation, for instance, funds an “open source zone” that uses IPR to knock out patents that may be used to sue open source projects. Dozens of lawsuits are filed each year against open source projects, the majority of them brought by patent trolls. 

IPR is already too expensive and limited; Congress should be eliminating barriers to challenging bad patents, not raising more.

Congress Should Work For the Public, Not For Patent Trolls

The Senators pushing this agenda have chosen willful ignorance of the patent troll problem. The facts remain clear: the majority of patent lawsuits are brought by patent trolls. In the tech sector, it’s more than 80%. These numbers may even understate the problem, since threat letters from patent trolls don’t show up in the public record.

These patent lawsuits don’t have much to do with what most people think of when they think about “inventors” or inventions. They’re brought by companies that have no business beyond making patent threats. 

The Alice rules and the IPR system, along with other important reforms, have weakened the power of these patent trolls. Patent trolls that used to receive regular multi-million-dollar paydays have seen their incomes shrink (but not disappear). Some trolls, like Shipping and Transit LLC, finally wound up operations after being hit with sanctions (more than 500 lawsuits later). Others, like IP Edge, are now being investigated by a federal judge after claiming that their true “owners” included a Texas food truck owner who turned out to be, essentially, a decoy.

There’s big money behind bringing back the patent troll business, as well as a few huge tech and pharma companies that prefer to use unjustified monopolies rather than competing fairly. Two former Federal Circuit judges, two former Directors of the U.S. Patent and Trademark Office, and many other well-placed patent insiders are all telling Congress that Alice should be overturned and patent trolls should be allowed to run amok. We can’t let that happen. 

Take Action

Tell Congress: Don't Work For Patent Trolls

Joe Mullin

Reject Nevada’s Attack on Encrypted Messaging, EFF Tells Court

6 days 9 hours ago
Nevada Makes Backward Argument That Insecure Communication Makes Children Safer

LAS VEGAS — The Electronic Frontier Foundation (EFF) and a coalition of partners urged a court to protect default encrypted messaging and children’s privacy and security in a brief filed today.

The brief by the American Civil Liberties Union (ACLU), the ACLU of Nevada, the EFF, Stanford Internet Observatory Research Scholar Riana Pfefferkorn, and six other organizations asks the court to reject a request by Nevada’s attorney general to stop Meta from offering end-to-end encryption by default to Facebook Messenger users under 18 in the state. The brief was also signed by Access Now, Center for Democracy & Technology (CDT), Fight for the Future, Internet Society, Mozilla, and Signal Messenger LLC.

Communications are safer when third parties can’t listen in on them. That’s why the EFF and others who care about privacy pushed Meta for years to make end-to-end encryption the default option in Messenger. Meta finally made the change, but Nevada wants to turn back the clock. As the brief notes, end-to-end encryption “means that even if someone intercepts the messages—whether they are a criminal, a domestic abuser, a foreign despot, or law enforcement—they will not be able to decipher or access the message.” The state of Nevada, however, bizarrely argues that young people would be better off without this protection.
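To make the end-to-end property concrete, here is a minimal public-key encryption sketch using the PyNaCl library (installable with pip install pynacl). It illustrates the principle only, not Messenger’s actual protocol, which layers on key agreement, ratcheting, and authentication; the point is that decryption requires a private key that exists only on the recipient’s device.

    from nacl.public import PrivateKey, Box

    # Each person generates a keypair; the private half never leaves their device.
    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # Alice encrypts to Bob using her private key and Bob's public key.
    ciphertext = Box(alice, bob.public_key).encrypt(b"see you at 6")

    # An interceptor (a criminal, an abuser, or law enforcement) sees only
    # opaque bytes, with no way to recover the message.
    print(ciphertext.hex())

    # Only Bob, holding his private key, can decrypt.
    print(Box(bob, alice.public_key).decrypt(ciphertext))  # b'see you at 6'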

“Encryption is the best tool we have for safeguarding our privacy and security online — and privacy and security are especially important for young people,” said EFF Surveillance Litigation Director Andrew Crocker. “Nevada’s argument that children need to be ‘protected’ from securely communicating isn’t just baffling; it’s dangerous.”

As explained in a friend-of-the-court brief filed by the EFF and others today, encryption is one of the best ways to reclaim our privacy and security in a digital world full of cyberattacks and security breaches. It is increasingly being deployed across the internet as a way to protect users and data. For children and their families especially, encrypted communication is one of the strongest safeguards they have against malicious misuse of their private messages — a safeguard Nevada seeks to deny them.

“The European Court of Human Rights recently rejected a Russian law that would have imposed similar requirements on services that offer end-to-end message encryption – finding that it violated human rights to deny people the security and privacy that encryption provides,” said EFF’s Executive Director Cindy Cohn. “Nevada’s attempt should be similarly rejected.”

In its motion to the court, Nevada argues that it is necessary to block end-to-end encryption on Facebook Messenger because it can impede some criminal investigations involving children. This ignores that law enforcement can and does conduct investigations involving encrypted messages, which can be reported by users and accessed from either the sender or recipient’s devices. It also ignores law enforcement’s use of the tremendous amount of additional information about users that Meta routinely collects.

The brief notes that co-amicus Pfefferkorn recently authored a study confirming that Nevada does not, in fact, need to block encryption to conduct its investigations. The study found that “content-oblivious” investigation methods are “considered more useful than monitoring the contents of users’ communications when it comes to detecting nearly every kind of online abuse.”

“The court should reject Nevada’s motion,” said EFF’s Crocker. “Making children more vulnerable just to make law enforcement investigators’ jobs slightly easier is an unnecessary and dangerous trade-off.”

For the brief: https://www.eff.org/document/nevada-v-meta-amicus-brief

Contact: Andrew Crocker, Surveillance Litigation Director, andrew@eff.org
Hudson Hongo

EFF Urges New York Court to Protect Online Speakers’ Anonymity

6 days 10 hours ago

The First Amendment requires courts to apply a robust balancing test before unmasking anonymous online speakers, EFF explained in an amicus brief it filed recently in a New York State appeal.

In the case on appeal, GSB Gold Standard v. Google, a German company that sells cryptocurrency investments is seeking to unmask an anonymous blogger who criticized the company. Based upon a German court order, the company sought a subpoena that would identify the blogger. The blogger fought back, without success, and they are now appealing.

Like speech itself, the First Amendment right to anonymity fosters and advances public debate and self-realization. Anonymity allows speakers to communicate their ideas without being defined by their identity. Anonymity protects speakers who express critical or unpopular views from harassment, intimidation, or being silenced. And, because powerful individuals or entities’ efforts to punish one speaker through unmasking may well lead others to remain silent, protecting anonymity for one speaker can promote free expression for many others.

Too often, however, corporations and individuals try to abuse the judicial process to unmask anonymous speakers. Thus, courts should apply robust evidentiary and procedural standards before compelling the disclosure of an anonymous speaker’s identity.

Under these standards, parties seeking to unmask anonymous speakers must first show they have meritorious legal claims, to help ensure that the litigation isn’t a pretext for harassment. Parties that meet this first step must then also show that their interests in unmasking an anonymous speaker outweigh the speaker’s interests in retaining their anonymity. In this case, the trial court didn’t require the German company to meet this standard, and the company could not have met it in any event.

Courts around the United States have adopted various forms of this test, with EFF often participating as amicus or counsel. We hope that New York follows their lead.

Brendan Gilligan

Access to Internet Infrastructure is Essential, in Wartime and Peacetime

6 days 16 hours ago

We’ve been saying it for 20 years, and it remains true now more than ever: the internet is an essential service. It enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. More specifically, during wartime and conflict, internet and phone services enable the communication of information between people in challenging situations, as well as the reporting by on-the-ground journalists and ordinary people of the news. 

Unfortunately, governments across the world are very aware of their power to cut off this crucial lifeline, and frequently undertake targeted initiatives to do so. These internet shutdowns have become a blunt instrument that aid state violence and inhibit free speech, and are routinely deployed in direct contravention of human rights and civil liberties.

And this is not a one-dimensional situation. Nearly twenty years after the world’s first total internet shutdowns, this draconian measure is no longer the sole domain of authoritarian states but has become a favorite of a diverse set of governments across three continents. For example:

In Iran, the government has been suppressing internet access for many years. In the past two years in particular, people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention and in response, the Iranian government rushed to control both the public narrative and organizing efforts by banning social media, and sometimes cutting off internet access altogether. 

In Sudan, authorities have enacted a total telecommunications blackout during a massive conflict and displacement crisis. Shutting down the internet is a deliberate strategy to block the flow of information that brings visibility to the crisis and to prevent humanitarian aid from reaching populations endangered by the conflict. The communications blackout has extended for weeks, and in response a global campaign, #KeepItOn, has formed to put pressure on the Sudanese government to restore its people’s access to these vital services. More than 300 global humanitarian organizations have signed on to support #KeepItOn.

And in Palestine, where the Israeli government exercises near-total control over both wired internet and mobile phone infrastructure, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. The latest blackout in January 2024 occurred amid a widespread crackdown by the Israeli government on digital rights—including censorship, surveillance, and arrests—and amid accusations of bias and unwarranted censorship by social media platforms. On that occasion, the internet was restored after calls from civil society and nations, including the U.S. As we’ve noted, internet shutdowns impede residents' ability to access and share resources and information, as well as the ability of residents and journalists to document and call attention to the situation on the ground—more necessary than ever given that a total of 83 journalists have been killed in the conflict so far. 

Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications can cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world. In 2022, President Biden offered to upgrade the West Bank and Gaza to 4G, but the initiative stalled. While some Palestinians are able to circumvent the blackout by utilizing Israeli SIM cards (which are difficult to obtain) or Egyptian eSIMs, these workarounds are not solutions to the larger problem of blackouts, which, as the National Security Council has said, “[deprive] people from accessing lifesaving information, while also undermining first responders and other humanitarian actors’ ability to operate and to do so safely.”

Access to internet infrastructure is essential, in wartime as in peacetime. In light of these numerous blackouts, we remain concerned about the control that authorities are able to exercise over the ability of millions of people to communicate. It is imperative that people’s access to the internet remains protected, regardless of how user platforms and internet companies transform over time. We continue to shout this, again and again, because it needs to be restated, and unfortunately today there are ever more examples of it happening before our eyes.




Jillian C. York

Podcast Episode: 'I Squared' Governance

1 week ago

Imagine a world in which the internet is first and foremost about empowering people, not big corporations and government. In that world, government does “after-action” analyses to make sure its tech regulations are working as intended, recruits experienced technologists as advisors, and enforces real accountability for intelligence and law enforcement programs.

[Embedded audio player] Privacy info. This embed will serve content from simplecast.com

(You can also find this episode on the Internet Archive and on YouTube.)

Ron Wyden has spent decades working toward that world, first as a congressman and now as Oregon’s senior U.S. Senator. Long among Congress’ most tech-savvy lawmakers, he helped write the law that shaped and protects the internet as we know it, and he has fought tirelessly against warrantless surveillance of Americans’ telecommunications data. Wyden speaks with EFF’s Cindy Cohn and Jason Kelley about his “I squared”—individuals and innovation—legislative approach to fostering an internet that benefits everyone.

In this episode you’ll learn about: 

  • How a lot of the worrisome online content that critics blame on Section 230 is actually protected by the First Amendment 
  • Requiring intelligence and law enforcement agencies to get warrants before obtaining Americans’ private telecommunications data 
  • Why “foreign” is the most important word in “Foreign Intelligence Surveillance Act” 
  • Making government officials understand national security isn’t heightened by reducing privacy 
  • Protecting women from having their personal data weaponized against them 

U.S. Sen. Ron Wyden, D-OR, has served in the Senate since 1996; he was elected to his current six-year term in 2022. He chairs the Senate Finance Committee, and serves on the Energy and Natural Resources Committee, the Budget Committee, and the Select Committee on Intelligence; he also is the lead Senate Democrat on the Joint Committee on Taxation. His relentless defiance of the national security community's abuse of secrecy forced the declassification of the CIA Inspector General's 9/11 report, shut down the controversial Total Information Awareness program, and put a spotlight on both the Bush and Obama administrations’ reliance on "secret law." In 2006 he introduced the first Senate bill on net neutrality, and in 2011 he was the lone Senator to stand against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), ultimately unsuccessful bills that purportedly were aimed at fighting online piracy but that actually would have caused significant harm to the internet. Earlier, he served from 1981 to 1996 in the House of Representatives, where he co-authored Section 230 of the Communications Decency Act of 1996—the law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on.

Resources: 

 What do you think of “How to Fix the Internet?” Share your feedback here

Transcript

SENATOR RON WYDEN
It's been all about two things, individuals and innovation. I call it “I squared,” so to speak, because those are my principles. If you kind of follow what I'm trying to do, it's about individuals, it's about innovation. And you know, government has a role to play in guardrails and ensuring that there are competitive markets. But what I really want to do is empower individuals.

CINDY COHN
That's U.S. Senator Ron Wyden of Oregon. He is a political internet pioneer. Since he was first elected to the Senate in 1996, he has fought for personal digital rights, and against corporate and company censorship, and for sensible limits on government secrecy.

[THEME MUSIC BEGINS]

CINDY COHN
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley - EFF's Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to make our digital lives better. And sometimes when we think about the lawmakers in our country, we often think of the conflict and fighting and people who just don’t get it when it comes to how digital works. But there are also some people in the legislatures who have worked to enact real progress.

JASON KELLEY
Our guest this week is one of the giants in the political fight for internet freedom for several decades now. Senator Wyden played a critical role in the passage of Section 230 — a pillar of online freedom of speech that has recently been coming under attack from many different sides. And he introduced the first Senate net neutrality bill back in 2006. He’s consistently pushed back against mass surveillance and pushed for a strong Fourth Amendment, and over the years, he has consistently fought for many of the things that we are fighting for here at EFF as well.

CINDY COHN
Our conversation takes a look back at some of the major milestones of his career, decisions that have directly impacted all of our online lives. And we talk about the challenges of getting Section 230 passed into law in the first place. But more recently, Senator Wyden also talks about why he was strongly opposed to laws like FOSTA-SESTA, which undermined the space that Section 230 creates for some online speakers, using the cover of trying to stop sex trafficking on the internet.

JASON KELLEY
But like us at EFF, Senator Wyden is focusing on the battles happening right now in Congress that could have a fundamental impact on our online lives. When he was elected in the ‘90s, the focus was on the explosion and rapid expansion of the internet. Now he’s thinking about the rapid expansion of artificial intelligence, and how we can make sure that we put the individual before the profits of corporations when it comes to AI.

CINDY COHN
Our conversation covers a lot of ground but we wanted to start with Senator Wyden’s own view of what a good tech future would look like for all of us.

SENATOR RON WYDEN
Well, it's one that empowers the individual. You know, consistently, the battles around here are between big interest groups. And what I want to do is see the individual have more power and big corporations and big government have less as it relates to communications.

CINDY COHN
Yeah. So what would that look like for an ordinary user? What kinds of things might be different?

SENATOR RON WYDEN
What we'd have, for example, is faster adoption of new products and services, with people showing greater trust in emerging technologies. We'd build on the motivations that have been behind my privacy bills, the Fourth Amendment Is Not For Sale, for example, Section 230, the Algorithmic Accountability Act. Cindy, in each one of these, it's been all about two things: individuals and innovation.

JASON KELLEY
I'm wondering if you're surprised by the way that things have turned out in any specific instance, you know, you had a lot of responsibility for some really important legislation for CDA 230, scaling back some NSA spying issues, helping to stop SOPA-PIPA, which are all, you know, really important to EFF and to a lot of our listeners and supporters. But I'm wondering if, you know, despite that, you've seen surprises in where we are that you didn't expect.

SENATOR RON WYDEN
I didn't expect to have so many opponents across the political spectrum for Section 230. I knew we would have some, but nothing has been the subject of more misinformation than 230. You had Donald Trump, the President of the United States, lying about Section 230 over and over again. I don't think Donald Trump would know what Section 230 was if it hit him in the head, but he was always lying about vote by mail and all those kinds of things.
And huge corporate interests like Big Cable and legacy media have bankrolled massive lobbying and PR campaigns against 230. Since they saw user-created content and the ability of regular people to be heard as a threat to their top-down model, all those big guys have been trying to invent reasons to oppose 230 that I could not have dreamed of.
So I'm not saying, I don't think Chris Cox would say it either, that the law is perfect. But when I think about it, it's really a tool for individuals, people without power, without clout, without lobbies, without big checkbooks. And, uh, you know, a lot of people come up to me and say, "Oh, if you're not in public life, 230 will finally disappear" and all this kind of thing. And I said, I think you're underestimating the power of people to really see what this was all about, which was something very new, a very great opportunity, but still based on a fundamental principle that the individual would be responsible for what they posted in this whole new medium and in the United States individual responsibility carries a lot of weight.

CINDY COHN
Oh, I so agree, and I think that one of the things that we've seen, um, with 230 but with a lot of other things now, is kind of a correct identification of the harm and a wrong identification of what's causing it or what will solve it. So, you know, there are plenty of problems online, but, um, I think we feel, and I think it sounds like you do as well, that we're playing this funny little whack-a-mole game where whatever the problem is, somebody's sliding in to say that 230 is the reason they have that problem, when a lot of times it has to do with something, you know, not related. It could even be, in many cases, the U.S. Constitution, but also kind of misidentifying –

SENATOR RON WYDEN
Cindy, there's a great story that I sometimes tell. The New York Times one day had a big picture of Chris Cox and I, it was practically a full-length page. I'm 6'4", went to college on a basketball scholarship dreaming of playing in the NBA, and they said “these two people are responsible for all the hate information online and 230 empowered people to do it.” And we hardly ever do this, but Keith Chu, our wonderful expert on all things technology, finally touched base with him and said, "you know that if there was no 230, over 95 percent of what we see online that we really dislike — you know, misogyny, hate speech, racism — would still be out there because of the First Amendment, not 230."
And the New York Times, to its credit, printed a long, long apology essentially the next day, making the case that that was really all about the First Amendment, not 230. 230 brought added kind of features to this, particularly the capacity to moderate, which was so important in a new opportunity to communicate.

[MUSIC FADES IN]

CINDY COHN
What drives you towards building a better internet? So many people in Congress in your town don't really take the time to figure out what's going on, much less propose real solutions. They kind of, you know, we've been in this swing where they, they treated the technologies like heroes and now we're in a time when they're treating them like villains. But what drives you to, to kind of figure out what's actually going on and propose real solutions?

SENATOR RON WYDEN
I showed up, Cindy, Oregon's first new United States senator in 34 years, in 1996, the winner, and the only person who knew how to use a computer at that point was, uh, Pat Leahy, who was a great advocate of technology and, and innovation. I said, "I'm going to get into new stuff." In other words, Oregon had always been about wood products. We always will be about wood products and I will continue to champion those kinds of practices, particularly now we're working to prevent these huge fires. I also said we're going to get into new things. And my dad was a journalist and he said, "You're not doing your job if you don't ask hard questions every single day."
So what we tried to do, particularly in those first days, is kind of lay the foundation, just do the foundational principles for the internet. I mean, there's a book, Jeff Kosseff wrote “26 Words That Created the Internet,” but we also had internet tax policy to promote non-discrimination, so you wouldn't be treated differently online than you would be offline.
Our digital signatures law, I think, has been a fabulous, you know, addition. People used to spend hours and hours in offices, you know, kind of signing these documents that look like five phone books stacked on top of each other, and they'd be getting through it in 15, 20 minutes. So, um, to me, what I think we showed is that you could produce more genuine innovation by thinking through what was to come than just lining the pocketbooks of these big entrenched interests. Now, a big part of what we're going to have to do now with AI is go through some of those same kinds of issues. You know, I think for example, we're all in on beating China. That's important. We're all in on innovation, but we've got to make sure that we cement bedrock, you know, privacy and accountability.
And that's really what's behind the Algorithmic Accountability Act because, you know, when people were getting ripped off in terms of housing and education and the like with AI, we wanted to get them basic protection.

JASON KELLEY
It sounds like you're, you know, you're already thinking about this new thing, AI, and in 20 or more years ago, you were thinking about the new thing, which is posting online. How do we get more of your colleagues to sort of have that same impulse to be interested in tackling those hard questions that you mentioned? I think we always wonder what's missing from their views, and we just don't really know how to make them sort of wake up to the things that you get.

SENATOR RON WYDEN
What we do is particularly focus on getting experienced and knowledgeable and effective staff. I tell people I went to school on a basketball scholarship. I remember recruiting, we kind of recruit our technologists like they were all LeBron James, and kind of talking about, you know, why there were going to be opportunities here. And we have just a terrific staff now, really led by Chris Soghoian and Keith Chu.
And it's paid huge dividends, for example, when we look at some of these shady data broker issues and government surveillance. Now, with the passing of my friend Dianne Feinstein, I'm one of the most senior members in the intelligence field, and, uh, these incredibly good staff allow me to get into these issues. Right now I'm with Senator Jerry Moran of Kansas trying to upend the declassification system, because it basically doesn't declassify anything, and I'm not sure they could catch bad guys, and they certainly are hanging on to stuff that is irresponsible, uh, information collection about innocent people.

[SHORT MUSIC INTERLUDE]

CINDY COHN
These are all problems that, of course, we're very deep in, and we do appreciate that you've brought in really good technologists, people who understand technology, to advise you, like our friend Chris Soghoian, who EFF's known for a long time. How do we get more senators to do that too? Are there things that we could help build that would make that easier?

SENATOR RON WYDEN
I think there are, and I think we need to do more, not post-mortems, but sort of after-action kinds of analysis. For example, the vote on SESTA-FOSTA was 98 to 2, and nobody was sure where the other vote was. Rand Paul came up to me and said, "You're right, so I'm voting with you."
And, uh, the point really was, you know, everybody hated the scourge of sex trafficking and the like. I consider those people monsters. But I pointed out that all you're going to do is drive them from a place where there was transparency to the dark web, where you can't get a search engine. And people go, "Huh? Well, Ron's telling us, you know, that it's going to get worse." And then I offered an amendment to do what I think would have really made a difference there, which is get more prosecutors and more investigators going after bad guys. And the ultimate argument for these sort of after-action, after-legislating reviews is that everybody said, "Well, you know, you've got to have SESTA-FOSTA, or you're never going to be able to do anything about Backpage," this horrible place where there were real problems with respect to sex trafficking. And what happened was, Backpage was put out of business under existing law, not under SESTA-FOSTA. When you guys have this discussion with people who are following the program, ask them when their senator or congressperson last had a press conference about SESTA-FOSTA.
I know the answer to this: I can't find a single press conference about SESTA-FOSTA, which was ballyhooed at the time as this miraculous cure for dealing with really bad guys. The technology didn't make sense, the education didn't make sense, and the history with Backpage didn't make any sense. It's because people got all intoxicated with these ideas that somehow they were going to be doing this wondrous, you know, thing, and it really made things worse.

CINDY COHN
So I'm hearing three things in the better world. One, the one you've just mentioned, is that we actually have real accountability: when we pass some kind of regulation, we take the time to look back and see whether it worked. Two, that we have informed people, who are either advising the lawmakers and regulators or are the lawmakers and regulators, who understand how things really work.
And the third one is that we have a lot more accountability inside government around classification and secrecy, especially around things involving, you know, national security. And you're in this position, right, where you are read in as a member of the Intelligence Committee, so you kind of see what the rest of us don't. Obviously I don't want you to reveal anything, but is that an important gap to close?

SENATOR RON WYDEN
Yeah, I mean, you know, there have been a lot of 14-to-1 votes in the Intelligence Committee over the years, and I've been the one. The reality is people often get swept up in these kinds of arguments, particularly from people in government. We're having a big debate about surveillance now, Section 702, and everybody's saying, "Ron, what are you talking about? You're opposing this; we face all these kinds of threats." And what I've always said is, read the title of the bill: the Foreign Intelligence Surveillance Act. That means we're worried about foreign intelligence. We're not, under that law, supposed to be sweeping up the records of vast numbers of Americans who are interconnected to those foreign individuals by virtue of the fact that communication systems have changed.
And I personally believe that smart policies ensure that you can fight terror ferociously while still protecting civil liberties, and not-so-smart policies give you less of both.

JASON KELLEY
How do we get to that balance you're talking about? I know a lot of people feel like we do have to have some level of surveillance to protect national security, but balancing that against protecting individual rights is complicated, and I'm wondering how you think about what that looks like for people.

SENATOR RON WYDEN
Well, for example, Zoe Lofgren, you know, Zoe has been a partner of mine on many projects, and I know she's been sympathetic with all of you for many years in her service as a member from California. What we said in our 702 reforms, and by the way, we had a whole bunch of Republicans with us, is that there needs to be a warrant requirement. If you're going after the personal data of Americans, there should be a warrant requirement.

Now, we were then asked, "Well, what happens if it's some kind of imminent crisis?" And what I've always said is that all my bills, as they relate to surveillance, have a warrant exception: if the government believes that there is an imminent threat to the security of our country and our people, the government can go up immediately and come back and settle the warrant matter afterwards. At one point I was having a pretty vigorous debate with the President and his people, then-President Obama, and I said, "Mr. President, if the warrant requirement exception isn't written right, you all write it and I'm sure we'll work it out."
But I think that giving the government a wide berth to assess whether there is a real threat to the country, where they're prepared not only to go up immediately to get the information but also to trust the process and come back later to show that it was warranted, is a fair balance. That's the kind of thing I'm working on right now.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
And now back to our conversation with Senator Ron Wyden and his work on privacy laws.

SENATOR RON WYDEN
Really, the first big law that I got passed involved the privacy rights of Americans outside the country. We had won a bunch of battles before that, you know, defeating John Poindexter's Total Information Awareness program and a variety of others.
But when I started this, trying to protect the privacy rights of Americans who are outside the United States, you would have thought that Western civilization was going to end. And this was the Bush administration. And the DNI, the head of national intelligence, talked to me. He said, "Ron, this is just going to be disastrous. It's going to be horrible."
And I walked him through who we were talking about. I said, the biggest group of people we're talking about are the men and women who wear the uniform of the United States, because they are outside the United States. You can't possibly be telling me, Director McConnell (it was Director McConnell at that time) that they shouldn't have privacy rights. And then things kind of moved, and I kept working with them, and they still said that this was going to be a tremendous threat and all the rest. They were going to veto it; they actually put out a statement that there would be a veto message. So I worked with them a little bit more, and we worked it out. And when we were done, the Bush administration put out something saying, in effect, we are proud to be protecting the privacy rights of Americans outside the United States.
So, if you can just take enough time and be persistent enough, you can get things done. And now, we actually have elected officials and presidents of both political parties all taking credit for the privacy rights of people outside the United States.

[MUSIC STING COMES IN TO INTRO CLIP]

SENATOR RON WYDEN ON CSPAN
A yes or no answer to the question, does the NSA collect any type of data at all on millions or hundreds of millions of Americans?

JAMES CLAPPER ON CSPAN
No sir.

SENATOR RON WYDEN ON CSPAN
It does not.

JAMES CLAPPER ON CSPAN
Not wittingly. There are cases where they could inadvertently, perhaps, collect, but not, not wittingly.

CINDY COHN
That's a clip from CSPAN, a pretty famous interaction you had with James Clapper in 2013. But I think the thing that really shines through is your ability to walk a fine line: you're very respectful of the system, even in an instance like this where someone is lying under oath right to your face, because you know you have to work within the system to make change. How do you navigate that in the face of lies and misdirection?

SENATOR RON WYDEN
Well, you have to take the time to really tee it up, and I really credit John Dickas of Oregon, our staffer at the time, who did a phenomenal job. He spent about six months teeing that question up for Mr. Clapper. What happened is that Mr. Clapper's deputy, Keith Alexander, had been telling what my 11-year-old daughter (my wife and I are older parents) calls whoppers. She said, "Dad, that was a big whopper. That guy told a big whopper." Keith Alexander told a bunch of whoppers, and then Mr. Clapper did. And this had all been done in public, so we asked for answers, and he wouldn't give any. Then he came to the one, um, open threat hearing that we have each year. And we prepare for those open threat hearings like there is no tomorrow, because you don't get very many opportunities to ask, you know, the important questions. John Dickas sent Mr. Clapper the question a day in advance, so that nobody could say they hadn't gotten it. And it's an informal rule in the Intelligence Committee that if an official feels they can't answer, they just say, "I can't answer, I have to do it in private." I wouldn't have liked that answer, but I would have respected it and tried to figure out some other way. Instead, James Clapper got the question, looked at the camera, looked at me, and just lied, and then persisted: he had like five or six excuses for how he wasn't lying. And I think, as the country found out what was going on, that was a big part of our push to produce the next round of laws that provided some scrutiny over the Patriot Act.

CINDY COHN
I think that's a really important insight, right? The thing that led to people being upset about the massive surveillance, once they understood it, was the lie, right? If there were more transparency on the part of the national security people, if they didn't just tell themselves that they have to lie to all the rest of us, you know, in order to keep us safe, which I think is a very, very dangerous story in a democracy, we might end up in a much more reasonable place for everyone on privacy and security. And I actually don't think it's a balance. I think that you only get security if you have privacy, rather than having to trade one off against the other, and –

SENATOR RON WYDEN
You're a Ben Franklin person, Cindy. Anybody who gives up liberty to have security doesn't deserve either.

CINDY COHN
Well, I think that's kind of right, but I also think that history has shown that intense, overbroad secrecy actually doesn't make us safer. And I think this goes back to your point about accountability, where we really do need to look back and ask: these things that have been embraced as allegedly making us safer, are they actually making us safer, or are we better off having a different role for secrecy? Not that there's no role, but the current one is an all-purpose excuse: no matter what the government does, it just uses the secrecy argument to make sure that the American people can't find out, so that we can't evaluate whether things are working or not.
My experience watching these things, and I don't know about yours, is that the overblown secrecy isn't actually making us safer.

[SHORT MUSIC INTERLUDE]

JASON KELLEY
Before we wrap up, we wanted to get a sense from you of what issues you see coming in the next three years or so that we're going to need to be thinking about to be ahead of the game. What's at the top of your mind looking forward?

SENATOR RON WYDEN
The impact of the Dobbs decision repealing Roe v. Wade is going to have huge ripple effects through our society. Women are already having their personal information weaponized against them. You're seeing it in states with, you know, MAGA attorneys general, but you're also seeing it elsewhere: we did a big investigation of pharmacies, and pharmacies are giving out women's personal information hither and yon. So we're very much committed to getting privacy rights here. And I also want to congratulate EFF on your Who's Got Your Back report, because you really are touching on these same kinds of issues, and I think getting a warrant ought to be really important.
And the other one I mentioned is, uh, fighting government censorship, both at home and abroad. It's no secret that China, Russia, and India want to control what people can say and read, but if you look at some of what we're seeing in this country, like the U.S. Trade Representative taking a big step backwards in terms of access to information, we're going to have to deal with that here in our country too.

CINDY COHN
Oh, those are scary, but wonderful and important things. I really appreciate you taking the time to talk to us. It's always such a pleasure, and we are huge fans of the work that you've done. Thank you so much for carrying, you know, the “I squared” banner, individuals and innovation. Those are two values close to our hearts here at EFF, and we really appreciate having you in Congress championing them as well.

SENATOR RON WYDEN
I don't want to make this a bouquet-tossing contest, but we've had a lot of opportunities to work together, and, you know, EFF is part of the Steppin' Up Caucus, and I really appreciate it. Let's put this in "to be continued," okay?

CINDY COHN
Terrific.

SENATOR RON WYDEN
Thanks, guys.

CINDY COHN
I really could talk with Senator Wyden all day and specifically talk with him about national security all day, but what a great conversation. And it's so refreshing to have somebody who's experienced in Congress who really is focusing on two of the most important things that EFF focuses on as well. I love the framing of I squared, right? Individuals and innovation as the kind of centerpiece of a better world.

JASON KELLEY
Yeah. And you know, he's not just saying it. It's clear from his bills and his work over the years that he really does center those things. Innovation and individuals are really at the core of things like Section 230 and many other pieces of legislation that he's worked on, and it's just really nice and refreshing to hear someone in the Senate who has a really strong ethos and the background to show that he means it.

CINDY COHN
Yeah, and you know, sometimes we disagree with Senator Wyden, but it's always refreshing to feel like, well, we're all trying to point in the same direction. We sometimes have disagreements about how to get there.

JASON KELLEY
Yeah. And one of the great things about working with him is that he and his staff are tech-savvy, so our disagreements are often pretty nuanced, at least from what I can remember. We aren't having disagreements about what a technology is or something like that very often. We're usually having really good conversations with his folks, because he has some of the most tech-savvy staff in the Senate, and he's helped really make the Senate more tech-savvy overall.

CINDY COHN
Yeah, I think that this is one of these pieces of a better internet that feels kind of indirect but is actually really important, which is making sure that our lawmakers, who don't all have to be technologists (we have a couple technologists in Congress now), really are informed by people who understand how technology works.
And I think one of the things that's important when we show up, a lot of the time, is having a clear ability to explain to people, whether it's the members of Congress themselves or their staff, how things really work. Having that kind of expertise in house is, I think, something that's going to be really important if we're going to get to a better internet.

JASON KELLEY
Yeah. And it's clear that we still have work to do. He brought up SESTA-FOSTA, and that's an instance where he and his staff understood that it was a bad bill, but the vote was still, as he said, 98 to 2. And ultimately that was a tech bill. I think if we had even more tech-savvy folks, we wouldn't have had such a fight over that bill.

CINDY COHN
And I think he also pointed to something really important, which is this idea of after-action analysis: looking back and saying, "Well, we passed this thing, did it do what we had hoped it would do?" as a way to have a process for error correction. And I noted that Ro Khanna, Elizabeth Warren, and Senator Wyden have actually floated a bill to have an investigation into SESTA-FOSTA, which, for those who don't know the shorthand, was a way that Section 230 protection was cut back. The idea was that it could help stop sex trafficking. Well, all the data we've seen so far shows that it did not do that, and in some ways it made sex trafficking in the offline environment more dangerous. But having Congress actually step in and sponsor the research to figure out whether the bill that Congress passed did the thing they said it would is, I think, just a critical piece of how we decide what we're going to do to protect individuals and innovation online.

JASON KELLEY
Yeah. For me, it's actually tied to something that I know a lot of tech teams do, which is a sort of post-mortem. After something happens, you really do need to investigate how you got there, what worked and what didn't. But in this case we all know, at least at EFF, that this was a bad bill.

CINDY COHN
Yeah, I mean, sometimes it might be just taking what we know anecdotally and turning it into something that Congress can more easily see and digest. Um, I think the other thing is that it's just impossible to talk with, or about, Senator Wyden without talking about national security, because he has been heroic in his efforts to make sure that we don't trade privacy off for security, and that we recognize that these two things are linked: by lifting up privacy, we're lifting up national security.
And by reducing privacy, we're not actually making ourselves safer. And I think what was heartening about this conversation was hearing how he convinced national security hawks to support something that stood with privacy: the story about how most of the Americans abroad are affiliated in one way or another with the U.S. military, people who are stationed abroad and their families, and how standing up for their privacy and framing it that way ultimately led to some success. Now, we've got a long way to go, and I think he'd be the first to agree. But the kind of doggedness and willingness to be in there for the long haul, and to talk to the national security folks about how these two values support each other, is something he has really proven he's willing to do, and it's so important.

JASON KELLEY
Yeah, that's exactly right, I think, as well. And it's also terrific that he's looking to the future. We know he's thinking about these things; 702 has been an issue for a long time and he's still focused on it. But what did you think of his thoughts about our coming challenges, things like how to deal with data in a post-Dobbs world, for example?

CINDY COHN
Oh, I think he's right on it. He's recognizing, as a lot of people have, that the Dobbs decision overturning Roe v. Wade has really made it clear how vulnerable we are, based on the data we have to leave behind in what we do every day. You can do things to try to protect that data, but there's only so much you can do right now without changes in the law and in the way things work, because, you know, your phone needs to know where you are in order to ring when somebody calls you or ping when somebody texts you.
So we need legal answers, and he's correct that this is really coming to the fore right now. I think he's also thinking about the challenges that artificial intelligence is bringing. So I really appreciate that he's already thinking about how we fix the internet in the coming years, not just right now.

JASON KELLEY
I'm really glad we had this bouquet-throwing contest, I think that's what he called it. Something like that. But yeah, it's great to have an ally in the Senate, and I know he feels the same way about us.

CINDY COHN
Oh, absolutely. I mean, you know, part of the way we get to a better internet is to recognize the people who are doing the right thing. And so, you know, we spend a lot of time at EFF throwing rocks at the people who are doing the wrong thing. And that's really important too. But occasionally, you know, we get to throw some bouquets to the people who are fighting the good fight.

[THEME MUSIC FADES IN]

JASON KELLEY

Thanks for joining us for this episode of How To Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch and just see what's happening in digital rights this week and every week.
We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms.
This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.
In this episode you heard Kalte Ohren by Alex and Drops of H2O (The Filtered Water Treatment) by J. Lang.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.
We’ll talk to you again soon.
I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.
