Americans, Be Warned: Lessons From Reddit’s Chaotic UK Age Verification Rollout

1 day 20 hours ago

Age verification has officially arrived in the UK thanks to the Online Safety Act (OSA), a UK law requiring online platforms to check that all UK-based users are at least eighteen years old before allowing them to access broad categories of “harmful” content that go far beyond graphic sexual content. EFF has extensively criticized the OSA for eroding privacy, chilling speech, and undermining the safety of the children it aims to protect. Now that it’s gone into effect, these countless problems have begun to reveal themselves, and the absurd, disastrous outcome illustrates why we must work to avoid this age-verified future at all costs.

Perhaps you’ve seen the memes as large platforms like Spotify and YouTube attempt to comply with the OSA, while smaller sites—like forums focused on parenting, green living, and gaming on Linux—either shut down or cease some operations rather than face massive fines for not following the law’s vague, expensive, and complicated rules and risk assessments. 

But even Reddit, a site that prizes anonymity and has regularly demonstrated its commitment to digital rights, was doomed to fail in its attempt to comply with the OSA. Though Reddit is not alone in bowing to the UK mandates, it provides a perfect case study and a particularly instructive glimpse of what the age-verified future would look like if we don’t take steps to stop it.

It’s Not Just Porn—LGBTQ+, Public Health, and Politics Forums All Behind Age Gates

On July 25, users in the UK were shocked and rightfully revolted to discover that their favorite Reddit communities were now locked behind age verification walls. Under the new policies, UK Redditors were asked to submit a photo of their government ID and/or a live selfie to Persona, the for-profit vendor that Reddit contracts with to provide age verification services. 

For many, this was the first time they realized what the OSA would actually mean in practice—and the outrage was immediate. As soon as the policy took effect, reports emerged from users that subreddits dedicated to LGBTQ+ identity and support, global journalism and conflict reporting, and even public health-related forums like r/periods, r/stopsmoking, and r/sexualassault were walled off to unverified users. A few more absurd examples of the communities that were blocked off, according to users, include: r/poker, r/vexillology (the study of flags), r/worldwar2, r/earwax, r/popping (the home of grossly satisfying pimple-popping content), and r/rickroll (yup). This is, again, exactly what digital rights advocates warned about. 

The OSA defines “harmful” in multiple ways that go far beyond pornography, so the obstacles UK users are experiencing are exactly what the law intended. Like other online age restrictions, the OSA obstructs way more than kids’ access to clearly adult sites. When fines are at stake, platforms will always default to overcensoring. So every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose?

Again, the fact that the OSA has forced Reddit, the “heart of the internet,” to overcensor user-generated content is noteworthy. Reddit has historically succeeded where many others have failed in safeguarding digital rights—particularly the free speech and privacy of its users. It may not be perfect, but Reddit has worked harder than many large platforms to defend Section 230, a key law in the US protecting free speech online. It was one of the first platforms to endorse the Santa Clara Principles, and it was the only platform to receive every star in EFF’s 2019 “Who Has Your Back” (Censorship Edition) report due to its unique approach to moderation, its commitment to notice and appeals of moderation decisions, and its transparency regarding government takedown requests. Reddit’s users are particularly active in the digital rights world: in 2012, they helped EFF and other advocates defeat SOPA/PIPA, a dangerous censorship law. Redditors were key in forcing members of Congress to take a stand against the bill, and were the first to declare a “blackout day,” a historic moment of online advocacy in which over a hundred thousand websites went dark to protest the bill. And Reddit is the only major social media platform where EFF doesn’t regularly share our work—because its users generally do so on their own. 

If a platform with a history of fighting for digital rights is forced to overcensor, how will the rest of the internet look if age verification spreads? Reddit’s attempts to comply with the OSA show the urgency of fighting these mandates on every front. 

We cannot accept these widespread censorship regimes as our new norm. 

Rollout Chaos: The Tech Doesn’t Even Work! 

In the days after the OSA became effective, backlash to the new age verification measures spread across the internet like wildfire as UK users made their hatred of these new policies clear. VPN usage in the UK soared, over 500,000 people signed a petition to repeal the OSA, and some shrewd users even discovered that video game face filters and meme images could fool Persona’s verification software. But these loopholes aren’t likely to last long, as we can expect the age-checking technology to continuously adapt to new evasion tactics. As good as they may be, VPNs cannot save us from the harms of age verification. 

Even when the workarounds inevitably cease to function and the age-checking procedures calcify, age verification measures still will not achieve their singular goal of protecting kids from so-called “harmful” online content. Teenagers will, uh, find a way to access the content they want. Instead of going to a vetted site like Pornhub for explicit material, curious young people (and anyone else who does not or cannot submit to age checks) will be pushed to the sketchier corners of the internet—where there is less moderation, more safety risk, and no regulation to prevent things like CSAM or non-consensual sexual content. In effect, the OSA and other age verification mandates like it will increase the risk of harm, not reduce it. 

If that weren’t enough, the slew of practical issues that have accompanied Reddit’s rollout also reveals the inadequacy of age verification technology to meet our current moment. For example, users reported various bugs in the age-checking process, like being locked out or asked repeatedly for ID despite complying. UK-based subreddit moderators also reported facing difficulties either viewing NSFW post submissions or vetting users’ post history, even when the particular submission or subreddit in question was entirely SFW. 

Taking all of this together, it is abundantly clear that age-gating the internet is not the solution to kids’ online safety. Whether due to issues with the discriminatory and error-prone technology, or simply because they lack either a government ID or personal device of their own, millions of UK internet users will be completely locked out of important social, political, and creative communities. If we allow age verification, we welcome new levels of censorship and surveillance with it—while further lining the pockets of big tech and the slew of for-profit age verification vendors that have popped up to fill this market void.

Americans, Take Heed: It Will Happen Here Too

The UK age verification rollout, chaotic as it is, is a proving ground for platforms that are looking ahead to implementing these measures on a global scale. In the US, there’s never been a better time to get educated and get loud about the dangers of this legislation. EFF has sounded this alarm before, but Reddit’s attempts to comply with the OSA show its urgency: age verification mandates are censorship regimes, and in the US, porn is just the tip of the iceberg.

US legislators have been disarmingly explicit about their intentions to use restrictions on sexually explicit content as a Trojan horse that will eventually help them censor all sorts of other perfectly legal (and largely uncontroversial) content. We’ve already seen them move the goalposts from porn to transgender and other LGBTQ+ content. What’s next? Sexual education materials, reproductive rights information, DEI or “critical race theory” resources—the list goes on. Under KOSA, which last session passed the Senate with an enormous majority but did not make it to the House, we would likely see results here similar to what we’re seeing in the UK under the OSA.

Nearly half of U.S. states have some sort of online age restrictions in place already, and the Supreme Court recently paved the way for even more age blocks on online sexual content. But Americans—including those under 18—still have a First Amendment right to view content that is not sexually explicit, and EFF will continue to push back against any legislation that expands the age mandates beyond porn, in statehouses, in courts, and in the streets. 

What can you do?

Call or email your representatives to oppose KOSA and any other federal age-checking mandate. Tell your state lawmakers, wherever you are, to oppose age verification laws. Make your voice heard online, and talk to your friends and family. Tell them about what’s happening to the internet in the UK, and make sure they understand what we all stand to lose—online privacy, security, anonymity, and expression—if the age-gated internet becomes a global reality. EFF is building a coalition to stop this enormous violation of digital rights. Join us today.

Molly Buckley

EFF to Court: Chatbot Output Can Reflect Human Expression

4 days 23 hours ago

When a technology can have a conversation with you, it’s natural to anthropomorphize that technology—to see it as a person. It’s tempting to see a chatbot as a thinking, speaking robot, but this gives the technology too much credit. This can also lead people—including judges in cases about AI chatbots—to overlook the human expressive choices connected to the words that chatbots produce. If chatbot outputs had no First Amendment protections, the government could potentially ban chatbots that criticize the administration or reflect viewpoints the administration disagrees with.

In fact, the output of chatbots not only can reflect the expressive choices of their creators and users, but also implicates users’ right to receive information. That’s why EFF and the Center for Democracy and Technology (CDT) have filed an amicus brief in Garcia v. Character Technologies explaining how large language models work and the various kinds of protected speech at stake.

Among the questions in this case is the extent to which free speech protections extend to the creation, dissemination, and receipt of chatbot outputs. Our brief explains how the expressive choices of a chatbot developer can shape its output, such as during reinforcement learning, when humans are instructed to give positive feedback to responses that align with the scientific consensus around climate change and negative feedback for denying it (or vice versa). This chain of human expressive decisions extends from early stages of selecting training data to crafting a system prompt. A user’s instructions are also reflected in chatbot output. Far from being the speech of a robot, chatbot output often reflects human expression that is entitled to First Amendment protection.
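To make that chain of decisions concrete, here is a minimal, hypothetical sketch—nothing in it is drawn from the brief or from any particular chatbot’s code, and the system prompt text, the build_conversation helper, and the generate() call are all illustrative assumptions. It simply shows how a developer’s system prompt and a user’s instruction are both assembled into the input that shapes what a chatbot ultimately says.

```python
# Hypothetical sketch: how developer and user expressive choices are combined
# into the input that shapes a chatbot's output. `generate()` stands in for
# any large language model API and is not a real library call.

SYSTEM_PROMPT = (  # written by the chatbot's developer -- an expressive choice
    "You are a helpful assistant. When asked about climate change, "
    "answer in line with the scientific consensus."
)

def build_conversation(user_instruction: str) -> list[dict]:
    """Combine the developer's system prompt with the user's instruction."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},    # developer's expression
        {"role": "user", "content": user_instruction},   # user's expression
    ]

# The user's request is itself expressive; the output reflects both choices.
conversation = build_conversation("Summarize the evidence that the climate is warming.")
# output = generate(conversation)  # hypothetical model call
```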

In addition, the right to receive speech in itself is protected—even when the speaker would have no independent right to say it. Users have a right to access the information chatbots provide.

None of this is to suggest that chatbots cannot be regulated or that the harms they cause cannot be addressed. The First Amendment simply requires that those regulations be appropriately tailored to the harm to avoid unduly burdening the right to express oneself through the medium of a chatbot, or to receive the information it provides.

We hope that our brief will be helpful to the court as the case progresses, as the judge decided not to send the question up on appeal at this time.

Read our brief below.

Katharine Trendacosta

No Walled Gardens. No Gilded Cages.

5 days 3 hours ago

Sometimes technology feels like a gilded cage, and you’re not the one holding the key. Most people can’t live off the grid, so how do we stop data brokers who track and exploit you for money? Tech companies that distort what you see and hear? Governments that restrict, censor, and intimidate? No one can do it alone, but EFF was built to protect your rights. With your support, we can take back control.

Join EFF

With 35 years of deep expertise and the support of our members, EFF is delivering bold action to solve the biggest problems facing tech users: suing the government when it oversteps its bounds; empowering people and lawmakers to hold the line; and creating free, public interest software tools, guides, and explainers to make the web better.

EFF members enable thousands of hours of our legal work, activism, investigation, and software development for the public good. Join us today.

No Walled Gardens. No Gilded Cages.

Think about it: in the face of rising authoritarianism and invasive surveillance, where would we be without an encrypted web? Your security online depends on researchers, hackers, and creators who are willing to take privacy and free speech rights seriously. That's why EFF will eagerly protect the beating heart of that movement at this week's summer security conferences in Las Vegas. This renowned summit of computer hacking events—BSidesLV, Black Hat USA, and DEF CON—illustrates the key role a community can play in helping you break free of the trappings of technology and retake the reins.

For summer security week, EFF’s DEF CON 33 t-shirt design Beyond the Walled Garden by Hannah Diaz is your gift at the Gold Level of membership. Look closer to discover this year’s puzzle challenge! Many thanks to our volunteer puzzlemasters jabberw0nky and Elegin for all their work.

A Token of Appreciation

Become a recurring monthly or annual Sustaining Donor this week and you'll get a numbered EFF35 Challenge Coin. Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.

Our team is on a relentless mission to protect your civil liberties and human rights wherever they meet tech, but it’s only possible with your help.

Donate Today

Break free of tech’s walled gardens.

Aaron Jue

Blocking Access to Harmful Content Will Not Protect Children Online, No Matter How Many Times UK Politicians Say So

5 days 7 hours ago

The UK is having a moment. In late July, new rules took effect that require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

During the four years that the legislation behind these changes—the Online Safety Act (OSA)—was debated in Parliament, and in the two years since, while the UK’s independent online regulator, Ofcom, devised the implementing regulations, experts from across civil society repeatedly flagged concerns about the impact of this law on both adults’ and children’s rights. Yet politicians in the UK pushed ahead and enacted one of the most contentious age verification mandates that we’ve seen.

Safety online is not solved through technology alone.

No one—no matter their age—should have to hand over their passport or driver’s license just to access legal information and speak freely. As we’ve been saying for many years now, the approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the children that it is trying to protect. Here are five reasons why:

Age Verification Systems Lead to Less Privacy 

Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy. To keep children out of a website or away from certain content, online services need to confirm the ages of all their visitors, not just children—for example by asking for government-issued documentation or by using biometric data, such as face scans, that are shared with third-party services like Yoti or Persona to estimate whether the user is over 18. This means that adults and children must all share their most sensitive and personal information with online services to access a website. 

Once this information is shared to verify a user’s age, there’s no way for people to know how it’s going to be retained or used by that company, including whether it will be sold or shared with even more third parties like data brokers or law enforcement. The more information a website collects, the more chances there are for that information to get into the hands of a marketing company, a bad actor, a state actor, or someone who has filed a legal request for it. If a website, or one of the intermediaries it uses, misuses or mishandles the data, the visitor might never find out. There is also a risk that this data, once collected, can be linked to other unrelated web activity, creating an aggregated profile of the user that grows more valuable as each new data point is added. 

As we argued extensively during the passage of the Online Safety Act, any attempt to protect children online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. But with the Online Safety Act, users are being forced to trust that platforms (and whatever third-party verification services they choose to partner with) are guardrailing users’ most sensitive information—not selling it through the opaque supply chains that allow corporations and data brokers to make millions. The solution is not to come up with a more sophisticated technology, but to simply not collect the data in the first place.

This Isn’t Just About Safety—It’s Censorship

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But under the Online Safety Act, the UK government—with Ofcom—is deciding what speech young people have access to, and is forcing platforms to remove any content considered harmful. As part of this, platforms are required to build “safer algorithms” to ensure that children do not encounter harmful content, and introduce effective content moderation systems to remove harmful content when platforms become aware of it. 

Because the OSA threatens large fines or even jail time for any non-compliance, platforms are forced to over-censor content to ensure that they do not face any such liability. Reports are already showing the censorship of content that falls outside the parameters of the OSA, such as footage of police attacking pro-Palestinian protestors being blocked on X, the subreddit r/cider—yes, the beverage—asking users for photo ID, and smaller websites closing down entirely. UK-based organisation Open Rights Group are tracking this censorship with their tool, Blocked.

We know that the scope for so-called “harmful content” is subjective and arbitrary, but it also often sweeps up content like pro-LGBTQ+ speech. Policies like the OSA, that claim to “protect children” or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies. But in all scenarios, legal content is being removed at the discretion of government agencies and online platforms, all under the guise of protecting children. 

Children deserve a more intentional and holistic approach to protecting their safety and privacy online.

People Do Not Want This 

Users in the UK have been clear in showing that they do not want this. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK. The BBC reported that one app, Proton VPN, reported an 1,800% spike in UK daily sign-ups after the age check rules took effect. A similar spike in searches for VPNs was evident in January when Florida joined the ever growing list of U.S. states in implementing an age verification mandate on sites that host adult content, including pornography websites like Pornhub. 

Whilst VPNs may be able to disguise the source of your internet activity, they are not foolproof or a solution to age verification laws. Ofcom has already started discouraging their use, and with time, it will become increasingly difficult for VPNs to effectively circumvent age verification requirements as enforcement of the OSA adapts and deepens. VPN providers will struggle to keep up with these constantly changing laws to ensure that users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic. 

Some politicians in the Labour Party argued that a ban on VPNs would be essential to prevent users from circumventing age verification checks. But banning VPNs, just like introducing age verification measures, will not achieve this goal. It will, however, function as an authoritarian control on accessing information in the UK. If you are working to protect your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy—a valuable resource for anyone looking to use these tools.

 Alongside increased VPN usage, a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures. In its official response to the petition, the UK government said that it “has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.” This is not good enough: the government must immediately treat the reasonable concerns of people in the UK with respect, not disdain, and revisit the OSA.

Users Will Be Exposed to Amplified Discrimination 

To check users' ages, three types of systems are typically deployed: age verification, which requires a person to prove their age and identity; age assurance, whereby users are required to prove that they are of a certain age or age range, such as over 18; and age estimation, which typically describes the process or technology of estimating ages to a certain range. The OSA requires platforms to check ages through age assurance to prove that those accessing platforms are over 18, but leaves the specific tool for measuring this at the platforms’ discretion. This may therefore involve uploading a government-issued ID, or submitting a face scan to an app that will then use a third-party platform to “estimate” your age.

From what we know about systems that use face scanning in other contexts, such as face recognition technology used by law enforcement, even the best technology is susceptible to mistakes and misidentification. Just last year, a legal challenge was launched against the Met Police after a community worker was wrongly identified and detained following a misidentification by the Met’s live facial recognition system. 

For age assurance purposes, we know that the technology at best has an error range of over a year, which means that users may risk being incorrectly blocked or locked out of content by erroneous estimations of their age—whether unintentionally or due to discriminatory algorithmic patterns that incorrectly determine people’s identities. These algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance that people should not have to be subject to just to access content that the government could consider harmful.

Not Everyone Has Access to an ID or Personal Device 

Many advocates of the ‘digital transition’ introduce document-based verification requirements or device-based age verification systems on the assumption that every individual has access to a form of identification or their own smartphone. But this is not true. In the UK, millions of people don’t hold a form of identification or own a personal mobile device, instead sharing with family members or using public devices like those at a library or internet cafe. Yet because age checks under the OSA involve checking a user’s age through government-issued ID documents or face scans on a mobile device, millions of people will be left excluded from online speech and will lose access to much of the internet. 

These are primarily lower-income or older people who are often already marginalized, and for whom the internet may be a critical part of life. We need to push back against age verification mandates like the Online Safety Act, not just because they make children less safe online, but because they risk undermining crucial access to digital services, eroding privacy and data protection, and limiting freedom of expression. 

The Way Forward 

Safety online is not solved through technology alone, and children deserve a more intentional and holistic approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves. Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians in the UK to pursue what is best, not what is easy.

Paige Collings

EFF at the Las Vegas Security Conferences

5 days 13 hours ago

It’s time for EFF’s annual journey to Las Vegas for the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. Our lawyers, activists, and technologists are always excited to support this community of security researchers and tinkerers—the folks who push computer security forward (and somehow survive the Vegas heat in their signature black hoodies).  

As in past years, EFF attorneys will be on-site to assist speakers and attendees. If you have legal concerns about an upcoming talk or sensitive infosec research—during the Las Vegas conferences or anytime—don’t hesitate to reach out at info@eff.org. Share a brief summary of the issue, and we’ll do our best to connect you with the right resources. You can also learn more about our work supporting technologists on our Coders’ Rights Project page. 

Be sure to swing by the expo areas at all three conferences to say hello to your friendly neighborhood EFF staffers! You’ll probably spot us in the halls, but we’d love for you to stop by our booths to catch up on our latest work, get on our action alerts list, or become an EFF member! For the whole week, we’ll have our limited-edition DEF CON 33 t-shirts on hand—I can’t wait to see them take over each conference! 


EFF Staff Presentations

Ask EFF at BSides Las Vegas
At this interactive session, our panelists will share updates on critical digital rights issues and EFF's ongoing efforts to safeguard privacy, combat surveillance, and advocate for freedom of expression.
WHEN: Tuesday, August 5, 15:00
WHERE: Skytalks at the Tuscany Suites Hotel & Casino

Recording PCAPs from Stingrays With a $20 Hotspot
What if you could use Wireshark on the connection between your cellphone and the tower it's connected to? In this talk we present Rayhunter, a cell site simulator detector built on top of a cheap cellular hotspot. 
WHEN: Friday, August 8, 13:30
WHERE: DEF CON, LVCC - L1 - EHW3 - Track 1

Rayhunter Build Clinic
Come out and build EFF's Rayhunter! ($10 materials fee as an EFF donation)
WHEN: Friday, August 8 at 14:30
WHERE: DEF CON, Hackers.Town Community Space

Protect Your Privacy Online and on the Streets with EFF Tools
The Electronic Frontier Foundation (EFF) has been protecting your rights to privacy, free expression, and security online for 35 years! One important way we push for these freedoms is through our free, open source tools. We’ll provide an overview of how these tools work, including Privacy Badger, Rayhunter, Certbot, and Surveillance Self-Defense, and how they can help keep you safe online and on the streets.
WHEN: Friday, August 8 at 17:00
WHERE: DEF CON, Community Stage

Rayhunter Internals
Rayhunter is an open source project from EFF to detect IMSI catchers. In this follow-up to our main stage talk about the project, we will take a deep dive into the internals of Rayhunter. We will talk about the architecture of the project, what we have gained by using Rust, porting to other devices, how to jailbreak new devices, the design of our detection heuristics, open source shenanigans, and how we analyze files sent to us.
WHEN: Saturday, August 9, at 12:00
WHERE: DEF CON, Hackers.Town Community Space

Ask EFF at DEF CON 33
We're excited to answer your burning questions on pressing digital rights issues! Our expert panelists will offer brief updates on EFF's work defending your digital rights, before opening the floor for attendees to ask their questions. This dynamic conversation centers challenges DEF CON attendees actually face, and is an opportunity to connect on common causes.
WHEN: Saturday, August 9, at 14:30
WHERE: DEF CON, LVCC - L1 - EHW3 - Track 4

EFF Benefit Poker Tournament at DEF CON 33

The EFF Benefit Poker Tournament is back for DEF CON 33! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Join us at the Planet Hollywood Poker Room as a player or spectator. Play for glory. Play for money. Play for the future of the web. 
WHEN: Friday, August 8, 2025 - 12:00-15:00
WHERE: Planet Hollywood Poker Room, 3667 Las Vegas Blvd South, Las Vegas, NV 89109

Beard and Mustache Contest at DEF CON 33

Yes, it's exactly what it sounds like. Join EFF at the intersection of facial hair and hacker culture. Spectate, heckle, or compete in any of four categories: Full Beard, Partial Beard, Moustache Only, or Freestyle (anything goes, so create your own facial apparatus!). Prizes! Donations to EFF! Beard oil! Get the latest updates.
WHEN: Saturday, August 9, 10:00-12:00
WHERE: DEF CON, Contest Stage (Look for the Moustache Flag)

Tech Trivia Contest at DEF CON 33

Join us for some tech trivia on Saturday, August 9 at 7:00 PM! EFF's team of technology experts has crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Trophy and EFF swag pack. The second and third place teams will also win great EFF gear.
WHEN: Saturday, August 9, 19:00-22:00
WHERE: DEF CON, Contest Stage

Join the Cause!

Come find our table at BSidesLV (Middle Ground), Black Hat USA (back of the Business Hall), and DEF CON (Vendor Hall) to learn more about the latest in online rights, get on our action alert list, or donate to become an EFF member. We'll also have our limited-edition DEF CON 33 shirts available starting Monday at BSidesLV! These shirts have a puzzle incorporated into the design. Snag one online for yourself starting on Tuesday, August 5 if you're not in Vegas!

Join EFF

Support Security & Digital Innovation

Christian Romero

Digital Rights Are Everyone’s Business, and Yours Can Join the Fight!

5 days 18 hours ago

Companies large and small are doubling down on digital rights, and we’re excited to see more and more of them join EFF. We’re first and always an organization that fights for users, so you might be asking: Why does EFF work with corporate donors, and why do they want to work with us?

SHOW YOUR COMPANY SUPPORTS A BETTER DIGITAL FUTURE

JOIN EFF TODAY

Businesses want to work with EFF for two reasons:

  1. They, their employees, and their customers believe in EFF’s values.
  2. They know that when EFF wins, we all win.

Customers and employees alike care about working with organizations they know share their values. And issues like data privacy, sketchy uses of surveillance, and free expression are pretty top of mind for people these days. Research shows that today’s working adults take philanthropy seriously, whether they’re giving organizations their money or their time. For younger generations (like the Millennial EFFer writing this blog post!) especially, feeling like a meaningful part of the fight for good adds to a sense of purpose and fulfillment. Given the choice to spend hard-earned cash with techno-authoritarians versus someone willing to take a stand for digital freedom: We’ll take option two, thanks.

When EFF wins, users win. Standing up for the ability to access, use, and build on technology means that a handful of powerful interests won’t have unfair advantages over everyone else. Whether it’s the fight for net neutrality, beating back patent trolls in court, protecting the right to repair and tinker, or pushing for decentralization and interoperability, EFF’s work can build a society that supports creativity and innovation; where established players aren’t allowed to silence the next generation of creators. Simply put: Digital rights are good for business!

The trust of EFF’s membership is based on 35 years of speaking truth to power, whether it’s on Capitol Hill or in Silicon Valley (and let’s be honest, if EFF was Big Tech astroturf, we’d drive nicer cars). EFF will always lead the work and invite supporters to join us, not the other way around. EFF will gratefully thank the companies who join us and offer employees and customers ways to get involved, too. EFF won’t take money from Google, Apple, Meta, Microsoft, Amazon, or Tesla, and we won’t endorse or sponsor a company, service, or product. Most importantly: EFF won’t alter the mission or the message to meet a donor’s wishes, no matter how much they’ve donated.

A few of the ways your team can support EFF:

  1.  Cash donations
  2. Sponsoring an EFF event
  3. Providing an in-kind product or service
  4. Matching your employees’ gifts
  5. Boosting our messaging

Ready to join us in the fight for a better future? Visit eff.org/thanks.

Tierney Hamilton

Data Brokers Are Ignoring Privacy Law. We Deserve Better.

6 days 1 hour ago

Of the many principles EFF fights for in consumer data privacy legislation, one of the most basic is a right to access the data companies have about you. It’s only fair. So many companies collect information about us without our knowledge or consent. We at least should have a way to find out what they purport to know about our lives.

Yet a recent paper from researchers at the University of California, Irvine found that, of 543 data brokers in California’s data broker registry at the time of publication, 43 percent failed to even respond to requests to access data.

43 percent of registered data brokers in California failed to even respond to requests to access data, one study shows.

Let’s stop there for a second. That’s more than four in ten companies from an industry that makes its money from collecting and selling our personal information, ignoring one of our most basic rights under the California Consumer Privacy Act: the right to know what information companies have about us.

Such failures violate the law. If this happens to you, you should file a complaint with the California Privacy Protection Agency (CPPA) and the California Attorney General's Office.

This is particularly galling because it’s not easy to file a request in the first place. As these researchers pointed out, there is no streamlined process for these time-consuming requests. People often won’t have the time or energy to see them through. Yet when someone does make the effort to file a request, some companies still feel just fine ignoring the law and their customers completely.

Four in ten data brokers are leaving requesters on read, in violation of the law and our privacy rights. That’s not a passing grade in anyone’s book.

Without consequences to back up our rights, as this research illustrates, many companies will bank on not getting caught, or factor weak slaps on the wrist into the cost of doing business.

This is why EFF fights for bills that have teeth. For example, we demand that people have the right to sue for privacy violations themselves—what’s known as a private right of action. Companies hate this form of enforcement, because it can cost them real money when they flout the law.

When the CCPA started out as a ballot initiative, it had a private right of action, including to enforce access requests. But when the legislature enacted the CCPA (in exchange for the initiative’s proponents removing it from the ballot), corporate interests killed the private right of action in negotiations.

We encourage the California Privacy Protection Agency and the California Attorney General’s Office, which both have the authority to bring these companies to task under the CCPA, to look into these findings. Moving forward, we all have to continue to fight for better laws, to strengthen existing laws, and to call on states to enforce the laws on their books to respect everyone’s privacy. Data brokers must face real consequences for brazenly flouting our privacy rights.

Hayley Tsukayama

No, the UK’s Online Safety Act Doesn’t Make Children Safer Online

1 week 2 days ago

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the latest misguided attempts to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country’s Online Safety Act (OSA). 

The legislation attempts to make the UK “the safest place” in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK—including social media, search engines, music sites, and adult content providers—enforce age checks to prevent children from seeing harmful content. Harmful content is defined in three categories, and failure to comply could result in fines of up to 10% of global revenue or courts blocking services:

  1. Primary priority content that is harmful to children: 
    1. Pornographic content.
    2. Content which encourages, promotes or provides instructions for:
      1. suicide;
      2. self-harm; or 
      3. an eating disorder or behaviours associated with an eating disorder.
  2. Priority content that is harmful to children: 
    1. Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
    2. Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment; 
    3. Content that encourages, promotes or provides instructions for serious violence against a person; 
    4. Bullying content;
    5. Content which depicts serious violence against or graphically depicts serious injury to a person or animal (whether real or fictional); 
    6. Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and 
    7. Content that encourages the self-administration of harmful substances.
  3. Non-designated content that is harmful to children (NDC): 
    1. Content is NDC if it presents a material risk of significant harm to an appreciable number of children in the UK, provided that the risk of harm does not flow from any of the following:
      1. the content’s potential financial impact;
      2. the safety or quality of goods featured in the content; or
      3. the way in which a service featured in the content may be performed.

    Online service providers must make a judgement about whether the content they host is harmful to children, and if so, address the risk by implementing a number of measures, which includes, but is not limited to:

    1. Robust age checks: Services must use “highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.”

      To do this, all users on sites that host this content must verify their age, for example by uploading a form of ID like a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving permission for the age-check service to access information from your bank about whether you are over 18. 

    2. Safer algorithms: Services “will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.”

    3. Effective moderation: All services “must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.” 

    Since these measures took effect in late July, social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content on their sites. Porn websites like Pornhub and YouPorn implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address so that age-estimation technology can analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also requiring users to submit face scans to third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters to try to enforce the UK legislation on U.S.-based companies such as the right-wing platform Gab.

    The UK Must Do Better

    The UK is not alone in pursuing such a misguided approach to protect children online: the U.S. Supreme Court recently paved the way for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week ruled that porn websites can check users’ ages; the European Commission is pushing forward with plans to test its age-verification app; and Australia’s ban on youth under the age of 16 accessing social media is likely to be implemented in December. 

    But the UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet.

    And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures. 

    The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.

    Paige Collings

    TechEd Collab: Building Community in Arizona Around Tech Awareness

    1 week 2 days ago

    Earlier this year, EFF welcomed Technology Education Collaborative (TEC) into the Electronic Frontier Alliance (EFA). TEC empowers everyday people to become informed users of today's extraordinary technology, and helps people better understand the tech that surrounds them on a daily basis. TEC does this by hosting in-person, hands-on events, including right to repair workshops, privacy meetups, tech field trips, and demos. We got the chance to catch up with Connor Johnson, Chief Technology Officer of TEC, and speak with him about the work TEC is doing in the Greater Phoenix area:

    Connor, tell us how Technology Education Collaborative got started, and about its mission.

    TEC was started with the idea of creating a space where industry professionals, students, and the community at large could learn about technology together. We teamed up with Gateway Community College to build the Advanced Cyber Systems Lab. A lot of tech groups in Phoenix meet at varying locations, because they can’t afford or find a dedicated space. TEC hosts community technology-focused groups at the Advanced Cyber Systems Lab, so they can have the proper equipment to work on and collaborate on their projects.

    Speaking of projects, let's talk about some of the main priorities of TEC: right to repair, privacy, and cybersecurity. Having the only right to repair hub in the greater Phoenix metro valley, what concerns do you see on the horizon? 

    One of our big concerns is that many companies have slowly shifted away from repairability to a sense of convenience. We are thankful for the donations from iFixIt that allow people to use the tools they may otherwise not know they need or could afford. Community members and IT professionals have come to use our anti-static benches to fix everything from TVs to 3D printers. We are also starting to host ‘Hardware Happy Hour’ so anyone can bring their hardware projects in and socialize with like-minded people.

    How’s your privacy and cybersecurity work resonating with the community?

    We have had a host of different speakers discuss the current state of privacy and how it can affect different individuals. It was also wonderful to have your Surveillance Litigation Director, Andrew Crocker, speak at our July edition of Privacy PIE. So many of the attendees were thrilled to be able to ask him questions and get clarification on current issues. Christina, CEO of TEC, has done a great job leading our Privacy PIE events and discussing the legal situation surrounding many privacy rights people take for granted. One of my favorite presentations was when we discussed privacy concerns with modern cars, where she touched on aspects like how the cameras are tied to car companies' systems and data collection.

    TEC’s current goal is to focus on building a community that is not just limited to cybersecurity itself. One problem that we’ve noticed is that there are a lot of groups focused on security that don’t branch out into other fields in tech. Security affects all aspects of technology, which is why TEC has been branching out its efforts to other fields within tech like hardware and programming. A deeper understanding of the fundamentals can help us to build better systems from the ground up, rather than applying cybersecurity as an afterthought.

    In the field of cybersecurity, we have been working on a project building a small business network. The idea behind this initiative is to allow small businesses to independently set up their own network, one that provides a good layer of security. Many shops either don’t have the money to afford a security-hardened network or don’t have the technical know-how to set one up. We hope this open-source project will allow people to set up the network themselves, and give students a way to gain valuable work experience.

    It’s awesome to hear of all the great things TEC is doing in Phoenix! How can people plug in and get engaged and involved?

    TEC can always benefit from more volunteers or donations. Our goal is to build community, and we are happy to have anyone join us. All are welcome to the Advanced Cyber Systems Lab at Gateway Community College – Washington Campus, Monday through Thursday, 4 pm to 8 pm. Our website is www.techedcollab.org, and on Facebook we’re www.facebook.com/techedcollab. People can also join our Discord server for some great discussions and updates on our upcoming events!

    Christopher Vines

    👮 Amazon Ring Is Back in the Mass Surveillance Game | EFFector 37.9

    1 week 4 days ago

    EFF is gearing up to beat the heat in Las Vegas for the summer security conferences! Before we make our journey to the Strip, we figured let's get y'all up-to-speed with a new edition of EFFector.

    This time we're covering an illegal mass surveillance scheme by the Sacramento Municipal Utility District, calling out dating apps for using intimate data—like sexual preferences or identity—to train AI, and explaining why we're backing the Wikimedia Foundation in its challenge to the UK’s Online Safety Act.

    Don't forget to also check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF Senior Policy Analyst Matthew Guariglia explains how Amazon Ring is cashing in on the rising tide of techno-authoritarianism. Listen now on YouTube or the Internet Archive.

    LISTEN TO EFFECTOR

    EFFECTOR 37.9 - Amazon Ring Is Back in the Mass Surveillance Game

    Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

    Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

    Christian Romero

    Podcast Episode: Smashing the Tech Oligarchy

    1 week 4 days ago

    Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more.


    (You can also find this episode on the Internet Archive and on YouTube.) 

    Kara Swisher has been documenting the internet’s titans for almost 30 years through a variety of media outlets and podcasts. She believes that with adequate regulation we can keep people safe online without stifling innovation, and we can have an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs. 

    In this episode you’ll learn about:

    • Why it’s so important that tech workers speak out about issues they want to improve and work to create companies that elevate best practices
    • Why completely unconstrained capitalism turns technology into weapons instead of tools
    • How antitrust legislation and enforcement can create a healthier online ecosystem
    • Why AI could either bring abundance for many or make the very rich even richer
    • The small online media outlets still doing groundbreaking independent reporting that challenges the tech oligarchy 

    Kara Swisher is one of the world's foremost tech journalists and critics, and currently hosts two podcasts: On with Kara Swisher and Pivot, the latter co-hosted by New York University Professor Scott Galloway. She's been covering the tech industry since the 1990s for outlets including the Washington Post, the Wall Street Journal, and the New York Times; she is a New York Magazine editor-at-large, a CNN contributor, and cofounder of the tech news sites Recode and All Things Digital. She has also authored several books, including “Burn Book” (Simon & Schuster, 2024), in which she documents the history of Silicon Valley and the tech billionaires who run it. 

    What do you think of “How to Fix the Internet?” Share your feedback here.

    Transcript

    KARA SWISHER: It's a tech that's not controlled by a small group of homogeneous people. I think that's pretty much it. I mean, and there's adequate regulation to allow for people to be safe and at the same time, not too much in order to be innovative and do things – you don't want the government deciding everything.
    It's a place where the internet, which was started by US taxpayers, which was paid for, is beneficial for people, and that there's transparency in it, and that we can see what's happening and what's doing. And again, the concentration of power in the hands of a few people really is at the center of the problem.

    CINDY COHN: That's Kara Swisher, describing the balance she'd like to see in a better digital future. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

    JASON KELLEY: And I'm Jason Kelley -- EFF's Activism Director. You're listening to How to Fix the Internet.

    CINDY COHN: This show is about envisioning a better digital future that we can all work towards.

    JASON KELLEY: And we are excited to have a guest who has been outspoken in talking about how we get there, pointing out the good, the bad and the ugly sides of the tech world.

    CINDY COHN: Kara Swisher is one of the world's foremost tech journalists and critics. She's been covering the industry since the 1990s, and she currently hosts two podcasts: On with Kara Swisher and Pivot, and she's written several books, including last year's Burn Book where she documents the history of Silicon Valley and the tech billionaires who run it.
    We are delighted that she's here. Welcome, Kara.

    KARA SWISHER: Thank you.

    CINDY COHN: We've had a couple of tech critics on the podcast recently, and one of the kind of themes that's come up for us is you kind of have to love the internet before you can hate on it. And I've heard you describe your journey that way as well. And I'd love for you to talk a little bit about it, because you didn't start off, really, looking for all the ways that things have gone wrong.

    KARA SWISHER: I don't hate it. I don't. It's just, you know, I have eyes and I can see, you know, I mean, uh, one of the expressions I always use is you should, um, believe what you see, not see what you believe. And so I always just, that's what's happening. You can see it happening. You can see the coarsening of our dialogue now offline being affected by online. You could just see what's happened.
    But I still love the the possibilities of technology and the promise of it. And I think that's what attracted me to it in the first place, and it's a question of how you use it as a tool or a weapon. And so I always look at it as a tool and some people have taken a lot of these technologies and use them as a weapon.

    CINDY COHN: So what was that moment? Did you, do you have a moment when you decided you were really interested in tech and that you really found it to be important and worth devoting your time to?

    KARA SWISHER: I was always interested in it because I had studied propaganda and the uses of TV and radio and stuff. So I was always interested in media, and this was the media on steroids. And so I recall downloading an entire book onto my computer and I thought, oh, look at this. Everything is digital. And so the premise that I came to at the time, or the idea I came to was that everything that can be digitized would be digitized, and that was a huge idea because that means entire industries would change.

    CINDY COHN: Yeah.

    JASON KELLEY: Kara, you started by talking about this concentration of power, which is obvious to anyone who's been paying attention, and at the same time, you know, we did use to have tech leaders who, I think, had less power. It was less concentrated, but also people were more focused, I think, on solving real problems.
    You know, you talk a lot about Steve Jobs. There was a goal of improving people's lives with technology; it helped the bottom line, but the focus wasn't just on quarterly profits. And I wonder if you can talk a little bit about what you think it would look like if we returned to that in some way. Is that gone?

    KARA SWISHER: I don't think we were there. I think they were always focused on quarterly profits. I think that was a canard. I wrote about it, that they would pretend that they were here to help. You know, it's sort of like the Twilight Zone episode To Serve Man. It's a cookbook. I always thought it was a cookbook for these people.
    And they were always formulated in terms of making money and maximizing value for their shareholders, which was usually themselves. I wasn't stupid. I understood what they were doing, especially when these stocks went to the moon, especially the early internet days and their first boom. And they became instantaires, I think they were called that, which was instant millionaires, and then now beyond that.
    And so I was always aware of the money. Even if they pretended they weren't, they were absolutely aware. And so I don't have a romantic version of this at the beginning, um, except among a small group of people, you know, who were seeing it, like the Whole Earth Catalog and things like that, which were looking at it as a way to bring everybody together or to spread knowledge throughout the world, which I also believed in too.

    JASON KELLEY: Do you think any of those people are still around?

    KARA SWISHER: No, they’re dead.

    JASON KELLEY: I mean, literally, you know, they're literally dead, but are there any heirs of theirs?

    KARA SWISHER: No, I mean, I don't think they had any power. I don't, I think that some of the theoretical stuff was about that, but no, they didn't have any power. The people that had power were the, the Mark Zuckerbergs, the Googles, and even, you know, the Microsofts, I mean, Bill Gates is kind of the exemplification of all that. As he, he took other people's ideas and he made it into an incredibly powerful company and everybody else sort of followed suit.

    JASON KELLEY: And so mostly for you, the concentration of power is the biggest shift that's happened and you see regulation or, you know, anti-competitive moves as ways to get us back.

    KARA SWISHER: We don't have any, like, if we had any laws, that would be great, but we don't have any that constrain them. And now under President Trump, there's not gonna be any rules around AI, probably. There aren't gonna be any significant rules, at least, around any of it.
    So they, the first period, which was the growth of where we are now, was not constrained in any way, and now it's not just not constrained, but it's helping whether it's cryptocurrency or things like that. And so I don't feel like there's any restrictions, like at this point, in fact, there's encouragement by government to do whatever you want.

    CINDY COHN: I think that's a really big worry. And you know, I think you're aware, as are we, that, you know, just because somebody comes in and says they're gonna do something about a problem with legislation doesn't mean that they're actually doing that. And I think sometimes we feel like we sit in this space where we're like, we agree with you on the harm, but this thing you wanna do is a terrible idea. Trying to get the means and the ends connected is kind of a lot of where we live sometimes, and I think you've seen that as well, that, like, once you've articulated the harm, that's kind of the start of the journey about whether the thing that you're talking about doing will actually meet that moment.

    KARA SWISHER: Absolutely. The harms, they don't care about, that's the issue. And I think I was always cognizant of the harms, and that can make you seem like, you know, a killjoy of some sort. But it's not, it's just saying, wow, if you're gonna do this social media, you better pay attention to this or that.
    They acted like the regular problems that people had didn't exist in the world, like racism, you know, sexism. They said, oh, that can be fixed, and they never offered any solutions, and then they created tools that made it worse.

    CINDY COHN: I feel like the people who thought that we could really use technology to build a better world, I, I don't think they were wrong or naive. I just think they got stomped on by the money. Um, and, you know, uh.

    KARA SWISHER: Which inevitably happens.

    CINDY COHN: It does. And the question is, how do you squeeze out something, you know, given that this is the dynamic of capitalism, how do you squeeze out space for protecting people?
    And we've had times in our society when we've done that better, and we've done that worse. And I feel like there are ways in which this is as bad as it's gotten in my lifetime. You know, with the government actually coming in really strongly on the side of empowering the powerful and disempowering the disempowered.
    I see competition as a way to do this. EFF was, you know, it was primarily an organization focused on free speech and privacy, but we kind of backed into talking about competition 'cause we felt like we couldn't get at any of those problems unless we talked about the elephant in the room.
    And I think you think about it, really on the individual, you know, you know all these guys, and on that very individual level of what, what kinds of things will, um, impact them.
    And I'm wondering if you have some thoughts about the kinds of rules or regulations that might actually, you know, have an impact and not, not turn into, you know, yet another cudgel that they get to wield.

    KARA SWISHER: Well any, any would be good. Like I don't, I don't, there isn't any, there isn't any you could speak of that's really problematic for them, except for the courts which are suing over antitrust issues or some regulatory agencies. But in general, what they've done is created an easy glide path for themselves.
    I mean, we don't have a national privacy regulation. We don't have algorithmic transparency bills. We don't have data protection, really, to speak of for people. We don't have, you know, transparency into the data they collect. You know, we have more rules and laws on airplanes and cigarettes and everybody else, but we don't have any here. So, you know, antitrust is a whole nother area of changing our antitrust rules. So these are all areas that have to be looked at. But they haven't passed a thing. I mean, lots of legislators have tried, but, um, it hasn't worked really.

    CINDY COHN: You know, a lot of our supporters are people who work in tech but aren't necessarily the, you know, the tech giants. They're not the tops of these companies, but they work in the companies.
    And one of the things that I, you know, I don't know if you have any insights, if you've thought about this, but we speak with them a lot and they're dismayed at what's going on, but they kind of feel powerless. And I'm wondering if you have thoughts, like, you know, speaking to the people who aren't the Elons and the guys at the top, but who are there, and who I think are critical to keeping these companies going. Are there ways that they can make their voices heard that you've thought of, that might work? I guess I'm pulling on your insight because you know the actual people.

    KARA SWISHER: Yeah, you know, speak out. Just speak out. You know, everybody gets a voice these days and there's all kinds of voices that never would've gotten heard and to, you know, talk to legislators, involve customers, um, create businesses where you do those good practices. Like that's the best way to do it is create wealth and capitalism and then use best practices there. That to me is the best way to do that.

    CINDY COHN: Are there any companies that you look at from where you sit that you think are doing a pretty good job or at least trying? I don't know if you wanna call anybody out, but, um, you know, we see a few, um, and I kind of feel like all the air gets sucked out of the room.

    KARA SWISHER: In bits and pieces. In bits and pieces, you know, Apple's good on the privacy thing, but then it's bad on a bunch of other things. Like you could, like, you, you, the problem is, you know, these are shareholder driven companies and so they're gonna do what's best for them and they could, uh, you know, wave over to privacy or wave over to, you know, more diversity, but they really are interested in making money.
    And so I think the difficulty is figuring out, you know, do they have duties as citizens or do they just have duties as corporate citizens? And so that's always been a difficult thing in our society and will continue to be.

    CINDY COHN: Yeah.

    JASON KELLEY: We've always at EFF really stood up for the user in, in this way where sometimes we're praising a company that normally people are upset with because they did a good thing, right? Apple is good on privacy. When they do good privacy things we say, that's great. You know, and if Apple makes mistakes, we say that too.
    And it feels like, um, you know, we're in the middle of, I guess, a “tech lash.” I don't know when it started. I don't know if it'll ever end. I don't know if there's, if that's even a real term in terms of, like, you know, tech journalism. But do you find that it's difficult to get people to accept sort of, like, any positive praise for companies that are often, just at this point, completely easy to ridicule for all the mistakes they've made?

    KARA SWISHER: I think the tech journalism has gotten really strong. It's gotten, I mean, just look at the DOGE coverage. I think it really, I'll point to WIRED as a good example, as they've done astonishing stuff. I think a lot of people have done a lot on, uh, you know, the abuses of social media. I think they've covered a lot of issues, from the overuse of technology to, you know, all the crypto stuff. It doesn't mean people follow along, but they've certainly been there and revealed a lot of the flaws there. Um, while also covering it as, like, this is what's happening with AI. Like, this is what's happening, here's where it's going. And so you have to cover it as a thing. Like, this is what's being developed. But then there's, uh, others, you know, who have to look into the real problems.

    JASON KELLEY: I get a lot of news from 404 Media, right?

    KARA SWISHER: Yeah, they’re great.

    JASON KELLEY: That sort of model is relatively new and it sort of sits against some of these legacy models. Do you see, like, a growing role for things like that in the future?

    KARA SWISHER: There's lots of different things. I mean, I came from that, as you mean, part of the time, although I got away from it pretty quickly, but some of 'em are doing great. It just depends on the story, right? Some of the stories are great. Like, uh, you know, there's a ton of people at the Times who have done great stuff on lots of things around kids and abuses and social media.
    At the same time, there's all these really exciting young, not necessarily young, actually, um, independent media companies, whether it's Casey Newton at Platformer, or Eric Newcomer covering VCs, or 404. There's all this really interesting new stuff that's doing really well. WIRED is another one that's really seen a lot of bounce back under its current editor, who just came on relatively recently.
    So it just depends. It depends on where it is, but the Verge does a great job. But I think it's individually the stories; there's no, like, big name in this area. There's just a lot of people, and then there's all these really interesting experts or people who work in tech who've written a lot. That is always very interesting to me too. It's interesting to hear from insiders what they think is happening.

    CINDY COHN: Well, I'm happy to hear this, this optimism. 'Cause I worry a lot about, you know, the way that the business model for media has really been hollowed out. And then seeing things like, you know, uh, some of the big broadcast news people folding,

    KARA SWISHER: Yeah, but broadcast never did journalism for tech, come on. Like, some did, I mean, one or two, but it wasn't them who was doing it. It was usually, you know, either the New York Times or these smaller institutions have been doing a great job. There's just been tons and tons of different things, completely different things.

    JASON KELLEY: What do you think about the fear, maybe I'm, I'm misplacing it, maybe it's not as real as I imagine it is. Um, that results from something like a Gawker situation, right. You know, you have wealthy people.

    KARA SWISHER: That was a long time ago.

    JASON KELLEY: It was, but it, you know, a precedent was sort of set, right? I mean, do you think people in working in tech journalism can take aim at, you know, individual people that have a lot of power and wealth in, in the same way that they could before?

    KARA SWISHER: Yeah. I think they can, if they're accurate. Yeah, absolutely.

    CINDY COHN: Yeah, I think you're a good exhibit A for that, you pull no punches and things are okay. I mean, we get asked sometimes, um, you know, are, are you ever under attack because of your, your sharp advocacy? And I kind of think your sharp advocacy protects you as long as you're right. And I think of you as somebody who's also in, in a bit of that position.

    KARA SWISHER: Mmhm.

    CINDY COHN: You may say this is inevitable, but I I wanted to ask you, you know, I feel like when I talk with young technical people, um, they've kind of been poisoned by this idea that the only way you can be successful is, is if you're an asshole.
    That there's no, there's no model, um, that just goes to the deal. So if they want to be successful, they have to be just an awful person. And so even if they might have thought differently beforehand, that's what they think they have to do. And I'm wondering if you run into this as well, and I sometimes find myself trying to think about, you know, alternate role models for technical people, and if you have any that you think of.

    KARA SWISHER: Alternate role models? It's mostly men. But there are, there's all kinds of, like, I just did an interview with Lisa Su, who's head of AMD, one of the few women CEOs. And in AI, there's a number of women, uh, you know, you don't necessarily have to have diversity to make it better, but it sure helps, right? Because people have a different, not just diversity of gender or diversity of race, but diversity of backgrounds, politics. You know, the more diverse you are, the better products you make, essentially. That's my always been my feeling.
    Look, most of these companies are the same as it ever was, and in fact, there's fewer different people running them, essentially. Um, but you know, that's always been the nature of, of tech essentially, that it was sort of a, a man's world.

    CINDY COHN: Yeah, I see that as well. I just worry that young people or junior people coming up think that the only way that you can be successful is, A, if you look like the guys who are already successful, but also, you know, if you're just kind of, you know, weird and not nice.

    KARA SWISHER: It's just depends on the person. It's just that when you get that wealthy, you have a lot of people licking you up and down all day, and so you end up in the crazy zone like Elon Musk, or the arrogant zone like Mark Zuckerberg or whatever. It's just they don't get a lot of pushback and when you don't get a lot of friction, you tend to think everything you do is correct.

    JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
    We also wanna thank EFF members and donors. You're the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF Awards, where we celebrate the people working towards the better digital future that we all care so much about.
    Those are coming up on September 10th in San Francisco. You can find more information about that at eff.org/awards.
    We also wanted to share that our friend Cory Doctorow has a new podcast. Have a listen to this: [WHO BROKE THE INTERNET TRAILER]
    And now back to our conversation with Kara Swisher.

    CINDY COHN: I mean, you watched all these tech giants kind of move over to the Trump side and then, you know, stand there on the inauguration. It sounds like you thought that might've been inevitable.

    KARA SWISHER: I said it was inevitable; they were all surprised. They're always surprised when I'm like, Elon's gonna crack up with the president. Oh look, they cracked up. It's not hard to follow these people. In his case, he's, he's personally, there's something wrong with his head, obviously. He always cracks up with people. So that's what happened here.
    In that case, they just wanted things. They want things. You think they liked Donald Trump? You're wrong there, I'll tell you. They don't like him. They need him. They wanna use him, and they were irritated by Biden 'cause he presumed to push back on them, and he didn't do a very good job of it, honestly. But they definitely want things.

    CINDY COHN: I think the tech industry came up at a time when deregulation was all the rage, right? So in some ways they were kind of born into a world where regulation was anathema, and they took full advantage of the situation.
    As did lots of other areas that got deregulated or were not regulated in the first place. But I think tech, because of timing in some ways, tech was really born into this zone. And there were some good things for it too. I mean, you know, EFF was successful in the nineties at making sure that the internet got First Amendment protection, that we didn't go to the other side with things like the Communications Decency Act and squelch any adult material from being put online and reduce everything to the side. But getting that right, and kind of walking through the middle ground where you have regulation that supports people but doesn't squelch them, is just an ongoing struggle.

    KARA SWISHER: Mm-hmm. Absolutely.

    JASON KELLEY: I have this optimistic hope that these companies and their owners sort of crumble as they continue to, as Cory Doctorow says, enshittify, right? The only reason they don't crumble is that they have this lock-in with users. They have this monopoly power, but then you see, you know, a TikTok pop up and suddenly Instagram has a real competitor, not because rules have been put in place to change Instagram, but because a different, new, maybe better platform came along.

    KARA SWISHER: There’s nothing like competition, making things better. Right? Competition always helps.

    JASON KELLEY: Yeah, when I think of competition law, I think of crushing companies, I think of breaking them up. But what do you think we can do to make this sort of world better and more fertile for new companies? You know, you talked earlier about tech workers.

    KARA SWISHER: Well, you have to pass those things where they don't get to. Antitrust is the best way to do that, right? But those things move really slowly, unfortunately. And, you know, good antitrust legislation and antitrust enforcement, that's happening right now. But it opens up, I mean, the reason Google exists is 'cause of the antitrust actions around Microsoft.
    And so we have to like continue to press on things like that and continue to have regulators that are allowed to pursue cases like that. And then at the same time have a real focus on creating wealth. We wanna create wealth, we wanna create, we wanna give people breaks.
    We wanna have the government involved in funding some of these things, making it so that small companies don't get run over by larger companies.
    Not letting power concentrate into a small group of people. When that happens, that's what happens. You end up with less companies. They kill them in the crib, these companies. And so not letting things get bought, have a scrutiny over things, stuff like that.

    CINDY COHN: Yeah, I think a lot more merger review makes a lot of sense. I think a lot of thinking about, how are companies crushing each other and what are the things that we can do to try to stop that? Obviously we care a lot about interoperability, making sure that technologies that, that have you as a customer don't get to lock you in, and make it so that you're just stuck with their broken business model and can do other things.
    There's a lot of space for that kind of thing. I mean, you know, I always tell the story, I'm sure you know this, that, you know, if it weren't for the FCC telling AT&T that they had to let people plug something other than phones into the wall, we wouldn't have had the internet, you know, the home internet revolution anyway.

    KARA SWISHER: Right. Absolutely. 100%.

    CINDY COHN: Yeah, so I think we are in agreement with you that, you know, competition is really central, but it's, you know, it's kind of an all-of-the-above. And certainly around privacy issues, we can do a lot around this business model, which I think is driving so many of the other bad things that we are seeing, um, with some comprehensive privacy law.
    But boy, it sure feels like right now, you know, we've got two branches of government that are not on board with that, and the third one kind of doing okay, but not, you know... the courts were doing okay, but slowly and inconsistently. Um, where do you see hope? Where are you, where are you looking?

    KARA SWISHER: I mean, some of this stuff around AI could be really great for humanity, or it could be great for a small amount of people. That's really, you know, which one do we want? Do we want this technology to be a tool or a weapon against us? Do we want it to be in the hands of bigger companies or in the hands of all of us and we make decisions around it?
    Will it help us be safer? Will it help us cure cancer, or is it gonna just make a rich person a billion dollars richer? I mean, it's the age-old story, isn't it? This is not a new theme in America, where the rich get richer and the poor get less. And so these, these technologies could, as you know, there was recently a book out all about abundance.
    It could create lots of abundance. It could create lots of interesting new jobs, or it could just put people outta work and let the, let the people who are richer get richer. And I don't think that's a society we wanna have. And years ago I was talking about income inequality with a really wealthy person and I said, you either have to do something about, you know, the fact that people, that we didn't have a $25 minimum wage, which I think would help a lot, lots of innovation would come from that. If people made more money, they'd have a little more choices. And it's worth the investment in people to do that.
    And I said, we have to either deal with income inequality or armor-plate your Tesla. And I think he wanted to armor-plate his Tesla. And then of course, the Cybertruck comes out. So there you have it. But, um, I think they don't care about that kind of stuff. You know, they're happy to create their little worlds, those little worlds where they're highly protected, but it's not a world I wanna live in.

    CINDY COHN: Kara, thank you so much. We really appreciate you coming in. I think you sit in such a different place in the world than where we sit, and it's always great to get your perspective.

    KARA SWISHER: Absolutely. Anytime. You guys do amazing work, and you know you're doing amazing work, and you should always keep a watch on these people. It's not that you should be against everything, 'cause some people are right. But you certainly should keep a watch on people.

    CINDY COHN: Well, great. We, we sure will.

    JASON KELLEY: Yup. Yeah, we'll keep doing it. Thank you.

    CINDY COHN: Thank you.

    KARA SWISHER: All right. Thank you so much.

    CINDY COHN: Well, I always appreciate how Kara gets right to the point about how the concentration of power among a few tech moguls has led to so many of the problems we face online, and how competition, along with some of the things we so often hear about, like real laws requiring transparency, privacy protections, and data protections, can help shift the tide.

    JASON KELLEY: Yeah, you know, some of these fixes are things that people have been talking about for a long time, and I think we're at a point where everyone agrees on a big chunk of them. You know, especially the ones that we promote, like competition, transparency, and privacy. So it's great to hear that Kara, who's someone that, you know, has worked on this issue and in tech for a long time and thought about it and loves it, as she said, you know, agrees with us on some of the most important solutions.

    CINDY COHN: Sometimes these criticisms of the tech moguls can feel like something everybody does, but I think it's important to remember that Kara was really one of the first ones to start pointing this out. And I also agree with you, you know, she's a person who comes from the position of really loving tech. And Kara's even a very strong capitalist. She really loves making money as well. You know, her criticism comes from a place of betrayal, that, again, like Molly White, earlier this season, kind of comes from a position of, you know, seeing the possibilities and loving the possibilities, and then seeing how horribly things are really going in the wrong direction.

    JASON KELLEY: Yeah, she has this framing of, is it a tool or a weapon? And it feels like a lot of the tools that she loved became weapons, which I think is how a lot of us feel. You know, it's not always clear how to draw that line. But it's obviously a good question that people, you know, working in the tech field, and I think people even using technology should ask themselves, when you're really enmeshed with it, is the thing you're using or building or promoting, is it working for everyone?
    You know, what are the chances, how could it become a weapon? You know, this beautiful tool that you're loving and you have all these good ideas and, you know, ideas that, that it'll change the world and improve it. There's always a way that it can become a weapon. So I think it's an important question to ask and, and an important question that people, you know, working in the field need to ask.

    CINDY COHN: Yeah. And I think that, you know, that's the gem of her advice to tech workers. You know, find a way to make your voice heard if you see this happening. And there's a power in that. I do think that one thing that's still true in Silicon Valley is they compete for top talent.
    And, you know, top talent indicating that they're gonna make choices based on some values is one of the levers of power. Now I don't think anybody thinks that's the only one. This isn't an individual responsibility question. We need laws, we need structures. You know, we need some structural changes in antitrust law and elsewhere in order to make that happen. It's not all on the shoulders of the tech workers, but I appreciate that she really did say, you know, there's a role to be played here. You're not just pawns in this game.

    JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
    Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program for Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

    CINDY COHN: And I'm Cindy Cohn.

    MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

    Josh Richman

    Ryanair’s CFAA Claim Against Booking.com Has Nothing To Do with Actual Hacking

    1 week 4 days ago

    The Computer Fraud and Abuse Act (CFAA) is supposed to be about attacks on computer systems. It is not, as a federal district court suggested in Ryanair v. Booking.com, applicable when someone uses valid login credentials to access information to which those credentials provide access. Now that the case is on appeal, EFF has filed an amicus brief asking the Third Circuit to clarify that this case is about violations of policy, not hacking, and does not qualify as access “without authorization” under the CFAA.

    The case concerns transparency in airfare pricing. Ryanair complained that Booking republished Ryanair’s prices, some of which were only visible when a user logged in. Ryanair sent a cease and desist to Booking, but didn't deactivate the usernames and passwords associated with the uses they disliked. When the users allegedly connected to Booking kept using those credentials to gather pricing data, Ryanair claimed it was a CFAA violation. If this doesn’t sound like “computer hacking” to you, you’re right.

    The CFAA has proven bad for research, security, competition, and innovation. For years we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Ryanair’s claims here, which amount to a terms of use violation, because the information that was accessed is available to anyone with login credentials. This is the course charted by Van Buren v. United States, where the Supreme Court explained that “authorization” refers to technical concepts of computer authentication. As we stated in our brief:

    The CFAA does not apply to every person who merely violates terms of service by sharing account credentials with a family member or by withholding sensitive information like one’s real name and birthdate when making an account.

    Building on the good decisions in Van Buren and the Ninth Circuit’s ruling in hiQ Labs v. LinkedIn, we weighed in at the Third Circuit urging the court to hold clearly that triggering a CFAA violation requires bypassing a technology that restricts access. In this case, the login credentials that were created provided legitimate access. But the rule adopted by the lower court would criminalize many everyday behaviors, like logging into a streaming service account with a partner’s login, or logging into a spouse’s bank account to pay a bill at their behest. This is not hacking or a violation of the CFAA; it’s just violating a company’s wish list in its Terms of Service.

    This rule would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may make different accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. But according to the court’s opinion, if a company disagrees with this sort of research, it could do more than just ban the researchers from using the site: it could render that research criminal simply by sending a letter notifying the researchers that they’re not authorized to use the service in this way.

    Many other examples and common research techniques used by journalists, academic researchers, and security researchers would be at risk under this rule, but the end result would be the same no matter what: it would chill valuable research that keeps us all safer online.

    A broad reading of the CFAA in this case would also undermine competition by providing a way for companies to limit data scraping, effectively cutting off one of the ways websites offer tools to compare prices and features.

    Courts must follow Van Buren’s lead and interpret the CFAA as narrowly as it was designed. Logging into a public website with valid credentials, even if you scrape the data once you’re logged in, is not hacking. A broad reading leads to unintended consequences, and website owners do not need new shields against independent accountability.

    You can read our amicus brief here.

    Thorin Klosowski

    You Went to a Drag Show—Now the State of Florida Wants Your Name

    1 week 5 days ago

    If you thought going to a Pride event or drag show was just another night out, think again. If you were in Florida, it might land your name in a government database.

    That’s what’s happening in Vero Beach, FL, where the Florida Attorney General’s office has subpoenaed a local restaurant, The Kilted Mermaid, demanding surveillance video, guest lists, reservation logs, and contracts of performers and other staff—all because the venue hosted an LGBTQ+ Pride event.

    To be clear: no one has been charged with a crime, and the law Florida is likely leaning on here—the so-called “Protection of Children Act” (which was designed to be a drag show ban)—has already been blocked by federal courts as likely unconstitutional. But that didn’t stop Attorney General James Uthmeier from pushing forward anyway. Without naming a specific law that was violated, the AG’s press release used pointed and accusatory language, stating that "In Florida, we don't sacrifice the innocence of children for the perversions of some demented adults.” His office is now fishing for personal data about everyone who attended or performed at the event. This should set off every civil liberties alarm bell we have.

    Just like the Kids Online Safety Act (KOSA) and other bills with misleading names, this isn’t about protecting children. It’s about using the power of the state to intimidate people government officials disagree with, and to censor speech that is both lawful and fundamental to American democracy.

    Drag shows—many of which are family-friendly and feature no sexual content—have become a political scapegoat. And while that rhetoric might resonate in some media environments, the real-world consequences are much darker: state surveillance of private citizens doing nothing but attending a fun community celebration. By demanding video surveillance, guest lists, and reservation logs, the state isn’t investigating a crime, it is trying to scare individuals from attending a legal gathering. These are people who showed up at a public venue for a legal event, while a law restricting it was not even in effect. 

    The Supreme Court has ruled multiple times that subpoenas forcing disclosure of members of peaceful organizations have a chilling effect on free expression. Whether it’s a civil rights protest, a church service, or, yes, a drag show: the First Amendment protects the confidentiality of lists of attendees.

    Even if the courts strike down this subpoena—and they should—the damage will already be done. A restaurant owner (who also happens to be the town’s vice mayor) is being dragged into a state investigation. Performers’ identities are potentially being exposed—whether to state surveillance, inclusion in law enforcement databases, or future targeting by anti-LGBTQ+ groups. Guests who thought they were attending a fun community event are now caught up in a legal probe. These are the kinds of chilling, damaging consequences that will discourage Floridians from hosting or attending drag shows, and could stamp out the art form entirely. 

    EFF has long warned about this kind of mission creep: where a law or policy supposedly aimed at public safety is turned into a tool for political retaliation or mass surveillance. Going to a drag show should not mean you forfeit your anonymity. It should not open you up to surveillance. And it absolutely should not land your name in a government database.

    Rindala Alajaji

    Just Banning Minors From Social Media Is Not Protecting Them

    1 week 6 days ago

    By publishing its guidelines under Article 28 of the Digital Services Act, the European Commission has taken a major step towards social media bans that will undermine privacy, expression, and participation rights for young people that are already enshrined in international human rights law. 

    EFF recently submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: Online safety for young people must include privacy and security for them and must not come at the expense of freedom of expression and equitable access to digital spaces.

    Article 28 requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. But the article also prohibits targeting minors with personalized ads, a measure that would seem to require that platforms know that a user is a minor. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy and requiring platforms to know the age of every user. The DSA does not resolve this tension. Rather, it states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. 

    Thus, the question of age checks is a key to understanding the obligations of online platforms to safeguard minors online. Our submission explained the serious concerns that age checks pose to the rights and security of minors. All methods for conducting age checks come with serious drawbacks. Approaches to verify a user’s age generally involve some form of government-issued ID document, which millions of people in Europe—including migrants, members of marginalized groups and unhoused people, exchange students, refugees and tourists—may not have access to.

    Other age assurance methods, like biometric age estimation or age estimation based on email addresses or user activity, involve the processing of vast amounts of personal, sensitive data, usually in the hands of third parties. Beyond being potentially exposed to discrimination and erroneous estimations, users are asked to trust platforms’ opaque supply chains and hope for the best. Age assurance methods always impact the rights of children and teenagers: their rights to privacy and data protection, free expression, information, and participation.

    The Commission's guidelines contain a wealth of measures elucidating the Commission's understanding of "age appropriate design" of online services. We have argued that some of them, including default settings to protect users’ privacy, effective content moderation, and ensuring that recommender systems don’t rely on the collection of behavioral data, are practices that would benefit all users.

    But while the initial Commission draft document considered age checks as only a tool to determine users’ ages to be able to tailor their online experiences according to their age, the final guidelines go far beyond that. Crucially, the European Commission now seems to consider “measures restricting access based on age to be an effective means to ensure a high level of privacy, safety and security for minors on online platforms” (page 14). 

    This is a surprising turn, as many in Brussels have considered social media bans like the one Australia passed (and still doesn’t know how to implement) disproportionate. Responding to mounting pressure from Member States like France, Denmark, and Greece to ban young people under a certain age from social media platforms, the guidelines contain an opening clause for national rules on age limits for certain services. According to the guidelines, the Commission considers such access restrictions  appropriate and proportionate where “union or national law, (...) prescribes a minimum age to access certain products or services (...), including specifically defined categories of online social media services”. This opens the door for different national laws introducing different age limits for services like social media platforms. 

    It’s concerning that the Commission generally considers the use of age verification proportionate in any situation where a provider of an online platform identifies risks to minors’ privacy, safety, or security and those risks “cannot be mitigated by other less intrusive measures as effectively as by access restrictions supported by age verification” (page 17). This view risks establishing a broad legal mandate for age verification measures.

    It is clear that such bans will do little in the way of making the internet a safer space for young people. By banning a particularly vulnerable group of users from accessing platforms, the providers themselves are let off the hook: If it is enough for platforms like Instagram and TikTok to implement (comparatively cheap) age restriction tools, there are no incentives anymore to actually make their products and features safer for young people. Banning a certain user group changes nothing about problematic privacy practices, insufficient content moderation or business models based on the exploitation of people’s attention and data. And assuming that teenagers will always find ways to circumvent age restrictions, the ones that do will be left without any protections or age-appropriate experiences.

    Svea Windwehr

    Zero Knowledge Proofs Alone Are Not a Digital ID Solution to Protecting User Privacy

    2 weeks 1 day ago

    In the past few years, governments across the world have rolled out digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the first in a short series that will explain digital ID and the pending use case of age verification. The following posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.

    Age verification measures are having a moment, with policymakers in the U.S. and around the world passing legislation mandating online services and companies to introduce technologies that require people to verify their identities to access content deemed appropriate for their age. But for most people, having physical government documentation like a driver's license, passport, or other ID is not a simple binary of having it or not. Physical ID systems involve hundreds of factors that impact their accuracy and validity, and everyday situations occur where identification attributes can change, or an ID becomes invalid or inaccurate or needs to be reissued: addresses change, driver’s licenses expire or have suspensions lifted, or temporary IDs are issued in lieu of obtaining permanent identification.  

    The digital ID systems currently being introduced potentially solve some problems like identity fraud for business and government services, but leave the holder of the digital ID vulnerable to the needs of the companies collecting such information. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims of public assistance and other services as facilitate them. That’s why legal protections are as important as the digital IDs themselves. To add to this, in places that lack comprehensive data privacy legislation, verifiers are not heavily restricted in what they can and can’t ask the holder. In response, some privacy mechanisms have been suggested, though few have been made mandatory; chief among them is the promise that a feature called Zero Knowledge Proofs (ZKPs) will easily solve the privacy aspects of sharing ID attributes.

    Zero Knowledge Proofs: The Good News

    The biggest selling point of modern digital ID offerings, especially to those seeking to solve mass age verification, is being able to incorporate and share something called a Zero Knowledge Proof (ZKP) for a website or mobile application to verify ID information, without having to share the ID itself or the information explicitly on it. ZKPs provide a cryptographic way to not give something away, like your exact date of birth and age from your ID, instead offering a “yes-or-no” claim (like above or below 18) to a verifier requiring a legal age threshold. More specifically, two properties of ZKPs are “soundness” and “zero knowledge.” Soundness is appealing to verifiers and governments because it makes it hard for an ID holder to present forged information (the holder won’t know the “secret”). Zero knowledge can be beneficial to the holder, because they don’t have to share explicit information like a birth date, just cryptographic proof that said information exists and is valid. There have been recent announcements from major tech companies like Google, which plans to integrate ZKPs for age verification and “where appropriate in other Google products”.
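
    To make the “prove it without revealing it” idea a bit more concrete, here is a minimal, illustration-only sketch of a Schnorr-style zero-knowledge proof of knowledge in Python. This is not how a production digital ID age check works; real systems prove statements like “the birthdate in this signed credential is more than 18 years ago” using range proofs over government-signed credentials and audited cryptographic libraries. But the basic commitment, challenge, and response structure is the same: the verifier becomes convinced the prover knows a secret without ever learning it. The parameters and names below are our own toy assumptions, not taken from any real digital ID scheme.

    import hashlib
    import secrets

    # Toy parameters only: p = 2q + 1 with q prime, and g generates the order-q subgroup.
    # These numbers are trivially breakable and exist purely to show the shape of the protocol.
    p, q, g = 23, 11, 2

    def prove(secret_x):
        """Prover: publishes y = g^x mod p and proves knowledge of x without revealing it."""
        y = pow(g, secret_x, p)
        r = secrets.randbelow(q)            # fresh randomness for each proof
        t = pow(g, r, p)                    # commitment
        # Fiat-Shamir heuristic: derive the challenge from a hash instead of a live verifier
        c = int.from_bytes(hashlib.sha256(f"{g},{y},{t}".encode()).digest(), "big") % q
        s = (r + c * secret_x) % q          # response; on its own it reveals nothing about x
        return y, t, s

    def verify(y, t, s):
        """Verifier: checks g^s == t * y^c mod p, and never learns the secret itself."""
        c = int.from_bytes(hashlib.sha256(f"{g},{y},{t}".encode()).digest(), "big") % q
        return pow(g, s, p) == (t * pow(y, c, p)) % p

    holder_secret = secrets.randbelow(q)    # stands in for a credential attribute
    print(verify(*prove(holder_secret)))    # True: the verifier is convinced, the secret stays private

    In a real age check, the secret would be bound to a government-signed credential and the statement proven would be an age threshold rather than bare knowledge of a key; the only point of the sketch is that the verifier ends up with a yes-or-no answer instead of the underlying data.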

    Zero Knowledge Proofs: The Bad News

    What ZKPs don’t do is mitigate verifier abuse or limit verifiers’ requests, such as over-asking for information they don’t need or repeatedly requesting your age over time. They also don’t prevent websites or applications from collecting other kinds of observable personally identifiable information, like your IP address or other device information, while you interact with them.

    ZKPs are a great tool for sharing less data about ourselves over time or in a one-time transaction. But this doesn’t do a lot about the data broker industry that already has massive, existing profiles of data on people. We understand that this was not what ZKPs for age verification were presented to solve. But it is still imperative to point out that utilizing this technology to share even more about ourselves online through mandatory age verification establishes a wider scope for sharing in an already saturated ecosystem of easily linked, existing personal information online. Going from presenting your physical ID maybe two or three times a week to potentially proving your age to multiple websites and apps every day is going to make going online itself a burden at minimum, and, for those who can’t obtain an ID, a barrier entirely.

    Protecting The Way Forward

    Mandatory age verification takes the potential privacy benefits of mobile ID and proposed ZKP solutions, then warps them into speech-chilling mechanisms.

    Until the hard questions of power imbalances with potentially abusive verifiers and prevention of phoning home to ID issuers are addressed, these systems should not be pushed forward without proper protections in place. A more private, holder-centric ID requires more than just ZKPs as a catch-all for privacy concerns. The case of safety online is not solved through technology alone, and involves multiple, ongoing conversations. Yes, that sounds harder to do than age checks online for everyone. Maybe that’s why this is so tempting to implement. However, we encourage policymakers and lawmakers to look into what is best, not what is easy.

    Alexis Hancock

    Canada’s Bill C-2 Opens the Floodgates to U.S. Surveillance

    2 weeks 1 day ago

    The Canadian government is preparing to give away Canadians’ digital lives—to U.S. police, to the Donald Trump administration, and possibly to foreign spy agencies.

    Bill C-2, the so-called Strong Borders Act, is a sprawling surveillance bill with multiple privacy-invasive provisions. But the thrust is clear: it’s a roadmap to aligning Canadian surveillance with U.S. demands. 

    It’s also a giveaway of Canadian constitutional rights in the name of “border security.” If passed, it will shatter privacy protections that Canadians have spent decades building. This will affect anyone using Canadian internet services, including email, cloud storage, VPNs, and messaging apps. 

    A joint letter, signed by dozens of Canadian civil liberties groups and more than a hundred Canadian legal experts and academics, puts it clearly: Bill C-2 is “a multi-pronged assault on the basic human rights and freedoms Canada holds dear,” and “an enormous and unjustified expansion of power for police and CSIS to access the data, mail, and communication patterns of people across Canada.”

    Setting The Stage For Cross-Border Surveillance 

    Bill C-2 isn’t just a domestic surveillance bill. It’s a Trojan horse for U.S. law enforcement—quietly building the pipes to ship Canadians’ private data straight to Washington.

    If Bill C-2 passes, Canadian police and spy agencies will be able to demand information about peoples’ online activities based on the low threshold of “reasonable suspicion.” Companies holding such information would have only five days to challenge an order, and blanket immunity from lawsuits if they hand over data. 

    Police and CSIS, the Canadian intelligence service, will be able to find out whether you have an online account with any organization or service in Canada. They can demand to know how long you’ve had it, where you’ve logged in from, and which other services you’ve interacted with, with no warrant required.

    The bill will also allow for the introduction of encryption backdoors. Forcing companies to surveil their customers is allowed under the law (see part 15), as long as these mandates don’t introduce a “systemic vulnerability”—a term the bill doesn’t even bother to define. 

    The information gathered under these new powers is likely to be shared with the United States. Canada and the U.S. are currently negotiating a misguided agreement to share law enforcement information under the US CLOUD Act. 

    The U.S. and U.K. put a CLOUD Act deal in place in 2020, and it hasn’t been good for users. Earlier this year, the U.K. home office ordered Apple to let it spy on users’ encrypted accounts. That security risk caused Apple to stop offering U.K. users certain advanced encryption features, and lawmakers and officials in the United States have raised concerns that the U.K.’s demands might have been designed to leverage its expanded CLOUD Act powers.

    If Canada moves forward with Bill C-2 and a CLOUD Act deal, American law enforcement could demand data from Canadian tech companies in secrecy—no notice to users would be required. Companies could also expect gag orders preventing them from even mentioning they have been forced to share information with US agencies.

    This isn’t speculation. Earlier this month, a Canadian government official told Politico that this surveillance regime would give Canadian police “the same kind of toolkit” that their U.S. counterparts have under the PATRIOT Act and FISA. The bill allows for “technical capability orders.” Those orders mean the government can force Canadian tech companies, VPNs, cloud providers, and app developers—regardless of where in the world they are based—to build surveillance tools into their products.

    Under U.S. law, non-U.S. persons have little protection from foreign surveillance. If U.S. cops want information on abortion access, gender-affirming care, or political protests happening in Canada—they’re going to get it. The data-sharing won’t necessarily be limited to the U.S., either. There’s nothing to stop authoritarian states from demanding this new trove of Canadians’ private data that will be secretly doled out by its law enforcement agencies. 

    EFF joins the Canadian Civil Liberties Association, OpenMedia, researchers at Citizen Lab, and dozens of other Canadian organizations and experts in asking the Canadian federal government to withdraw Bill C-2. 

    Further reading:

    • Joint letter opposing Bill C-2, signed by the Canadian Civil Liberties Association, OpenMedia, and dozens of other Canadian groups 
    • CCLA blog calling for withdrawal of Bill C-2
    • The Citizen Lab (University of Toronto) report on Canadian CLOUD Act deal
    • The Citizen Lab report on Bill C-2
    • EFF one-pager and blog on problems with the CLOUD Act, published before the bill was made law in 2018
    Joe Mullin

    You Shouldn’t Have to Make Your Social Media Public to Get a Visa

    2 weeks 3 days ago

    The Trump administration is continuing its dangerous push to surveil and suppress foreign students’ social media activity. The State Department recently announced an unprecedented new requirement that applicants for student and exchange visas must set all social media accounts to “public” for government review. The State Department also indicated that if applicants refuse to unlock their accounts or otherwise don’t maintain a social media presence, the government may interpret it as an attempt to evade the requirement or deliberately hide online activity.

    The administration is penalizing prospective students and visitors for shielding their social media accounts from the general public or for choosing to not be active on social media. This is an outrageous violation of privacy, one that completely disregards the legitimate and often critical reasons why millions of people choose to lock down their social media profiles, share only limited information about themselves online, or not engage in social media at all. By making students abandon basic privacy hygiene as the price of admission to American universities, the administration is forcing applicants to expose a wealth of personal information to not only the U.S. government, but to anyone with an internet connection.

    Why Social Media Privacy Matters

    The administration’s new policy is a dangerous expansion of existing social media collection efforts. While the State Department has required since 2019 that visa applicants disclose their social media handles—a policy EFF has consistently opposed—forcing applicants to make their accounts public crosses a new line.

    Individuals have significant privacy interests in their social media accounts. Social media profiles contain some of the most intimate details of our lives, such as our political views, religious beliefs, health information, likes and dislikes, and the people with whom we associate. Such personal details can be gleaned from vast volumes of data given the unlimited storage capacity of cloud-based social media platforms. As the Supreme Court has recognized, “[t]he sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions”—all of which and more are available on social media platforms.

    By requiring visa applicants to share these details, the government can obtain information that would otherwise be inaccessible or difficult to piece together across disparate locations. For example, while visa applicants are not required to disclose their political views in their applications, applicants might choose to post their beliefs on their social media profiles.

    This information, once disclosed, doesn’t just disappear. Existing policy allows the government to continue surveilling applicants’ social media profiles even once the application process is over. And personal information obtained from applicants’ profiles can be collected and stored in government databases for decades.

    What’s more, by requiring visa applicants to make their private social media accounts public, the administration is forcing them to expose troves of personal, sensitive information to the entire internet, not just the U.S. government. This could include various bad actors like identity thieves and fraudsters, foreign governments, current and prospective employers, and other third parties.

    Those in applicants’ social media networks—including U.S. citizen family or friends—can also become surveillance targets by association. Visa applicants’ online activity is likely to reveal information about the users with whom they’re connected. For example, a visa applicant could tag another user in a political rant or post photos of themselves and the other user at a political rally. Anyone who sees those posts might reasonably infer that the other user shares the applicant’s political beliefs. The administration’s new requirement will therefore publicly expose the personal information of millions of additional people, beyond just visa applicants.

    There Are Very Good Reasons to Keep Social Media Accounts Private

    An overwhelming number of social media users maintain private accounts for the same reason we put curtains on our windows: a desire for basic privacy. There are numerous legitimate reasons people choose to share their social media only with trusted family and friends, whether that’s ensuring personal safety, maintaining professional boundaries, or simply not wanting to share personal profiles with the entire world.

    Safety from Online Harassment and Physical Violence

    Many people keep their accounts private to protect themselves from stalkers, harassers, and those who wish them harm. Domestic violence survivors, for example, use privacy settings to hide from their abusers, and organizations supporting survivors often encourage them to maintain a limited online presence.

    Women also face a variety of gender-based online harms made worse by public profiles, including stalking, sexual harassment, and violent threats. A 2021 study reported that at least 38% of women globally had personally experienced online abuse, and at least 85% of women had witnessed it. Women are, in turn, more likely to activate privacy settings than men.

    LGBTQ+ individuals similarly have good reasons to lock down their accounts. Individuals from countries where their identity puts them in danger rely on privacy protections to stay safe from state action. People may also reasonably choose to lock their accounts to avoid the barrage of anti-LGBTQ+ hate and harassment that is common on social media platforms, which can lead to real-world violence. Others, including LGBTQ+ youth, may simply not be ready to share their identity outside of their chosen personal network.

    Political Dissidents, Activists, and Journalists

    Activists working on sensitive human rights issues, political dissidents, and journalists use privacy settings to protect themselves from doxxing, harassment, and potential political persecution by their governments.

    Rather than protecting these vulnerable groups, the administration’s policy instead explicitly targets political speech. The State Department has given embassies and consulates a vague directive to vet applicants’ social media for “hostile attitudes towards our citizens, culture, government, institutions, or founding principles,” according to an internal State Department cable obtained by multiple news outlets. This includes looking for “applicants who demonstrate a history of political activism.” The cable did not specify what, exactly, constitutes “hostile attitudes.”

    Professional and Personal Boundaries

    People use privacy settings to maintain boundaries between their personal and professional lives. They share family photos, sensitive updates, and personal moments with close friends—not with their employers, teachers, professional connections, or the general public.

    The Growing Menace of Social Media Surveillance

    This new policy is an escalation of the Trump administration’s ongoing immigration-related social media surveillance. EFF has written about the administration’s new “Catch and Revoke” effort, which deploys artificial intelligence and other data analytic tools to review the public social media accounts of student visa holders in an effort to revoke their visas. And EFF recently submitted comments opposing a USCIS proposal to collect social media identifiers from visa and green card holders already living in the U.S., including when they submit applications for permanent residency and naturalization.

    The administration has also started screening many non-citizens' social media accounts for ambiguously defined “antisemitic activity,” and previously announced expanded social media vetting for any visa applicant seeking to travel specifically to Harvard University for any purpose.

    The administration claims this mass surveillance will make America safer, but there’s little evidence to support this. By the government’s own previous assessments, social media surveillance has not proven effective at identifying security threats.

    At the same time, these policies gravely undermine freedom of speech, as we recently argued in our USCIS comments. The government is using social media monitoring to directly target and punish foreign students and others for their digital speech through visa denials or revocations. And the social media surveillance itself broadly chills free expression online—for citizens and non-citizens alike.

    In defending the new requirement, the State Department argued that a U.S. visa is a “privilege, not a right.” But privacy and free expression should not be privileges. These are fundamental human rights, and they are rights we abandon at our peril.

    Lisa Femia

    We're Envisioning A Better Future

    2 weeks 5 days ago

    Whether you've been following EFF for years or just discovered us (hello!), you've probably noticed that our team is kind of obsessed with the ✨future✨.

    From people soaring through the sky, to space cats, geometric unicorns, and (so many) mechas—we're always imagining what the future could look like when we get things right.

    That same spirit inspired EFF's 35th anniversary celebration. And this year, members can get our new EFF 35 Cityscape t-shirt plus a limited-edition challenge coin with a monthly or annual Sustaining Donation!

    Join EFF!

    Start a convenient recurring donation today!

    The EFF 35 Cityscape proposes a future where users are empowered to:

    • Repair and tinker with their devices
    • Move freely without being tracked
    • Innovate with bold new ideas

    And this future isn't far off—we're building it now.

    EFF is pushing for right to repair laws across the country, exposing shady data brokers, and ensuring new technologies—like AI—have your rights in mind. EFF is determined, and with your help, we're not backing down.

    We're making real progress—but we need your help. EFF is a member-supported nonprofit, and you are what powers this work.

    Start a Sustaining Donation of $5/month or $65/year by August 11, and we'll thank you with a limited-edition EFF35 Challenge Coin as well as this year's Cityscape t-shirt!

    Christian Romero

    EFF to Court: Protect Our Health Data from DHS

    2 weeks 5 days ago

    The federal government is trying to use Medicaid data to identify and deport immigrants. So EFF and our friends at EPIC and the Protect Democracy Project have filed an amicus brief asking a judge to block this dangerous violation of federal data privacy laws.

    Last month, the AP reported that the U.S. Department of Health and Human Services (HHS) had disclosed to the U.S. Department of Homeland Security (DHS) a vast trove of sensitive data obtained from states about people who obtain government-assisted health care. Medicaid is a federal program that funds health insurance for low-income people; it is partially funded and primarily managed by states. Some states, using their own funds, allow enrollment by non-citizens. HHS reportedly disclosed to DHS the Medicaid enrollee data from several of these states, including enrollee names, addresses, immigration status, and claims for health coverage.

    In response, California and 19 other states sued HHS and DHS. The states allege, among other things, that these federal agencies violated (1) the data disclosure limits in the Social Security Act, the Privacy Act, and HIPAA, and (2) the notice-and-comment requirements for rulemaking under the Administrative Procedure Act (APA).

    Our amicus brief argues that (1) disclosure of sensitive Medicaid data causes a severe privacy harm to the enrolled individuals, (2) the APA empowers federal courts to block unlawful disclosure of personal data between federal agencies, and (3) the broader public is harmed by these agencies’ lack of transparency about these radical changes in data governance.

    A new agency agreement, recently reported by the AP, allows Immigration and Customs Enforcement (ICE) to access the personal data of Medicaid enrollees held by HHS’ Centers for Medicare and Medicaid Services (CMS). The agreement states: “ICE will use the CMS data to allow ICE to receive identity and location information on aliens identified by ICE.”

    In the 1970s, in the wake of the Watergate and COINTELPRO scandals, Congress wisely enacted numerous laws to protect our data privacy from government misuse. This includes strict legal limits on disclosure of personal data within an agency, or from one agency to another. EFF sued over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, and filed an amicus brief in a suit challenging ICE grabbing taxpayer data. We’ve also reported on the U.S. Department of Agriculture’s grab of food stamp data and DHS’s potential grab of postal data. And we’ve written about the dangers of consolidating all government information.

    We have data protection rules for good reason, and these latest data grabs are exactly why.

    You can read our new amicus brief here.

    Adam Schwartz

    Dating Apps Need to Learn How Consent Works

    2 weeks 6 days ago

    Staying safe whilst dating online should not be the responsibility of users—dating apps should be prioritizing our privacy by default, and laws should require companies to prioritize user privacy over their profit. But dating apps are taking shortcuts in safeguarding the privacy and security of users in favour of developing and deploying AI tools on their platforms, sometimes by using your most personal information to train their AI tools. 

    Grindr has big plans for its gay wingman bot, Bumble launched AI Icebreakers, Tinder introduced AI tools to choose profile pictures for users, OKCupid teamed up with AI photo editing platform Photoroom to erase your ex from profile photos, and Hinge recently launched an AI tool to help users write prompts.

    The list goes on, and the privacy harms are significant. Dating apps have built platforms that encourage people to be exceptionally open with sensitive and potentially dangerous personal information. But at the same time, the companies behind these platforms collect vast amounts of intimate details—everything from sexual preferences to precise location—about customers who are often just searching for compatibility and connection. This data falling into the wrong hands can—and has—come with unacceptable consequences, especially for members of the LGBTQ+ community.

    This is why corporations should provide opt-in consent for AI training data obtained through channels like private messages, and employ minimization practices for all other data. Dating app users deserve the right to privacy, and should have a reasonable expectation that the contents of conversations—from text messages to private pictures—are not going to be shared or used for any purpose that opt-in consent has not been provided for. This includes the use of personal data for building AI tools, such as chatbots and picture selection tools. 
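
    To make the opt-in principle concrete, here is a minimal, hypothetical sketch (in Python, and not any real dating app's code) of what consent-gated collection of AI training data could look like: private messages are eligible for training only when a user has affirmatively opted in, and everything else is minimized by default. The UserRecord and build_training_example names below are illustrative assumptions, not an actual platform API.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class UserRecord:
        # Illustrative fields only; real platforms hold far more data than this.
        user_id: str
        private_messages: List[str]
        profile_bio: str
        precise_location: Optional[Tuple[float, float]] = None
        # Consent must be an explicit, affirmative choice; the safe default is False.
        ai_training_opt_in: bool = False

    def build_training_example(user: UserRecord):
        """Return data eligible for AI training, or None if the user never opted in."""
        if not user.ai_training_opt_in:
            # No opt-in means no training data: silence is not consent.
            return None
        # Data minimization: include only what the stated purpose requires,
        # and drop sensitive fields like precise location entirely.
        return {"user_id": user.user_id, "messages": list(user.private_messages)}

    The key design choice is the default: absent an explicit ‘yes,’ nothing leaves the user’s account for AI training.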

    AI Icebreakers

    Back in December 2023, Bumble introduced AI Icebreakers to the ‘Bumble for Friends’ section of the app to help users start conversations by providing them with AI-generated messages. Powered by OpenAI’s ChatGPT, the feature was deployed in the app without ever asking users for their consent. Instead, the company presented users with a pop-up upon entering the app that nudged them to click ‘Okay’ or face the same pop-up every time they reopened the app, until they finally relented and tapped ‘Okay.’

    Obtaining user data without explicit opt-in consent is bad enough. But Bumble has taken this even further by sharing personal user data from its platform with OpenAI to feed into the company’s AI systems. By doing this, Bumble has forced its AI feature on millions of users in Europe—without their consent but with their personal data.

    In response, the European nonprofit noyb recently filed a complaint with the Austrian data protection authority over Bumble’s violation of its transparency obligations under Article 5(1)(a) GDPR. In it, noyb flagged concerns around Bumble’s data sharing with OpenAI, which allowed the company to generate an opening message based on information users shared on the app.

    In its complaint, noyb specifically alleges that Bumble: 

    • Failed to provide information about the processing of personal data for its AI Icebreaker feature 
    • Confused users with a “fake” consent banner
    • Lacks a legal basis under Article 6(1) GDPR as it never sought user consent and cannot legally claim to base its processing on legitimate interest 
    • Can only process sensitive data—such as data involving sexual orientation—with explicit consent per Article 9 GDPR
    • Failed to adequately respond to the complainant’s access request under Article 15 GDPR.

    AI Chatbots for Dating

    Grindr recently launched its AI wingman. The feature operates like a chatbot and currently keeps track of favorite matches and suggests date locations. In the coming years, Grindr plans for the chatbot to send messages to other AI agents on behalf of users, and make restaurant reservations—all without human intervention. This might sound great: online dating without the time investment? A win for some! But privacy concerns remain. 

    The chatbot is being built in collaboration with a third-party company called Ex-Human, which raises concerns about data sharing. Grindr has communicated that its users’ personal data will remain on its own infrastructure, which Ex-Human does not have access to, and that users will be “notified” when AI tools are available on the app. The company also said that it will ask users for permission to use their chat history for AI training. But AI data poses privacy risks that do not seem fully accounted for, particularly in places where it’s not safe to be outwardly gay.

    In building this ‘gay chatbot,’ Grindr’s CEO said one of its biggest limitations was preserving user privacy. It’s good that the company is cognizant of these harms, particularly because it has a terrible track record of protecting user privacy and was recently sued for allegedly revealing the HIV status of users. Further, direct messages on Grindr are stored on the company’s servers, where you have to trust they will be secured, respected, and not used to train AI models without your consent. Given Grindr’s poor record of respecting user consent and autonomy on the platform, users need stronger protections and guardrails for their personal data and privacy than the company currently provides—especially for AI tools that are being built by third parties.

    AI Picture Selection  

    In the past year, Tinder and Bumble have both introduced AI tools to help users choose better pictures for their profiles. Tinder’s AI-powered feature, Photo Selector, requires users to upload a selfie, after which its facial recognition technology can identify the person in their camera roll images. The Photo Selector then chooses a “curated selection of photos” directly from users’ devices based on Tinder’s “learnings” about good profile images. Users are not informed about the parameters behind choosing photos, nor is there a separate privacy policy to address the potential collection of biometric data or the collection, storage, and sale of camera roll images.

    The Way Forward: Opt-In Consent for AI Tools and Consumer Privacy Legislation 

    Putting users in control of their own data is fundamental to protecting individual and collective privacy. We all deserve the right to control how our data is used and by whom. And when it comes to data like profile photos and private messages, all companies should require opt-in consent before processing that data for AI. Finding love should not involve such a privacy-invasive tradeoff.

    At EFF, we’ve also long advocated for comprehensive consumer privacy legislation to limit the collection of our personal data at its source and prevent retained data from being sold or given away, breached by hackers, disclosed to law enforcement, or used to manipulate a user’s choices through online behavioral advertising. This would help protect users on dating apps, because reducing the amount of data collected in the first place prevents it from later being used in ways like building AI tools and training AI models.

    The privacy options at our disposal may seem inadequate to meet the difficult moments ahead of us, especially for vulnerable communities, but these steps are essential to protecting users on dating apps. We urge companies to put people over profit and protect privacy on their platforms.

    Paige Collings