Sunsetting Section 230 Will Hurt Internet Users, Not Big Tech 

2 months ago

As Congress appears ready to gut one of the internet’s most important laws for protecting free speech, lawmakers are ignoring how that law protects and benefits millions of Americans’ ability to speak online every day.

The House Energy and Commerce Committee is holding a hearing on Wednesday on a bill that would end Section 230 (47 U.S.C. § 230) in 18 months. The authors of the bill argue that setting a deadline to either change or eliminate Section 230 will force the Big Tech online platforms to the bargaining table to create a new regime of intermediary liability. 

Ending Section 230 Will Make Big Tech Monopolies Worse

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online. 

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.  

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others. 

The law is not a shield for Big Tech. Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.  

If Big Tech is at the table in any future discussion of what rules should govern internet speech, EFF has no confidence that the result will protect and benefit internet users, as Section 230 does currently. If Congress is serious about rewriting the internet’s speech rules, it needs to abandon this bill and spend time listening to the small services and everyday users who would be harmed should Section 230 be repealed.

Section 230 Protects Everyday Internet Users 

The bill introduced by House Energy & Commerce Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ) is based on a series of mistaken assumptions and fundamental misunderstandings about Section 230. Mike Masnick at TechDirt has already explained many of the flawed premises and factual errors that the co-sponsors have made.

We won’t repeat the many errors that Masnick identifies. Instead, we want to focus on what we see as a glaring omission in the co-sponsors’ argument: how central Section 230 is to ensuring that every person can speak online.

Let’s start with the text of Section 230. Importantly, the law protects both online services and users. It says that no provider or user “shall be treated as the publisher or speaker” of content created by another. That’s in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of other people.

Section 230 protects individual bloggers, anyone who forwards an email, and social media users who have ever reshared or retweeted another person’s content online. Section 230 also protects individual moderators who might delete or otherwise curate others’ online content, along with anyone who provides web hosting services.

As EFF has explained, online speech is frequently targeted with meritless lawsuits. Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot. Engine has estimated that without Section 230, many startups and small services would be inundated with costly litigation that could drive them offline. 

Deleting Section 230 Will Create A Field Day For The Internet’s Worst Users  

The co-sponsors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,” and imagine that removing Section 230 will somehow force these services to better moderate user-generated content on their sites.  

Nothing could be further from the truth. If lawmakers are legitimately motivated to help online services root out unlawful activity and terrible content appearing online, the last thing they should do is eliminate Section 230. The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content, and in cases of illegal behavior, work with law enforcement to hold those users responsible. 

If Congress deletes Section 230, the pre-digital legal rules around distributing content would kick in. Those rules strongly discourage services from moderating or even knowing about user-generated content, because the more a service moderates user content, the more likely it is to be held liable for that content. Under that legal regime, online services would have a huge incentive not to moderate and not to look for bad behavior. Taking the bill’s sponsors at their word, this would result in the exact opposite of their goal of protecting children and adults from harmful content online.

Aaron Mackey

EFF to Court: Electronic Ankle Monitoring Is Bad. Sharing That Data Is Even Worse.

2 months ago

The government violates the privacy rights of individuals on pretrial release when it continuously tracks, retains, and shares their location, EFF explained in a friend-of-the-court brief filed in the Ninth Circuit Court of Appeals.

In the case, Simon v. San Francisco, individuals on pretrial release are challenging the City and County of San Francisco’s electronic ankle monitoring program. The lower court ruled the program likely violates the California and federal constitutions. We—along with Professor Kate Weisburd and the Cato Institute—urge the Ninth Circuit to do the same.

Under the program, the San Francisco County Sheriff collects and indefinitely retains geolocation data from people on pretrial release and turns it over to other law enforcement entities without suspicion or a warrant. The Sheriff shares both comprehensive geolocation data collected from individuals and the results of invasive reverse location searches of all program participants’ location data to determine whether an individual on pretrial release was near a specified location at a specified time.

Electronic monitoring transforms individuals’ homes, workplaces, and neighborhoods into digital prisons, in which devices physically attached to people follow their every movement. All location data can reveal sensitive, private information about individuals, such as whether they were at an office, union hall, or house of worship. This is especially true for the GPS data at issue in Simon, given its high degree of accuracy and precision. Both federal and state courts recognize that location data is sensitive, revealing information in which one has a reasonable expectation of privacy. And, as EFF’s brief explains, the Simon plaintiffs do not relinquish this reasonable expectation of privacy in their location information merely because they are on pretrial release—to the contrary, their privacy interests remain substantial.

Moreover, as EFF explains in its brief, this electronic monitoring is not only invasive, but ineffective and (contrary to its portrayal as a detention alternative) an expansion of government surveillance. Studies have not found significant relationships between electronic monitoring of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do studies show that law enforcement is employing electronic monitoring with individuals they would otherwise put in jail. To the contrary, studies indicate that law enforcement is using electronic monitoring to surveil and constrain the liberty of those who wouldn’t otherwise be detained.

We hope the Ninth Circuit affirms the trial court and recognizes the rights of individuals on pretrial release against invasive electronic monitoring.

Brendan Gilligan

EFF Urges Ninth Circuit to Hold Montana’s TikTok Ban Unconstitutional

2 months ago

Montana’s TikTok ban violates the First Amendment, EFF and others told the Ninth Circuit Court of Appeals in a friend-of-the-court brief and urged the court to affirm a trial court’s holding from December 2023 to that effect.

Montana’s ban (which EFF and others opposed) prohibits TikTok from operating anywhere within the state and imposes financial penalties on TikTok or any mobile application store that allows users to access TikTok. The district court recognized that Montana’s law “bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” and blocked Montana’s ban from going into effect. Last year, EFF—along with the ACLU, Freedom of the Press Foundation, Reason Foundation, and the Center for Democracy and Technology—filed a friend-of-the-court brief in support of TikTok and Montana TikTok users’ challenge to this law at the trial court level.

As the brief explains, Montana’s TikTok ban is a prior restraint on speech that prohibits Montana TikTok users—and TikTok itself—from posting on the platform. The law also interferes with TikTok’s ability to make decisions about curating its platform.

Prior restraints such as Montana’s ban are presumptively unconstitutional. For a court to uphold a prior restraint, the First Amendment requires it to satisfy the most exacting scrutiny. The prior restraint must be necessary to further an urgent interest of the highest magnitude, and must be the narrowest possible way for the government to accomplish its precise interest. Montana’s TikTok ban fails to meet this demanding standard.

Even if the ban is not a prior restraint, the brief illustrates that it would still violate the First Amendment. Montana’s law is a “total ban” on speech: it completely forecloses TikTok users’ speech with respect to the entire medium of expression that is TikTok. As a result, Montana’s ban is subject to an exacting tailoring requirement: it must target and eliminate “no more than the exact source of the ‘evil’ it seeks to remedy.” Montana’s law is undeniably overbroad and fails to satisfy this scrutiny.

This appeal is happening in the immediate aftermath of President Biden signing into law federal legislation that effectively bans TikTok in its current form, by requiring TikTok to divest of any Chinese ownership within 270 days. This federal law raises many of the same First Amendment concerns as Montana’s.

It’s important that the Ninth Circuit take this opportunity to make clear that the First Amendment requires the government to satisfy a very demanding standard before it can impose these types of extreme restrictions on Americans’ speech.

Brendan Gilligan

Fair Use Still Protects Histories and Documentaries—Even Tiger King

2 months 1 week ago

Copyright’s fair use doctrine protects lots of important free expression against the threat of ruinous lawsuits. Fair use isn’t limited to political commentary or erudite works – it also protects popular entertainment like Tiger King, Netflix’s hit 2020 documentary series about the bizarre and sometimes criminal exploits of a group of big cat breeders. That’s why a federal appeals court’s narrow interpretation of fair use in a recent copyright suit threatens not just the producers of Tiger King but thousands of creators who make documentaries, histories, biographies, and even computer software. EFF and other groups asked the court to revisit its decision. Thankfully, the court just agreed to do so.

The case, Whyte Monkee Productions v. Netflix, was brought by a videographer who worked at the Greater Wynnewood Exotic Animal Park, the Oklahoma attraction run by Joe Exotic that was chronicled in Tiger King. The videographer sued Netflix for copyright infringement over the use of his video clips of Joe Exotic in the series. A federal district court in Oklahoma found Netflix’s use of one of the video clips—documenting Joe Exotic’s eulogy for his husband Travis Maldonado—to be a fair use. A three-judge panel of the Court of Appeals for the Tenth Circuit reversed that decision and remanded the case, ruling that the use of the video was not “transformative,” a concept that’s often at the heart of fair use decisions.

The appeals court based its ruling on a mistaken interpretation of the Supreme Court’s opinion in Andy Warhol Foundation for the Visual Arts v. Goldsmith. Warhol was a deliberately narrow decision that upheld the Supreme Court’s prior precedents about what makes a use transformative while emphasizing that commercial uses are less likely to be fair. The Supreme Court held that commercial re-uses of a copyrighted work—in that case, licensing an Andy Warhol print of the artist Prince for a magazine cover when the print was based on a photo that was also licensed for magazine covers—required a strong justification. The Warhol Foundation’s use of the photo was not transformative, the Supreme Court said, because Warhol’s print didn’t comment on or criticize the original photograph, and there was no other reason why the foundation needed to use a print based on that photograph in order to depict Prince. In Whyte Monkee, the Tenth Circuit homed in on the Supreme Court’s discussion about commentary and criticism but mistakenly read it to mean that only uses that comment on an original work are transformative. The court remanded the case to the district court to re-do the fair use analysis on that basis.

As EFF, along with Authors Alliance, American Library Association, Association of Research Libraries, and Public Knowledge explained in an amicus brief supporting Netflix’s request for a rehearing, there are many kinds of transformative fair uses. People creating works of history or biography frequently reproduce excerpts from others’ copyrighted photos, videos, or artwork as indispensable historical evidence. For example, using sketches from the famous Zapruder film in a book about the assassination of President Kennedy was deemed fair, as was reproducing the artwork from Grateful Dead posters in a book about the band. Software developers use excerpts from others’ code—particularly declarations that describe programming interfaces—to build new software that works with what came before. And open government organizations, like EFF client Public.Resource.Org, use technical standards incorporated into law to share knowledge about the law. None of these uses involves commentary or criticism, but courts have found them all to be transformative fair uses that don’t require permission.

The Supreme Court was aware of these uses and didn’t intend to cast doubt on their legality. In fact, the Supreme Court cited to many of them favorably in its Warhol decision. And the Court even engaged in some non-commentary fair use itself when it included photos of Prince in its opinion to illustrate how they were used on magazine covers. If the Court had meant to overrule decades of court decisions, including its own very recent Google v. Oracle decision about software re-use, it would have said so.

Fortunately, the Tenth Circuit heeded our warning, and the warnings of Netflix, documentary filmmakers, legal scholars, and the Motion Picture Association, all of whom filed briefs. The court vacated its decision and asked for further briefing about Warhol and what it means for documentary filmmakers.

The bizarre story of Joe Exotic and his friends and rivals may not be as important to history as the Kennedy assassination, but fair use is vital to bringing us all kinds of learning and entertainment. If other courts start treating the Warhol decision as a radical rewriting of fair use law when that’s not what the Supreme Court said at all, many kinds of free expression will face an uncertain future. That’s why we’re happy that the Tenth Circuit withdrew its opinion. We hope the court will, as the Supreme Court did, reaffirm the importance of fair use.

Mitch Stoltz

The Cybertiger Strikes Again! EFF's 8th Annual Tech Trivia Night

2 months 1 week ago

Being well into spring, with the weather getting warmer, we knew it was only a matter of time till the Cybertiger awoke from his slumber. But we were prepared. Prepared to quench the Cybertiger’s thirst for tech nerds ready to answer his obscure and fascinating tech trivia questions.

But how did we prepare for the Cybertiger’s quiz? Well, with our 8th Annual Tech Trivia Night of course! We gathered fellow digital freedom supporters to test their tech know-how, and to eat delicious tacos, churros, and special tech-themed drinks, including LimeWire, Moderated Content, and Zero Cool.

Nine teams gathered before the Cybertiger, ready to battle for the *new* wearable first, second, and third place prizes.

But this year, the Cybertiger had a surprise up his sleeve! A new way to secure points had been added: bribes. Now, teams could donate to EFF to sway the judges and increase their total points to secure their lead. Still, the winner of the first-place prize was the Honesty Winner, so participants needed to be on their A-game to win!

At the end of round two of six, teams Bad @ Names and 0x41434142 were tied for first place, making for a tense game! It wasn’t until the bonus question after round two, when the Cybertiger asked each team, “What prompt would you use to jailbreak the Cybertiger AI?”, that Bad @ Names pulled into first place with their answer.

By the end of round four, Bad @ Names was still in first place, but leading by only three points! Could they win the bonus question again? This time, each team was asked to create a ridiculous company elevator pitch that would fit right in on the RSA expo floor. (Spoiler alert: these company ideas were indeed ridiculous!)

After the sixth round of questions, the Cybertiger gave one last chance for teams to scheme their way to victory! The suspense built, but after some time, we got our winners... 

In third place, AI Hallucinations with 60 total points! 

In second place, and also winning the bribery award, 0x41434142, with 145 total points!

In first place... Bad @ Names with 68 total points!

EFF’s sincere appreciation goes out to the many participants who joined us for a great quiz over tacos and drinks while never losing sight of EFF’s mission to drive the world towards a better digital future. Thank you to the digital freedom supporters around the world helping to ensure that EFF can continue working in the courts and on the streets to protect online privacy and free expression.

Thanks to EFF's Luminary Organizational Members DuckDuckGo, No Starch Press, and the Hering Foundation for their year-round support of EFF's mission. If you or your company are interested in supporting a future EFF event, or would like to learn more about Organizational Membership, please contact Tierney Hamilton.

Learn about upcoming EFF events when you sign up for our email list, or just check out our event calendar. We hope to see you soon!

Christian Romero

Coalition to Calexico: Think Twice About Reapproving Border Surveillance Tower Next to a Public Park

2 months 1 week ago

Update May 15, 2024: The letter has been updated to include support from the Southern Border Communities Coalition. It was re-sent to the Calexico City Council. 

On the southwest side of Calexico, a border town in California’s Imperial Valley, a surveillance tower casts a shadow over a baseball field and a residential neighborhood. In 2000, the Immigration and Naturalization Service (the precursor to the Department of Homeland Security (DHS)) leased the corner of Nosotros Park from the city for $1 a year for the tower. But now the lease has expired, and DHS component Customs & Border Protection (CBP) would like the city to re-up the deal.  

But times—and technology—have changed. CBP’s new strategy calls for adopting powerful artificial intelligence technology not only to control the towers, but also to scan, track, and categorize everything they see.

Now, privacy and social justice advocates including the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition have joined EFF in sending the city council a letter urging them to not sign the lease and either spike the project or renegotiate it to ensure that civil liberties and human rights are protected.  

The groups write:  

The Remote Video Surveillance System (RVSS) tower at Nosotros Park was installed in the early 2000s when video technology was fairly limited and the feeds required real-time monitoring by human personnel. That is not how these cameras will operate under CBP's new AI strategy. Instead, these towers will be controlled by algorithms that will autonomously detect, identify, track and classify objects of interest. This means that everything that falls under the gaze of the cameras will be scanned and categorized. To an extent, the AI will autonomously decide what to monitor and recommend when Border Patrol officers should be dispatched. While a human being may be able to tell the difference between children playing games or residents getting ready for work, AI is prone to mistakes and difficult to hold accountable. 

In an era where the public has grave concerns on the impact of unchecked technology on youth and communities of color, we do not believe enough scrutiny and skepticism has been applied to this agreement and CBP's proposal. For example, the item contains very little in terms of describing what kinds of data will be collected, how long it will be stored, and what measures will be taken to mitigate the potential threats to privacy and human rights. 

The letter also notes that CBP’s tower programs have repeatedly failed to achieve the promised outcomes. In fact, the DHS Inspector General found that the early 2000s program, “yielded few apprehensions as a percentage of detection, resulted in needless investigations of legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts.”  

The groups are calling for Calexico to press pause on the lease agreement until CBP can answer a list of questions about the impact of the surveillance tower on privacy and human rights. Should the city council insist on going forward, they should at least require regular briefings on any new technologies connected to the tower and the ability to cancel the lease on much shorter notice than the 365 days currently spelled out in the proposed contract.  

Dave Maass

One (Busy) Day in the Life of EFF’s Activism Team

2 months 1 week ago

EFF is an organization of lawyers, technologists, policy professionals, and, importantly, full-time activists who fight to make sure that technology enhances rather than threatens civil liberties on a global scale. EFF’s activism team includes experienced issue experts, master communicators, and grassroots organizers who coordinate and orchestrate EFF’s activist campaigns, which include but go well beyond litigation, technical analyses and solutions, and direct lobbying of legislators.

If you’ve ever wondered what it would be like to work on the activism team at EFF, or if you are curious about applying for a job at EFF, take a look at one exceptional (but also fairly ordinary) day in the life of five members of the team:

Jillian York, Director For International Freedom of Expression

I wake up around 9:00, make coffee, and check my email and internal messages (we use Mattermost, a self-hosted chat tool). I live in Berlin—between four and nine hours ahead of most of my colleagues—which on most days enables me to get some “deep work” done before anyone else is online.

I see that one of my colleagues in San Francisco left a late-night message asking for someone to edit a short blog post. No one else is awake yet, so I jump on it. I then work on a piece of writing of my own, documenting the case of Alaa Abd El Fattah, an Egyptian technologist, blogger, and EFF supporter who’s been imprisoned on and off for the past decade. After that, I respond to some emails and messages from colleagues from the day prior.

EFF offers us flexible hours, and since I’m in Europe I often have to take calls in the evening (6 or 7 pm my time is 9 or 10 am San Francisco time, when a lot of team meetings take place). I see this as an advantage, as it allows me to meet a friend for lunch and hit the gym before heading back to work. 

There’s a dangerous new bill being proposed in a country where we don’t have much expertise, but which looks likely to have a broader impact across the region, so a colleague and I hop on a call with a local digital rights group to plan a strategy. When we work internationally, we always consult or partner with local groups to make sure that we’re working toward the best outcome for the local population.

While I’m on the call, my Signal messages start blowing up. A lot of the partners we work with in another region of the world prefer to organize there for reasons of safety, and there’s been a cyberattack on a local media publication. Our partners are looking for some assistance in dealing with it, so I send some messages to colleagues (both at EFF and other friendly organizations) to get them the right help.

After handling some administrative tasks, it’s time for the meeting of the international working group. In that group, we discuss threats facing people outside the U.S., often in areas that are underrepresented by both U.S. and global media.

After that meeting, it's off to prep for a talk I'll be giving at an upcoming conference. There have been improvements in social media takedown transparency reporting, but there are a lot of ways to continue that progress, and a former colleague and I will be hosting a mock game show about the heroes and anti-heroes of transparency. By the time I finish that, it's nearly 11 pm my time, so it's off to bed for me, but not for everyone else!

Matthew Guariglia, Senior Policy Analyst Responsible for Government Surveillance Advocacy

My morning can sometimes start surprisingly early. This morning, a reporter I often speak to called to ask if I had any comments about a major change to how Amazon Ring security cameras will allow police to request access to users’ footage. I quickly try to make sense of the new changes—Amazon’s press release doesn’t say nearly enough. Giving a statement to the press requires a brief huddle between me, EFF’s press director, and other lawyers, technologists, and activists who have worked on our Ring campaign over the last few years. Soon, we have a statement that conveys exactly what we think Amazon needs to do differently, and what users and non-users should know about this change and its impact on their rights. About an hour after that, we turn our brief statement into a longer blog post for everyone to read.

For the rest of the day, in between other obligations and meetings, I take press calls and do TV interviews with curious reporters asking whether this change in policy is a win for privacy. My first meeting is with representatives of about a dozen mostly local groups in the Bay Area, where EFF is located, about the next steps for opposing Proposition E, a ballot measure that would greatly reduce oversight of what technology the San Francisco Police Department uses. I send a few requests to our design team about printing window signs and then talk with our Activism Director about making plans to potentially fly a plane over the city. Shortly after that, I’m in a coalition meeting of national civil liberties organizations discussing ways of keeping a clean reauthorization of Section 702 (a mass surveillance authority that expires this year) out of a must-pass bill that would continue to fund the government.

In the afternoon, I watch and take notes as a Congressional committee holds a hearing about AI use in law enforcement. Keeping an eye on this allows me to see what arguments and talking points law enforcement is using, which members of Congress seem critical of AI use in policing and might be worth getting in touch with, and whether there are any revelations in the hearing that we should communicate to our members and readers. 

After the hearing, I have to briefly send notes to a Senator and their staff on a draft of a public letter they intend to send to industry leaders about data collection—and when law enforcement may or may not request access to stored user data. 

Tomorrow, I’ll follow up on many of the plans made over the course of this day: I’ll need to send out a mass email to EFF supporters in the Bay Area rallying them to join in the fight against Proposition E, and review new federal legislation to see if it offers enough reform of Section 702 that EFF might consider supporting it.

Hayley Tsukayama, Associate Director of Legislative Activism

I settle in with a big mug of tea to start a day full of online meetings. This probably sounds boring to a lot of people, but I know I'll have a ton of interesting conversations today.

Much of my job coordinating our state legislative work requires speaking with like-minded organizations across the country. EFF tries, but we can't be everywhere we want to be all of the time. So, for example, we host a regular call with groups pushing for stronger state consumer data privacy laws. This call gives us a place to share information about a dozen or more privacy bills in as many states. Some groups on the call focus on one state; others, like EFF, work in multiple states. Our groups may not agree on every bill, but we're all working toward a world where companies must respect our privacy by default.

You know, just a small goal.

Today, we get a summary of a hearing that a friendly lawmaker organized to give politicians from several states a forum to explain how big tech companies, advertisers, and data brokers have stymied strong privacy legislation. This is one reason we compare notes: the more we know about what they're doing, the better we can fight them—even though the other side has more money and staff for state legislative work than all of us combined.

From there, I jump to a call on emerging AI legislation in states. Many companies pushing weak AI regulation make software that monitors employees, so this work has connected me to a universe of labor advocates I've never gotten to work with before. I've learned so much from them, both about how AI affects working conditions and about the ways they organize and mobilize people. Working in coalitions shows me how different people bring their strengths to a broader movement.

At EFF, our activists know: we win with words. I make a note to myself to start drafting a blog post on some bad copy-paste AI bills showing up across the country, which companies have carefully written to exempt their own products.

My position lets me stick my nose into almost every EFF issue, which is one thing I love about it. For the rest of the day, I meet with a group of right-to-repair advocates whose decades of advocacy have racked up incredible wins in the past couple of years. I update a position letter to the California legislature about automotive data. I send a draft action to one of our lawyers—whom I get to work with every day—about a great Massachusetts bill that would prohibit the sale of location data without permission. I debrief with two EFF staffers who testified this week in Sacramento on two California bills—one on IP issues, another on police surveillance. I polish a speech I’m giving with one of my colleagues, who has kindly made time to help me. I prep for a call with young activists who want to discuss a bill idea.

There is no "typical" day in my job. The one constant is that I get to work with passionate people, at EFF and outside of it, who want to make the world a better place. We tackle tough problems, big and small—but always ones that matter. And, sure, I have good days and bad days. But I can say this: they are rarely boring.

Rory Mir, Associate Director of Community Organizing 

As an organizer at EFF, I juggle long-term projects and needs with rapid responses for both EFF and our local allies in our grassroots network, the Electronic Frontier Alliance. Days typically start with morning rituals that keep me grounded as a remote worker: I wake up, make coffee, put on music. I log in, set TODOs, clear my inbox. I get dressed, check the news, take the morning dog walk.

Back at my desk, I start with small tasks—reach out to a group I met at a conference, add an event to the EFF calendar, and promote EFA events on social media. Then, I get a call from a Portland EFA group. A city ordinance shedding light on police use of surveillance tech needs support. They’re working on a coalition letter EFF can sign, so I send it along to our street level surveillance team, schedule a meeting, and reach out to aligned groups in PDX.

Next up is a policy meeting on consumer privacy. Yesterday in Congress, the House passed a bill undermining privacy (again) and we need to kill it (again). We discuss key Senate votes, and I remember that an EFA group had a good relationship with one of those members in a campaign last year. I reach out to the group with links on our current campaign and see if they can help us lobby on the issue.

After a quick vegan lunch, I start a short Deeplinks post celebrating a major website connecting to the Fediverse, promoting folks’ autonomy online. I’m not quite done in time for my next meeting, planning an upcoming EFA meetup with my team. Before we get started, though, an urgent message from San Diego interrupts us—the city council moved a crucial hearing on ALPRs to tomorrow. We reschedule and pivot to drafting an action alert email for the area as well as social media pushes to rally support.

In the home stretch, I set that meeting with Portland groups and make sure our newest EFA member has information on our workshop next week. After my last meeting for the day, a coalition call on Right to Repair (with Hayley!), I send my blog to a colleague for feedback, and wrap up the day in one of our off-topic chats. While passionately ranking Godzilla movies, my dog helpfully reminds me it’s time to log off and go on another walk.

Thorin Klosowski, Security and Privacy Activist

I typically start my day with reading—catching up on some broad policy things, but just as often poking through product-related news sites and consumer tech blogs—so I can keep an eye out for any new sorts of technology terrors that might be on the horizon, privacy promises that seem too good to be true, or any data breaches and other security guffaws that might need to be addressed.

If I’m lucky (or unlucky, depending on how you look at it), I’ll find something strange enough to bring to our Public Interest Technology crew for a more detailed look. Maybe it’ll be the launch of a new feature that promises privacy but doesn’t seem to deliver it, or in rare cases, a new feature that actually seems to. In either instance, if it seems worth a closer look, I’ll often then chat through all this with the technologists who specialize in the technology at play, then decide whether it’s worth writing something, or just keeping in our deep log of “terrible technologies to watch out for.” This process works in reverse, too—where someone on the PIT team brings up something they’re working on, like sketchyware on an Android tablet, and we’ll brainstorm some ways to help people who’re stuck with these types of things make them less sucky.

Today, I’m also tagging along with a couple of members of the PIT team at a meeting with representatives from a social media company that’s rolling out a new feature in its end-to-end encryption chat app. The EFF technologists will ask smart, technical questions and reference research papers with titles like, “Unbreakable: Designing for Trustworthiness in Private Messaging” while I furiously take notes and wonder how on earth we’ll explain all the positive (or negative) effects on individual privacy this feature might pose if it does in fact release.

With whatever time I have left, I’ll then work on Surveillance Self-Defense, our guide to protecting you and your friends from online spying. Today, I’m working through updating several of our encryption guides, which means chatting with our resident encryption experts on both the legal and PIT teams. What makes SSD so good, in my eyes, is how much knowledge backs every single word of every guide. This is what sets SSD apart from the graveyard of security guides online, but it also means a lot of wrangling to get eyes on everything that goes on the site. Sometimes a guide update clicks together smoothly and we update things quickly. Sometimes one update to a guide cascades across a half dozen others, and I start to feel like I have one of those serial killer boards—but I’m keeping track of several serial killers across multiple timelines. But however an SSD update plays out, it all needs to get translated, so I’ll finish off the day with a look at a spreadsheet of all the translations to make sure I don’t need to send anything new over (or, just as often, realize I’ve already gotten translations back that need to be put online).


We love giving people a picture of the work we do on a daily basis at EFF to help protect your rights online. Our former Activism Directors, Elliot Harmon and Rainey Reitman, each wrote one of these blogs in the past as well. If you’d like to join us on the EFF Activism Team, or anywhere else in the organization, check out opportunities to do so here.

Matthew Guariglia

Speaking Freely: Mohamed El Gohary

2 months 1 week ago

Interviewer: Jillian York

Mohamed El Gohary is an open-knowledge enthusiast. After majoring in Biomedical Engineering, he switched careers in October 2010 to work as a social media manager for the Al-Masry Al-Youm newspaper until October 2011, when he joined Global Voices, managing Lingua until the end of 2021. He now works for IFEX as the MENA Network Engagement Specialist.

This interview has been edited for length and clarity.

York: What does free speech or free expression mean for you?

Free speech, freedom of expression, means for me the ability of people to govern themselves. It means to me that the real meaning of democracy cannot happen without freedom of speech, without people expressing their needs across different spectrums. It’s the idea of civic space, the idea of people basically living their lives and using different means of communication to get things done, right through freedom of speech.

York: What’s an experience that shaped your views on freedom of expression?

Well, my background is using the internet. So in the early days of using the internet, I always believed that it would enable people to express themselves in a way that supports a better democratic process. But that has changed, because online spaces have shifted from decentralized to centralized, and centralized spaces are the antithesis of democracy. So the internet turned into an oligarch’s world. Which is, again, going back to freedom of expression. I think there are uncharted territories in terms of activism, in terms of platforms online and offline, to maybe reinvent the wheel in a way for people to have a better democratic process in terms of freedom of expression. 

York: You came up in an era where social media had so much promise, and now, like you said about the oligarchical online space—which I tend to agree with—we’re in kind of a different era. What are your views right now on regulation of social media?

Well, it’s still related to the democratic process. It’s a similar conversation to, let’s say, the Internet Governance Forum: where is the decision making? Who has the power dynamics around decision making? So there are governments, then there are private companies, then there is law and the rule of law, and then there is civil society. And there’s good civil society and there’s bad civil society, in terms of their relationship with both governments and companies. So it goes back to freedom of expression as a collective and in an individual manner. And it comes to people and freedom of assembly in terms of absolute right and in terms of practice, to reinvent the democratic process. It’s the whole system. It turns out it’s not just freedom of expression. Freedom of expression has an important role, and the democratic process can’t be reinvented without looking at freedom of expression. The whole system, democracy, Western democracy and how different countries apply it, works in ways that affect and create the power of the rich and powerful while the rest of the population just loses hope in different ways. Everything goes back to reinventing the democratic process. And freedom of expression is a big part of it.

York: So this is a special interview, we’re here at the IFEX general meeting. What are some of the things that you’re seeing here, either good or bad, and maybe even what are some things that give you hope about the IFEX network?

I think, inside the IFEX network and the extended IFEX network, it’s the importance of connection. It’s the importance of collaboration. Different governments always try to work together to establish their power structures, while the resources governments have are not always available to civil society. So it’s important for civil society organizations—and IFEX is an example of collaboration between a large number of organizations around the world—that these kinds of collaborations happen across different organizations, in all scales, in all directions, while still encouraging every organization to look at itself as an organization, to look at how it’s working. To ask themselves: is it just a job? Are we working for a cause? Are we working for a cause in the right way? It’s the other side of the coin to how governments work and maintain existing power structures. There needs to be the other side of the coin in terms of, again, reinventing the democratic process.

York: Is there anything I didn’t ask that you want to mention?

My only frustration is where organizations work as if it is a job, and they only do the minimum, for example. And that’s in a good case scenario. A bad case scenario is when a civil society organization is working for the government or for private companies—where organizations can be a burden more than a resource. I don’t know how to approach that without cost. Cost is difficult, cost is expensive, it’s ugly, it’s not something you look for when you start your day. And there is a very small number of people and organizations who would be willing to even think about paying the price of being an inconvenience to organizations that are burdening entities. That would be my immediate and long term frustration with civil society at least in my vicinity.

York: Who is your free speech hero?

For me, as an Egyptian, that would be Alaa Abd El-Fattah. As a person who is a perfect example of looking forward to being an inconvenience. And there are not a lot of people who would be this kind of inconvenience. There are many people who appear like they are an inconvenience, but they aren’t really. This would be my hero.

Jillian C. York

Big Tech to EU: "Drop Dead"

2 months 1 week ago

The European Union’s new Digital Markets Act (DMA) is a complex, many-legged beast, but at root, it is a regulation that aims to make it easier for the public to control the technology they use and rely on.  

One DMA rule forces the powerful “gatekeeper” tech companies to allow third-party app stores. That means that you, the owner of a device, can decide who you trust to provide you with software for it.  

Another rule requires those tech gatekeepers to offer interoperable gateways that other platforms can plug into - so you can quit using a chat client, switch to a rival, and still connect with the people you left behind (similar measures may come to social media in the future). 

There’s a rule banning “self-preferencing.” That’s when platforms push their often inferior, in-house products and hide superior products made by their rivals. 

And perhaps best of all, there’s a privacy rule, reinforcing the eight-year-old General Data Protection Regulation, a strong privacy law that has been flouted for too long, especially by the largest tech giants. 

In other words, the DMA is meant to push us toward a world where you decide which software runs on your devices, where it’s easy to find the best products and services, where you can leave a platform for a better one without forfeiting your social relationships, and where you can do all of this without getting spied on. 

If it works, this will get dangerously close to the better future we’ve spent the past thirty years fighting for. 

There’s just one wrinkle: the Big Tech companies don’t want that future, and they’re trying their damnedest to strangle it in its cradle.

 Right from the start, it was obvious that the tech giants were going to war against the DMA, and the freedom it promised to their users. Take Apple, whose tight control over which software its customers can install was a major concern of the DMA from its inception.

Apple didn’t invent the idea of a “curated computer” that could only run software that was blessed by its manufacturer, but they certainly perfected it. iOS devices will refuse to run software unless it comes from Apple’s App Store, and that control over Apple’s customers means that Apple can exert tremendous control over app vendors, too. 

Apple charges app vendors a whopping 30 percent commission on most transactions, both the initial price of the app and everything you buy from it thereafter. This is a remarkably high transaction fee—compare it to the credit-card sector, itself the subject of sharp criticism for its high 3-5 percent fees. To maintain those high commissions, Apple also restricts its vendors from informing their customers about the existence of other ways of paying (say, via their website) and at various times has also banned its vendors from offering discounts to customers who complete their purchases without using the app.  

Apple is adamant that it needs this control to keep its customers safe, but in theory and in practice, Apple has shown that it can protect you without maintaining this degree of control, and that it uses this control to take away your security when it serves the company’s profits to do so.

Apple is worth between two and three trillion dollars. Investors prize Apple’s stock in large part due to the tens of billions of dollars it extracts from other businesses that want to reach its customers. 

The DMA is aimed squarely at these practices. It requires the largest app store companies to grant their customers the freedom to choose other app stores. Companies like Apple were given over a year to prepare for the DMA, and were told to produce compliance plans by March of this year. 

But Apple’s compliance plan falls very short of the mark: between a blizzard of confusing junk fees (like the €0.50 per use “Core Technology Fee” that the most popular apps will have to pay Apple even if their apps are sold through a rival store) and onerous conditions (app makers who try to sell through a rival app store have their offerings removed from Apple’s store and are permanently banned from it), the plan in no way satisfies the EU’s goal of fostering competition in app stores. 

That’s just scratching the surface of Apple’s absurd proposal: Apple’s customers will have to successfully navigate a maze of deeply buried settings just to try another app store (and there are some pretty cool-sounding app stores in the wings!), and Apple will disable all your third-party apps if you take your phone out of the EU for 30 days.

Apple appears to be playing a high-stakes game of chicken with EU regulators, effectively saying, “Yes, you have 500 million citizens, but we have three trillion dollars, so why should we listen to you?” Apple inaugurated this performance of noncompliance by banning Epic, the company most closely associated with the EU’s decision to require third party app stores, from operating an app store and terminating its developer account (Epic’s account was later reinstated after the EU registered its disapproval). 

It’s not just Apple, of course.  

The DMA includes new enforcement tools to finally apply the General Data Protection Regulation (GDPR) to US tech giants. The GDPR is Europe’s landmark privacy law, but in the eight years since its passage, Europeans have struggled to use it to reform the terrible privacy practices of the largest tech companies. 

Meta is one of the worst on privacy, and no wonder: its entire business is grounded in the nonconsensual extraction and mining of billions of dollars’ worth of private information from billions of people all over the world. The GDPR should be requiring Meta to actually secure our willing, informed (and revocable) consent to carry on all this surveillance, and there’s good evidence that more than 95 percent of us would block Facebook spying if we could.

Meta’s answer to this is a “Pay or Okay” system, in which users who do not consent to Meta’s surveillance will have to pay to use the service, or be blocked from it. Unfortunately for Meta, this is prohibited (privacy is not a luxury good that only the wealthiest should be afforded).  

Just like Apple, Meta is behaving as though the DMA permits it to carry on its worst behavior, with minor cosmetic tweaks around the margins. Just like Apple, Meta is daring the EU to enforce its democratically enacted laws, implicitly promising to pit its billions against Europe’s institutions to preserve its right to spy on us. 

These are high-stakes clashes. As the tech sector grew more concentrated, it also grew less accountable, able to substitute lock-in and regulatory capture for making good products and having their users’ backs. Tech has found new ways to compromise our privacy rights, our labor rights, and our consumer rights - at scale. 

After decades of regulatory indifference to tech monopolization, competition authorities all over the world are taking on Big Tech. The DMA is by far the most muscular and ambitious salvo we’ve seen. 

Seen in that light, it’s no surprise that Big Tech is refusing to comply with the rules. If the EU successfully forces tech to play fair, it will serve as a starting gun for a global race to the top, in which tech’s ill-gotten gains - of data, power and money - will be returned to the users and workers from whom that treasure came. 

The architects of the DMA and DSA foresaw this, of course. They’ve announced investigations into Apple, Google and Meta, threatening fines of 10 percent of the companies’ global income, which will double to 20 percent if the companies don’t toe the line. 

It’s not just Big Tech that’s playing for all the marbles - it’s also the systems of democratic control and accountability. If Apple can sabotage the DMA’s insistence on taking away its veto over its customers’ software choices, that will spill over into the US Department of Justice’s case over the same issue, as well as the cases in Japan and South Korea, and the pending enforcement action in the UK.



Cory Doctorow

Victory! FCC Closes Loopholes and Restores Net Neutrality

2 months 1 week ago

Thanks to weeks of the public speaking up and taking action, the FCC has recognized the flaw in its proposed net neutrality rules. The FCC’s final adopted order on net neutrality restores bright line rules against all forms of throttling, once again creating strong federal protections for all Americans.

The FCC’s initial order had a narrow interpretation of throttling that could have allowed ISPs to create so-called fast lanes, speeding up access to certain sites and services and effectively slowing down other traffic flowing through your network. The order’s bright line rule against throttling now explicitly bans this kind of conduct, finding that the “decision to speed up ‘on the basis of Internet content, applications, or services’ would ‘impair or degrade’ other content, applications, or services which are not given the same treatment.” With this language, the order both hews more closely to the 2015 Order and further aligns with the strong protections Californians already enjoy via California’s net neutrality law.

As we celebrate this victory, it is important to remember that net neutrality is more than just bright line rules against blocking, throttling, and paid prioritization: It is the principle that ISPs should treat all traffic coming over their networks without discrimination. Customers, not ISPs, should decide for themselves how they would like to experience the internet. EFF—standing with users, innovators, creators, public interest advocates, libraries, educators and everyone else who relies on the open internet—will continue to champion this principle. 

Chao Liu

The FBI is Playing Politics with Your Privacy

2 months 1 week ago

A bombshell report from WIRED reveals that two days after the U.S. Congress renewed and expanded the mass-surveillance authority Section 702 of the Foreign Intelligence Surveillance Act, the deputy director of the Federal Bureau of Investigation (FBI), Paul Abbate, sent an email imploring agents to “use” Section 702 to search the communications of Americans collected under this authority “to demonstrate why tools like this are essential” to the FBI’s mission.

In other words, an agency that has repeatedly abused this exact authority—with 3.4 million warrantless searches of Americans’ communications in 2021 alone—thinks that the answer to its misuse of mass surveillance of Americans is to do more of it, not less. And it signals that the FBI believes it should do more surveillance—not because of any pressing national security threat, but because the FBI has an image problem.

The American people should feel a fiery volcano of white hot rage over this revelation. During the recent fight over Section 702’s reauthorization, we all had to listen to the FBI and the rest of the Intelligence Community downplay their huge number of Section 702 abuses (but, never fear, they were fixed by drop-down menus!). The government also trotted out every monster of the week in incorrect arguments seeking to undermine the bipartisan push for crucial reforms. Ultimately, after fighting to a draw in the House, Congress bent to the government’s will: it not only failed to reform Section 702, but gave the government authority to use Section 702 in more cases.

Now, immediately after extracting this expanded power and fighting off sensible reforms, the FBI’s leadership is urging the agency to “continue to look for ways” to make more use of this controversial authority to surveil Americans, albeit with the fig leaf that it must be “legal.” And not because of an identifiable, pressing threat to national security, but to “demonstrate” the importance of domestic law enforcement accessing the pool of data collected via mass surveillance. This is an insult to everyone who cares about accountability, civil liberties, and our ability to have a private conversation online. It also raises the question of whether the FBI is interested in keeping us safe or in merely justifying its own increased powers. 

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications so long as the surveillance is directed at foreigners currently located outside the United States. Section 702 prohibits the government from intentionally targeting Americans. But, because we live in a globalized world where Americans constantly communicate with people (and services) outside the United States, the government routinely acquires millions of innocent Americans' communications “incidentally” under Section 702 surveillance. Not only does the government acquire these communications without a probable cause warrant, so long as the government can make out some connection to FISA’s very broad definition of “foreign intelligence,” the government can then conduct warrantless “backdoor searches” of individual Americans’ incidentally collected communications. 702 creates an end run around the Constitution for the FBI and, with the Abbate memo, they are being urged to use it as much as they can.

The recent reauthorization of Section 702 also expanded this mass surveillance authority still further, expanding in turn the FBI’s ability to exploit it. To start, it substantially increased the scope of entities the government can require to turn over Americans’ data en masse under Section 702. This provision is written so broadly that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider—which could include landlords, maintenance people, and many others who routinely have access to your communications.

The reauthorization of Section 702 also expanded FISA’s already very broad definition of “foreign intelligence” to include counternarcotics: an unacceptable expansion of a national security authority to ordinary crime. Further, it allows the government to use Section 702 powers to vet hopeful immigrants and asylum seekers—a particularly dangerous authority which opens up this or future administrations to deny entry to individuals based on their private communications about politics, religion, sexuality, or gender identity.

Americans who care about privacy in the United States are essentially fighting a political battle in which the other side gets to make up the rules, the terrain…and even rewrite the laws of gravity if they want to. Politicians can tell us they want to keep people in the U.S. safe without doing anything to prevent that power from being abused, even if they know it will be. It’s about optics, politics, and security theater; not realistic and balanced claims of safety and privacy. The Abbate memo signals that the FBI is going to work hard to create better optics for itself so that it can continue spying in the future.   

Matthew Guariglia

No Country Should be Making Speech Rules for the World

2 months 1 week ago

It’s a simple proposition: no single country should be able to restrict speech across the entire internet. Any other approach invites a swift relay race to the bottom for online expression, giving governments and courts in countries with the weakest speech protections carte blanche to edit the internet.

Unfortunately, governments, including democracies that care about the rule of law, too often lose sight of this simple proposition. That’s why EFF, represented by Johnson Winter Slattery, has moved to intervene in support of X (formerly known as Twitter) in its legal challenge to a global takedown order from Australia’s eSafety Commissioner. The Commissioner ordered X and Meta to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post so Australian users couldn’t access it, but it declined to block it elsewhere. The Commissioner asked an Australian court to order a global takedown.

Our intervention calls the court’s attention to the important public interests at stake in this litigation, particularly for internet users who are not parties to the case but will nonetheless be affected by the precedent it sets. A ruling against X is effectively a declaration that an Australian court (or its eSafety Commissioner) can prevent internet users around the world from accessing something online, even if the law in their own country is quite different. In the United States, for example, the First Amendment guarantees that platforms generally have the right to decide what content they will host, and their users have a corollary right to receive it. 

We’ve seen this movie before. In Google v. Equustek, a company used a trade secret claim to persuade a Canadian court to order Google to delete search results linking to sites that contained allegedly infringing goods—not just from its Canadian domain, but from all of its other domains worldwide. Google appealed, but both the British Columbia Court of Appeal and the Supreme Court of Canada upheld the order. The following year, a U.S. court held the ruling couldn’t be enforced against Google in the United States. 

The Australian takedown order also ignores international human rights standards, restricting global access to information without considering less speech-intrusive alternatives. In other words: the Commissioner used a sledgehammer to crack a nut. 

If one court can impose speech-restrictive rules on the entire Internet—despite direct conflicts with the laws of a foreign jurisdiction as well as international human rights principles—the norms and expectations of all internet users are at risk. We’re glad X is fighting back, and we hope the judge will recognize the eSafety regulator’s demand for what it is—a big step toward unchecked global censorship—and refuse to let Australia set another dangerous precedent.

Related Cases: Google v. Equustek
Corynne McSherry

Free Speech Around the World | EFFector 36.6

2 months 2 weeks ago

Let's gather around the campfire and tell tales of the latest happenings in the fight for privacy and free expression online. Take care in roasting your marshmallows while we share ways to protect your data from political campaigns seeking to target you; seek nominees for our annual EFF Awards; and call for immediate action in the case of activist Alaa Abd El Fattah.

As the fire burns out, know that you can stay up-to-date on these issues with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:


EFFECTOR 36.6 - Free Speech Around the World

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

What Can Go Wrong When Police Use AI to Write Reports?

2 months 2 weeks ago

Axon—the maker of widely-used police body cameras and Tasers (and a company that keeps trying to arm drones)—has a new product: AI that will write police reports for officers. Draft One is a generative large language model system that reportedly takes audio from body-worn cameras and converts it into a narrative police report that police can then edit and submit after an incident. Axon bills this product as the ultimate time-saver for police departments hoping to get officers out from behind their desks. But this technology could present new issues for those who encounter police, especially the marginalized communities already subject to a disproportionate share of police interactions in the United States.

Responsibility and the Codification of (Intended or Otherwise) Inaccuracies

We’ve seen it before: grainy and shaky police body-worn camera video in which an arresting officer shouts, “Stop resisting!” This phrase can lead to greater use of force by officers or come with enhanced criminal charges. Sometimes, these shouts may be justified. But as we’ve seen time and again, the narrative of someone resisting arrest may be a misrepresentation. Integrating AI into narratives of police encounters might make an already complicated system even more ripe for abuse.


The public should be skeptical of a language algorithm's ability to accurately process and distinguish between the wide range of languages, dialects, vernaculars, idioms, and slang people use. As we've learned from watching content moderation develop online, software may have a passable ability to capture words, but it often struggles with context and meaning. In an often tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change how a police report is interpreted.

Moreover, as with all so-called artificial intelligence taking over consequential tasks and decision-making, the technology has the power to obscure human agency. Police officers who deliberately speak mistruths or exaggerations to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were caught in a lie concerning what’s in the report, an officer could claim that they did not lie: the AI simply mistranscribed what was happening in the chaotic video.

It’s also unclear how this technology will work in action. If the officer says aloud in a body camera video, “the suspect has a gun” how would that translate into the software’s narrative final product? Would it interpret that by saying “I [the officer] saw the suspect produce a weapon” or “The suspect was armed”? Or would it just report what the officer said: “I [the officer] said aloud that the suspect has a gun”? Interpretation matters, and the differences between them could have catastrophic consequences for defendants in court.

Review, Transparency, and Audits

The issue of review, auditing, and transparency raises a number of questions. Although Draft One allows officers to edit reports, how will it ensure that officers are adequately reviewing for accuracy rather than rubber-stamping the AI-generated version? After all, police have been known to arrest people based on a face recognition match without any follow-up investigation—contrary to vendors’ insistence that such results should be used as an investigative lead and not a positive identification.

Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ overly intense secrecy, combined with a frequent failure to comply with public records laws, how will the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of a report were generated by AI versus a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability.

Matthew Guariglia

Speaking Freely: Nompilo Simanje

2 months 2 weeks ago

Nompilo Simanje is a lawyer by profession and is the Africa Advocacy and Partnerships Lead at the International Press Institute. She leads the IPI Africa Program, which monitors and collects data on press freedom threats and violations across the continent, including threats to journalists’ safety and gendered attacks against journalists both online and offline, to inform evidence-based advocacy. Nompilo is an expert on the intersection of technology, the law, and human rights. She has years of experience in advocacy and capacity building aimed at promoting media freedom, freedom of expression, access to information, and the right to privacy. She also currently serves on the Advisory Board of the Global Forum on Cyber Expertise. Simanje is an alumna of the Open Internet for Democracy Leaders Program and the US State Department IVLP Program on Promoting Cybersecurity.

This interview has been edited for length and clarity.

York: What does free expression mean to you? 

For me, free expression or free speech is the capacity for one to be able to communicate their views and their opinions without any fear or without thinking that there might be some reprisals or repercussions for freely engaging in any conversation or on any issue, whether personal or of public interest. 

York: What are some of the qualities that have made you passionate about free speech?

Being someone who works in the civil society sector, I think when I look at free speech and free expression, I view it as an avenue for the realization of several other rights. One key thing for me is that free expression encourages interactive dialogue, it encourages public dialogue, which is very important. Especially for democracy, but also for transparency and accountability. Being based in Africa, we are always having conversations around corruption, around accountability by government actors and public officials. And I feel that free expression is a vehicle for that, because it allows people to be able to question those that hold power and to criticize certain conduct by people that are in power. Those are some of the qualities that I feel are very important for me when I think about free expression. It enables transparency and accountability, but also holding those in power to account, which is something I believe is very important for democracies in Africa. 

York: So you work all around the African continent. Broadly speaking, what are some of the biggest online threats you’re seeing today? 

The digital age has been quite a revolutionary development, especially when you think about free expression. And I always talk about this when I engage on the topic of digital rights, but it has opened the avenue for people to communicate across boundaries, across borders, across countries, but, at the same time—in terms of the impact of threats and risks—they become equally huge as well. As part of the work that I have been doing, there are a few key things that I’ve seen online. One would be the issue of legislation—that countries have increased or upscaled their regulation of the online space. And one of the biggest threats for me has been lawfare, seeing how countries have been implementing old and new laws to undermine free expression online. For example, cybercrime laws or even existing criminal law code or penal codes. So I’ve seen that increasingly happening in Africa. 

Other key things that come to mind are online harassment, which is also happening in various forms. Just last year at the 77th Session of the ACHPR (African Commission on Human and Peoples' Rights) we hosted a side event on the online safety of female journalists in Africa. And there were so many cases which were being shared about how female journalists are facing online harassment. One big issue discussed was targeted disinformation, where individuals spread false information about a certain individual as a way of discrediting them or undermining them or just attempting to silence them and ensure that they don’t communicate freely online. But also sometimes online harassment in the form of doxxing, where personal details are shared online. Someone’s address. Someone’s email. And people are mobilized to attack that person. I’ve seen all those cases happening and I feel that online harassment, especially towards female journalists and politicians, continues to be one of the biggest threats to free expression in the region. In addition, of course, to what state actors are doing. 

I think also, generally, what I’m also seeing as part of the regulation aspect, is sometimes even the suspension of news websites. Where journalists are using those platforms—you know, like podcasts, Twitter spaces—to freely express. So this increase in regulation is one of the key things I feel continues to threaten online expression, particularly in the region.

York: You also work globally, you serve on a couple of advisory boards, and I’m curious, coming from an African perspective, how you see things like the Cybercrime Treaty or other international developments impacting the nations that you work in? 

It’s a brilliant question because the Ad Hoc Committee for the UN Cybercrime Treaty just recently met. I think one of the aspects I’ve noticed is that sometimes African civil society actors are not meaningfully participating in global processes. And as a result, they don’t get to share their experiences and get to reflect on how some developments at the global level will impact the region. 

Just taking on the example you shared about the UN Cybercrime Treaty, as part of my role at IPI, we actually submitted a letter to the Ad Hoc Committee with about 49 other civil society actors within Africa, highlighting to the committee that if this treaty is enacted as it is currently crafted, with wide scope in terms of the crimes and minimal human rights safeguards, it would actually undermine free expression. And this was informed by our experiences with cybercrime laws in the region. And we’re saying we have seen how some authoritarian governments in the region have been using cybercrime laws. So imagine having a global treaty or a global cybercrime convention. It can be a tool for other authoritarian governments to justify some of their conduct which has been targeted at undermining free expression. Some of the examples include criminalizing inciting public violence or criminalizing publishing falsehoods. We have seen that consistently in several countries and how those laws have been used to undermine expression. I definitely think that whenever there are global engagements about conventions that can undermine fundamental rights it’s very important for Africa to be represented, particularly civil society, because civil society is there to promote human rights and ensure that human rights are safeguarded. 

Also, there have been other key discussions happening, for example, with the open-ended working group on ICTs. We’ve had conversations about cyber capacity-building in the region and how that would also look for Africa where internet penetration is not at its highest and already there are additional divisions where everyone is not able to freely express themselves online. I think all those deliberations need to be taken into account and they need to be contextualized. My opinion is that when I look at global processes and I think about Africa, I always feel that it’s important for civil society actors and key stakeholders to contribute meaningfully to those processes, but also for us to contextualize some of those discussions and deliberate on how they will potentially impact us. Even when I think about the Global Digital Compact and all those issues around the Compact that the Compact seeks to address, we also need to contextualize them with our experiences with countries in the region which have ongoing conflicts and with countries in the region that are led by military regimes—especially in West Africa. All those issues need to be taken into account when we deliberate about global conventions or global policies. So that’s how I’ve been approaching these conversations around the global process, but trying to contextualize them based on what’s happening in the region and what our experiences have been with similar legislation and policies. 

York: I’m also really curious, has your work touched on issues of content moderation? 

Yes, but not broadly, because I think our interaction with the platforms has been quite minimal, but, yes, we have engaged platforms before. I think I’ll give you an example of Somalia. There’ve been so many reported cases by our partners at the Somali Journalist Syndicate where individual accounts of journalists have been suspended, permanently suspended, and sometimes taken down, simply because political sympathizers of the government consistently report those accounts for expressing dissenting views. Or state actors have reached out to the platforms and asked them to intervene and suspend either pages or individual accounts. So we’ve had conversations with the platforms and we have issued public statements to highlight that, as far as content moderation is concerned, it is very important for the platforms to be transparent about requests that they’re receiving from governments, and also to be deliberate as far as media freedom is concerned. Especially where it relates to news or content that has been disseminated by media outlets or by pages or accounts utilized by journalists. Because in some countries you see governments consistently trying to undermine or ensure that journalists or media outlets do not fully utilize the online space. So that’s the angle from which we have interacted with the platforms as far as content moderation is concerned—just ensuring that as they undertake their work they prioritize media freedom, they prioritize journalists, but also they understand the operating context, that there are countries that are quite authoritarian where dissenting voices are being targeted. So we always try to engage the platforms whenever we get an opportunity to raise awareness where platforms are suspending accounts or taking down content where such content genuinely relates to expression or protected speech. 

York: Did you have any formative experiences that helped shape your views on freedom of expression? 

Funny story actually. When I was in high school I held certain positions of leadership, as head girl of my high school, but also serving in Junior Parliament. That’s an institution run by the Youth Council where young people in high school can form a shadow Parliament representing different constituencies across the country. I happened to be a part of that in high school. So, of course, that meant being in public spaces, and also generally my identity being known outside my circles. What that also meant was that it opened an avenue for me to be targeted by trolls online. 

At some point when I was in high school people posted some defamatory, false information about me on an online platform. And over the years I’ve seen that post still there, still in existence. When that happened, I was in high school, I was still a child. But I was interacting on Facebook, you know, we have used Facebook for so many years, that’s the platform I think so many of us have been most familiar with from the time we were still kids. When this post was put up it was posted through a certain page that was a tabloid of sorts. And no one knew who was behind that page, no one knew who was the administrator of that page. What that meant for me was there was no recourse. Because I didn’t even know who was behind this post, who posted this defamatory and false information about me. 

I think from there it really triggered an interest in me about regulation of free expression online. How do you approach issues around anonymity and how far can we go in terms of protecting free expression online in instances where, indeed, rights of other people are also being undermined? It really helped to shape my thoughts around regulation of social media, regulation of content online. So I think, for me, the position even in terms of the work I’ve continued to do in my adult life around digital rights literacy, I’ve really tried to emphasize a digital citizenship where the key focus is really to ensure that we can freely express, but we need to ensure the rights of others. Which is why I strongly condemn hate speech. Which is why I strongly condemn targeted attacks, for instance, on female politicians and female journalists. Because I know that while we can freely express ourselves, there are certain limitations or boundaries that we shouldn’t cross. And I think I learned that from experiencing that targeted attack on me online. 

York: Is there anything I haven’t touched on yet that you’d like to talk about? 

I’d like to maybe just speak briefly about the implications of free expression being undermined especially in the online space. And I’m emphasizing this because we are in the digital age where the online space has really provided a platform for the full realization of so many fundamental rights. So one of the key things I’ve seen is the increase in self-censorship. For example, if individuals are being arrested over their Tweets and Facebook posts, news websites are being suspended, there’s also an increase in self-censorship. But also limited participation in public dialogue. We have so many elections happening in 2024, and we’ve had recent elections happen in the region, also. Nigeria was a big election. DRC was another big election. What I’ve been seeing is really limited participation, especially by high risk groups like women and LGBTQI communities. Especially, for example, when they’ve been targeted in Uganda through legislation. So there’s been limited participation and interactive dialogue in the region because of all these various developments that have been happening. 

Also, one aspect that comes to mind for me is the correlation between free expression and freedom of assembly and association. Because we are also interacting with groups and other like-minded people in the online space. So while we are freely expressing, the online space is also a platform for assembly and association. And some people are also being robbed of that experience, of freely associating online, because of the threats or the attacks that have been targeting free expression. I think it’s also important for Africa to think about these implications—that when you’re targeting free expression, you’re also targeting other fundamental rights. And I think that’s quite important for me to emphasize as part of this conversation. 

York: Who is your free speech hero? Someone who has really inspired you? 

I haven’t really thought about that actually! I don’t think I have a specific person in mind, but I generally just appreciate everyone who freely expresses their mind, especially on Twitter, because Twitter can be quite brutal at times. But there are several individuals that I look at and really admire for their tenacity in continuing to engage on the platforms even when they’re constantly being targeted. I won’t mention a specific person, but I think, from a Zimbabwean perspective, I would highlight that I’ve seen several female politicians in Zimbabwe being targeted. Actually, I will mention, there’s a female politician in Zimbabwe, Fadzayi Mahere, she’s also an advocate. I’ll mention her as a free speech hero. Because every time I speak about online attacks or online gender-based violence in digital rights trainings, I always mention her. That’s because I’ve seen how she has been able to stand against so many coordinated attacks from a political front and from a personal front. Just to highlight that last year she published a video which had been circulating and trending online about a case where police had allegedly assaulted a woman who had been carrying a child on her back. And she tweeted about that and she was actually arrested, charged, and convicted for, I think, “publishing falsehoods”, or, there’s a provision in the criminal law code that I think is like “publishing falsehoods to undermine public authority or the police service.” So I definitely think she is a press freedom hero, her story is quite an interesting story to follow in terms of her experiences in Zimbabwe as a young lawyer and as a politician, and a female politician at that. 

Jillian C. York

Podcast Episode: Building a Tactile Internet

2 months 2 weeks ago

Blind and low-vision people have experienced remarkable gains in information literacy because of digital technologies, like being able to access an online library offering more than 1.2 million books that can be translated into text-to-speech or digital Braille. But it can be a lot harder to come by an accessible map of a neighborhood they want to visit, or any simple diagram, due to limited availability of tactile graphics equipment, design inaccessibility, and publishing practices.


(You can also find this episode on the Internet Archive and on YouTube.)

Chancey Fleet wants a technological future that’s more organically attuned to people’s needs, which requires including people with disabilities in every step of the development and deployment process. She speaks with EFF’s Cindy Cohn and Jason Kelley about building an internet that’s just and useful for all, and why this must include giving blind and low-vision people the discretion to decide when and how to engage artificial intelligence tools to solve accessibility problems and surmount barriers. 

In this episode you’ll learn about: 

  • The importance of creating an internet that’s not text-only, but that incorporates tactile images and other technology to give everyone a richer, more fulfilling experience. 
  • Why AI-powered visual description apps still need human auditing. 
  • How inclusiveness in tech development is always a work in progress. 
  • Why we must prepare people with the self-confidence, literacy, and low-tech skills they need to get everything they can out of even the most optimally designed technology. 
  • Making it easier for everyone to travel the two-way street between enjoyment and productivity online. 

Chancey Fleet’s writing, organizing and advocacy explores how cloud-connected accessibility tools benefit and harm, empower and expose communities of disability. She is the Assistive Technology Coordinator at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library, where she founded and maintains the Dimensions Project, a free open lab for the exploration and creation of accessible images, models and data representations through tactile graphics, 3D models and nonvisual approaches to coding, CAD and “visual” arts. She is a former fellow and current affiliate-in-residence at Data & Society; she is president of the National Federation of the Blind’s Assistive Technology Trainers Division; and she was recognized as a 2017 Library Journal Mover and Shaker.




The fact is, as I see it, that if you are presented with what seems, on a quick read, like good-enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete. What I've already noticed is blind people in droves dumping AI-generated descriptions of personal, sentimental images onto social media, and there is a certain hyper-normative quality to the language. Any scene that contains a child or a dog is heartwarming. Any sunset or sunrise is vibrant. Anything with a couch and a lamp is calm or cozy. Idiosyncrasies are left by the wayside.

Unflattering little aspects of an image are often unremarked upon, and I feel like I'm being served some Ikea pressboard of reality, and it is so much better than anything that we've had before on demand without having to involve a sighted human being. And it's good enough to mail, kind of like a Hallmark card, but do I want the totality of digital description online to slide into this hyper-normative, serene, anodyne description? I do not. I think that we need to do something about it.

That's Chancey Fleet describing one of the problems that has arisen as AI is increasingly used in assistive technologies. 

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

And I’m Jason Kelley, EFF’s Activism Director. This is our podcast, How to Fix the Internet.

On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we start to get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping into the fight when they DO go wrong. But this show is about optimism, hope and bright ideas for the future.

According to a National Health Interview Survey from 2018, more than 32 million Americans reported that they had vision loss, including blindness. And as our population continues to age, this number only increases. And a big part of fixing the internet means fixing it so that it works properly for everyone who needs and wants to use it – blind, sighted, and everyone in between.

Our guest today is Chancey Fleet. She is the Assistive Technology Coordinator for the New York Public Library, where she teaches people how to use assistive technology to make their lives easier and more accessible. She’s also the president of the Assistive Technology Trainers Division of the National Federation of the Blind. 

We started our conversation as we often do – by asking Chancey what the world could be like if we started getting it right for blind and low vision people. 

The unifying feature of rightness for blind and low vision folks is that we encounter a digital commons that plays to our strengths, and that means that it's easy for us to find information that we can access and understand. That might mean that web content always has semantic structure that includes things like headings for navigation. 

But it also includes things that we don't have much of right now, like a non-visual way to access maps and diagrams and images, because of course, the internet hasn't been in text-only mode for the rest of us for a really long time.

I think getting the internet right also means that we're able to find each other and build community because we're a really low incidence disability. So odds are your colleague, your neighbor, your family members aren't blind or low-vision, and so we really have to learn and produce knowledge and circulate knowledge with each other. And when the internet gets it right, that's something that's easy for us to do. 

I think that's so right. And it's honestly consistent with, I think, what every community wants, right? I mean, the Internet's highest and best use is to connect us to the people we wanna be connected to. And the way that it works best is if the people who are the users of it, the people who are relying on it have, not just a voice, but a role in how this works.

I've heard you talk about that in the context of what you call ‘ghostwritten code.’ Do you wanna explain what that is? Am I right? I think that's one of the things that has concerned you.

Yeah, you are right. A lot of people who work in design and development are used to thinking of blind and disabled people in terms of user stories and personas, and they may know on paper what the web content accessibility guidelines, for instance, say that a blind or low vision user or a keyboard-only user, or a switch user needs. The problems crop up when they interpret the concrete aspects of those guidelines without having a lived experience that leads them to understand usability in the real world.

I can give you one example. A few years ago, Google rolled out a transcribe feature within Google Translate, which I was personally super excited about. And by the way, I'm a refreshable Braille user, which means I use a Braille display with my iPhone. And if you were running VoiceOver, the screen reader for iPhone, when you launched the transcribe feature, it actually scolded you that it would not proceed, that it would not transcribe, until you plugged in headphones, because well-meaning developers and designers thought, well, VoiceOver users have phones that talk, and if those phones are talking, it's going to ruin the transcription, so we'll just prevent that from happening. They didn't know about me. They didn't know about refreshable Braille users or users that might have another way to use VoiceOver that didn't involve speech out loud.

And so that, I guess you could call it a bug, I would call it a service denial, was around for a few weeks until our community communicated back about it, and if there had been blind people in the room or Braille users in the room, that would've never happened.

I think this will be really interesting and useful for the designers at EFF who think a lot in user personas and also about accessibility. And I think just hearing what happens when you get it wrong and how simple the mistake can be is really useful I think for folks to think about inclusion and also just how essential it is to make sure there's more in-depth testing and personas as you're saying. 

I wanna talk a little bit about the variety of things you brought up in your opening salvo, which I think we're gonna cover a lot of. One of the points you mentioned, or maybe you didn't say it this way in the opening, but you've written and talked about it, is tactile graphics and something that's called the problem of image poverty online.

And that basically, as you mentioned, the internet is a primarily text-based experience for blind and low-vision users. But there are these tools that, in a better future, will be more accessible, both available and usable and effective. And I wonder if you could talk about some of those tools like tablets and 3D printers and things like that.

So it's wild to me the way that our access to information as blind folks has evolved given the tools that we've had. So, since the eighties or nineties we've had Braille embossers that are also capable of creating tactile graphics, which is a fancy way to say raised drawings.

A graphics-capable embosser can emboss up to a hundred dots per inch. So if you look at it visually, it's a bit pixelated, but it approaches the limits of tactile perception. And in this way, we can experience media that includes maybe braille in the form of labels, but also different line types, dotted lines, dashed lines, textured infills.

Tactile design is a little bit different from visual design because our perceptual acuity is lower. It's good to scale things up. And it's good to declutter items. We may separate layers of information out to separate graphics. If Braille were print, it would be a thirty-six point font, so we use abbreviations liberally when we need to squeeze some braille onto an image.

And of course, we can't use color to communicate anything semantic. So when the idea of a red line or a blue line goes away we start thinking about a solid line versus a dashed or dotted line. When we think about a pie chart, we think about maybe textures or labels in place of colors. But what's interesting to me is that although tactile graphics equipment has been on the market since at least the eighties (someone will probably come along and correct me that it's even earlier than that), most of that equipment is on the wrong side of an institutional locked door. It belongs to a disability services office in a university. It belongs to the makers of standardized tests. It belongs to publishers. I've often heard my library patrons say something along the lines of, oh yeah, there was a graphics embosser in my school, but I never got to touch it, I never got to use it. 

Sometimes the software that's used to produce tactile graphics is, in itself, inaccessible. And so I think blind people have experienced pretty remarkable gains in general in regard to our information literacy because of digital technologies and the internet. For example, I can go to an online library for people with print disabilities and have my choice of a million books right now.

And those can automatically be translated to text-to-speech or to digital braille. But if I want a map of the neighborhood that I'm going to visit tomorrow, or if I want a glimpse of how electoral races play out, that can be really hard to come by. And I think it is a combination of the limited availability of tactile graphics equipment, inaccessibility of design and publishing practices for tactile graphics, and then this sort of vicious circular lack of demand that happens when people don't have access. 

When I ask most blind people, they'll say that they've maybe encountered two or three tactile graphics in the past year, maybe less. Um, a lot of us got more than that during our K-12 instruction. But what I find, at least for myself, is that when tactile graphics are so strongly associated with standardized testing and homework and never associated with my own curiosity or fun or playfulness or exploration, for a long time, that actually dampened down my desire to experience tactile graphics.

And so most of us would say probably, if I can be so bold as to think that I speak for the community for a second, most of us would say that yes, we have the right to an accessible web. Yes, we have the right to digital text. I think far fewer of us are comfortable saying, or understand the power of saying, that we also have a right to images. And so in the best possible version of the internet that I imagine, we have three things. We have tactile graphics equipment that is bought more frequently, so there are economies of scale and the prices come down. We have tactile design and graphics design programs that are more accessible than what's on the market right now. And critically, we have enough access to tactile graphics online that people can find the kind of information that engages and compels them. And within 10 years or so, people are saying: we don't live in a text-only world, images aren't inherently visual, they are spatial, and we have a right to them.

I read a piece that you had written about the importance of data visualizations during the pandemic, and how important it was for that flatten-the-curve graph to be able to be seen, or touched in this case, by as many people as possible. That really struck me, but I also love this idea that we shouldn't have to get these tools only because they're necessary, but also because people deserve to be able to enjoy the experience of the internet.

Right, and you never know when enjoyment is going to lead to something productive or when something productive you're doing spins out into enjoyment. Somebody sent me a book of tactile origami diagrams. It's a four volume book with maybe 40 models in it, and I've been working through them all. I can do almost all of them now, and it's really hard as a blind person to go online and find origami instructions that make any sense from an accessibility perspective.

There is a wonderful website by Lindy Vandermeer out of South Africa that does great descriptive origami instruction. So it's all text directing you step by step by step. But the thing is, I'm a spatial thinker. I'm what you might think of as a visual thinker, and so I can get more out of a diagram that's showing me where to flip dot A to dot B than I can from reading three paragraphs. It's faster, it's more fluid, it's more fun. And so I treasure this book, and unfortunately every other blind person I show it to also treasures it and can't have it 'cause I've got one copy. And I just imagine a world in which, when there's a diagram on screen, we can use some kind of process to re-render it in a more optimal format for tactile exploration. That might mean AI or machine learning, and we can talk a little bit about that later. But a lot of what we learn about what we're good at, what we enjoy, what we want more of in life, we do find online these days, and I want to be able to dive into those moments of curiosity and interest without having to first engineer a seven-step plan to get access to whatever it is that's on my screen.

Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Chancey Fleet.

So let's talk a little bit about AI and I'd love to hear your perspective on where AI is gonna be helpful and where we ought to be cautious.

So if you are blind and reasonably online, and you have a smartphone, and you're somebody that's comfortable enough with your smartphone that you download apps on a discretionary basis, there's a good chance that you've heard of a new feature in the app Be My Eyes, called Be My AI: a describer powered by ChatGPT with computer vision.

You aim your camera at something, wait a few seconds, and a fairly rich description comes back. It's more detailed and nuanced than anything that AI or machine learning has delivered before, and so it strikes a lot of us as transformational and/or uncanny, and it allows us to grab glimpses of what I would call a hypothesized visual world. Because, as we all know, these AIs make up stories out of whole cloth, include details that aren't there, and skip details that to the average human observer would be obviously relevant. So I can know that the description I'm getting is probably not prioritized and detailed in quite the same way that a human describer would approach it.

So what's interesting to me is that, since interconnected blind folks have such a dense social graph, we are all sort of diving into this together and advising each other on what's going well and what's not. And I think that a lot of us are deriving authentic value from this experience, as bounded by caveats as it is. At the same time, I fear that when this technology scales, which it will if other forces don't counteract it, it may become a convincing enough business case that organizations and institutions skip human authoring of alt text to describe images online and substitute these rich-seeming descriptions generated by an AI, even if that's done in such a way that a human auditor can go in and make changes.

The fact is, as I see it, that if you are presented with what seems, on a quick read, like good-enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete.

I think what I hear in the answer is that it can be an augment to the humans doing the describing, but not a replacement for them, and that's where the "but it's cheaper" part comes in. Right. And I think keeping our North Star on using these systems in ways that assist people rather than replace people is coming up over and over again in the conversations around AI, and I'm hearing it in what you're saying as well.

Absolutely, and let me say as a positive it is both my due diligence as an educator and my personal joy to experiment with moments where AI technologies can make it easier for me to find information or learn things. For example, if I wanna get a quick visual description of the Bluebird trains that the MTA used to run, that's a question that I might ask AI.

I never would've bothered a human being with it. It was not central enough. But if I'm reading something and I want a quick visual description to fill it in, I'll do that.

I also really love using AI tools to look up questions about different artistic or architectural styles, or even questions about code.

I'm studying Python right now, because when I go to look for information online on these subjects, often I'm finding websites that are riddled with accessibility problems: a lack of semantic structure, graphics that are totally unlabeled, carousels that are hard for screen reader users to navigate. And so one really powerful and compelling thing that current conversational AI offers is that it lives in a text box, and it won't violate the conventions of a chat by throwing a bunch of unwanted visual or structural clutter my way.

And when I just want an answer and I'm willing to grant myself that I'm going to have to live with the consequences of trusting that answer, or do some lateral reference, do some double checking, it can be worth my while. And in the best possible world moving forward, I'd like us to be able to harness that efficiency and that facility that conversational AI has for avoiding the hyper visual in a way that empowers us, but doesn't foreclose opportunities to find things out in other ways.

As you're describing it, I'm envisioning, you know, my drunk friend, right? They might do okay telling me stuff, but I wouldn't rely on them for stuff that really matters.


You've also talked a little bit about the role of data privacy and consent and the special concerns that blind people have around some of the technologies that are offered to them. But making sure that consent is real. I'd love for you to talk a little bit about that.

When AI is deployed on the server side to fix accessibility problems, in lieu of baking accessibility in from the ground up in a website or an application, that does a couple of things. It avoids changing the culture around accessibility at the customer company itself. It also involves an ongoing cost and technology debt to the overlay company that an organization is using, and it builds in the need for ongoing supervision of the AI. So in a lot of ways, I think that that's not optimal. What I think is optimal is for developers and designers to use AI tools to flag issues in need of human remediation, and to use AI tools for education, to speed up their immersion into accessibility and usability concepts.

You know, AI can be used to make short work of things that used to take a little bit more time. When it comes to deploying AI tools to solve accessibility problems, I think that that is a suite of tools that is best left to the discretion of the user. So we can decide, on the user side, for example, when to turn on a browser extension that tries to make those remediations. Because when they're made for us at scale, that doesn't happen with our consent and it can have a lot of collateral impacts that organizations might not expect.

The points you're making are about being involved in different parts of the process. Right. It's clear that the people who use these tools, or who these tools are actually designed for, should be able to decide when to deploy them.

And it's also clear that they should be more involved, as you've mentioned a few times, in the creation. And I wanted to talk a little bit about that idea of inclusion, because it's sort of how we get to a place where consent is actually, truly given.

And it's also how we get to a place where these tools that are created do what they're supposed to do, and the companies that you're describing build the web the way that it should be built, so that people can access it.

We have to have inclusion in every step of the process to get to that place where all of these tools, and the web, and everything we're talking about actually work for everyone. Is inclusion across the spectrum a solution that you see as well?

I would say that inclusion is never a solution because inclusion is a practice and a process. It's something that's never done. It's never achieved, and it's never comprehensive and perfect. 

What I see as my role as an educator, when it comes to inclusion, is meeting people where they are: trying to raise awareness, among library patrons and everyone else I serve, about what technologies are available and the costs and benefits of each, and helping people road-map a path from their goals and their intentions to achieving the things that they want to do.

And so I think of inclusion as sort of a guiding frame and a constant set of questions that I ask myself about what I'm noticing, what I may not be noticing, what I might be missing, who's coming in, for example, for tech lessons, versus who we're not reaching. And how the goals of the people I serve might differ from my goals for them.

And it's all kind of a spider web of things that add up to inclusion as far as I'm concerned.

I like that framing of inclusion as kind of a process rather than an end state. And I think that framing is good because I think it really moves away from the checkbox kind of approach to things like, you know, did we get the disabled person in the room? Check! 

Everybody has different goals and different things that work for them and there isn't just one box that can be checked for a lot of these kinds of things.

Blind library patrons and blind people in general are as diverse as any library patrons or people in general. And that impacts our literacy levels. It impacts our thoughts and the thoughts of our loved ones about disability. It impacts our educational attainment, and especially for those of us who lose our vision later in life, it impacts how we interact with systems and services.

I would venture to say that at this time in the U.S., if you lose your vision as an adult, or if you grow up blind in a school system, the quality of literacy and travel and independent living instruction you receive is heavily dependent on the quality of the systems and infrastructure around you, who you know, and whether the people you know are primed to be disability advocates or mentors.

And I see such different outcomes when it comes to technology based on those things. And so we can't talk about a best possible world in the technology sphere without also imagining a world that prepares people with the self-confidence, the literacy skills, and the supports for developing low tech skills that are necessary to get everything that one can get out of even the most optimally designed technology. 

A step-by-step app for walking directions can be as perfect as it gets. But if the person you are equipping with that app is afraid to step out of their front door and start moving their cane back and forth, listening to the traffic, and trusting their reflexes and their instincts, because they haven't been taught how to trust those things, the app won't be used and there will be people who are unreached. And so technology can only succeed to the extent that the people using it are set up to succeed. And I think that is where a lot of our toughest work resides.

We're trying to fix the internet here, but the internet rests on the rest of the world. And if the rest of the world isn't setting people up for success, technology can't swoop in and solve a lot of these problems.

It needs to rest upon a solid foundation. I think that's just a wonderful place to close, because all of us sit on top of what John Perry Barlow called meatspace, right, and if meatspace isn't serving us, then the digital world can't solve for the problems that are not digital.

I would have loved to talk to Chancey for another hour. That was fantastic.

Yeah, that was a really fun conversation. And I have to say, I just love the idea of the internet going tactile. Right now it's all very visual, but we have the technology, some of the tools that she talked about, that really could make the internet something you could feel as well as see, including maps and other things that are pretty hard for people with low vision or blindness to navigate now.

Yeah, I didn't know before talking to her that these tools even existed. And when you hear about it, you're like, oh, of course they do. But it was clear from what she said that a lot of people don't have access to them. The tools are relatively new and they need to be spread out more. But when that happens, hopefully that does happen, it sort of then requires us to rethink how the internet is built in some ways, in terms of the hierarchy of text, and what kinds of graphics exist, and protocols for converting that information into tactile experiences for people.

Yeah, I think so. And it does sit upon something that she mentioned. I mean, she said these machines exist and have existed for a long time, but they're mainly in libraries or other places where people can't use them in their everyday lives. And I think one of the things that we ended with in the conversation was really important, which is that we're all sitting upon a society that doesn't make a lot of these tools as widely available as they need to be.

And, you know, the good news in that is that the hard problem has been solved, which is how do you build a machine like this? The problem that we ought to be able to address as a society is how do we make it available much more broadly? I use this quote a lot, but, you know, the future is here, it's just not evenly distributed. That seemed really, really clear in the way that she talked about these tools, which most blind people have used once or twice in school, but then don't get to use and make part of their everyday life.

Yeah. The way I heard this was that we have this problem solved sort of at an institutional level, where you can access these tools at an institution, but not at the individual level. And it's really helpful, and optimistic, to hear that they could exist in people's homes if we can just get that to happen. And I think what was really rare for this conversation is that, like you said, we actually do have the technology to do these things. A lot of times we're talking about what we need to improve or change about the technology, and how that technology doesn't quite exist or will always be problematic. In this case, sure, the technology can always get better, but it sounds like we're actually at a point where we have a lot of the problems solved, whether it's using tactile tablets, or creating ways for people to use technology to guide each other through places, whether that's through a person, through Be My Eyes, or even in some cases an AI, with the Be My AI version of that.

But we just haven't gotten to the point where those things work for everyone, and everyone has a level of technological proficiency that lets them use those things. And that's something that clearly we'll need to work on in the future.

Yeah, but she also pointed out the work that needs to be done to make sure that we're continuing to build the tech that actually serves this community. She talked about things like ghostwritten code, where people who don't have the experience are writing and building things based upon what they think people who are blind might want. So on the one hand, there's good news, because a lot of really good technology already exists. But I think she also didn't let us off the hook as a society about something that we see all across the board, which is that we need to have the direct input of the people who are going to be using the tools in the building of the tools, lest we end up on a whole other path, with things other than what people actually need. And, you know, what did they say? The lessons will be repeated until they are learned. This is one of those things where, over and over again, we find that the need for people who are building technologies to not just talk to the people who are going to be using them, but really embed those people in the development, is one of the ways we stay true to our goal, which is to build stuff that will actually be useful to people.

Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback, we'd love to hear from you. Visit and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some limited-edition merch like t-shirts or buttons or stickers, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Probably Shouldn't by J.Lang, commonGround by airtone, and Klaus by Skill_Borrower.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…


And I’m Cindy Cohn.

Josh Richman

Add Bluetooth to the Long List of Border Surveillance Technologies

2 months 2 weeks ago

A new report from news outlet NOTUS shows that at least two Texas counties along the U.S.-Mexico border have purchased a product that would allow law enforcement to track devices that emit Bluetooth signals, including cell phones, smartwatches, wireless earbuds, and car entertainment systems. This incredibly personal model of tracking is the latest layer of surveillance infrastructure along the U.S.-Mexico border, where communities are not only exposed to a tremendous amount of constant monitoring, but also serve as a laboratory where law enforcement agencies at all levels of government test new technologies.

The product now being deployed in Texas, called TraffiCatch, can detect Wi-Fi and Bluetooth signals in moving cars to track them. Webb County, which includes Laredo, has had TraffiCatch technology since at least 2019, according to GovSpend procurement data. Val Verde County, which includes Del Rio, approved the technology in 2022.

This data collection is possible because all Bluetooth devices regularly broadcast a Bluetooth Device Address. This address can be either a public address or a random address. Public addresses don't change for the lifetime of the device, making them the easiest to track. Random addresses are more common and have multiple levels of privacy, but for the most part change regularly (this is the case with most modern smartphones and products like AirTags). Bluetooth products with random addresses would be hard to track for a device that hasn't paired with them. But if the tracked person is also carrying a Bluetooth device that has a public address, or if tracking devices are placed close enough to each other that a device is seen multiple times before it changes its address, random addresses could be correlated with that person over long periods of time.

It is unclear whether TraffiCatch is doing this sort of advanced analysis and correlation, and how effective it would be at tracking most modern Bluetooth devices.
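The correlation idea described above can be sketched in a few lines of code. This is a toy simulation only, not a description of TraffiCatch's actual method; the addresses, checkpoint counts, and function names are all illustrative. It models one traveler carrying both a device with a static public address (like an older car system) and a device that rotates its random address at every sighting (like a modern phone), and shows how co-occurrence at checkpoints links the rotating addresses back to one person:

```python
import random
from collections import defaultdict

def rotate_address(rng):
    """Generate a fresh 48-bit address, rendered as hex octets."""
    return ":".join(f"{rng.randrange(256):02x}" for _ in range(6))

def simulate_sightings(num_checkpoints=20, seed=0):
    """One traveler passes a row of sensors carrying two devices:
    one with a fixed public address, one that rotates its random
    address at every checkpoint."""
    rng = random.Random(seed)
    public_addr = "aa:bb:cc:dd:ee:ff"  # never changes
    sightings = []
    for checkpoint in range(num_checkpoints):
        phone_addr = rotate_address(rng)  # new address each time
        sightings.append((checkpoint, public_addr))
        sightings.append((checkpoint, phone_addr))
    return sightings, public_addr

def correlate(sightings, anchor_addr):
    """Collect every address co-seen with the anchor at each
    checkpoint; those co-occurrences tie the rotating addresses
    to the same traveler despite the rotation."""
    by_checkpoint = defaultdict(set)
    for checkpoint, addr in sightings:
        by_checkpoint[checkpoint].add(addr)
    linked = set()
    for addrs in by_checkpoint.values():
        if anchor_addr in addrs:
            linked |= addrs - {anchor_addr}
    return linked

sightings, anchor = simulate_sightings()
linked = correlate(sightings, anchor)
print(f"{len(linked)} rotating addresses linked to one traveler via {anchor}")
```

The point of the sketch is that address rotation alone only protects a device in isolation: a single stable identifier traveling alongside it, or sensors spaced closer together than the rotation interval, is enough to stitch the rotating identities back together.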

According to TraffiCatch’s manufacturer, Jenoptik, this data derived from Bluetooth is also combined with data collected from automated license plate readers (ALPRs), another form of vehicle tracking technology placed along roads and highways by federal, state, and local law enforcement throughout the Texas border region. ALPRs are a well-understood technology for vehicle tracking, but the addition of Bluetooth tracking may allow law enforcement to track individuals even if they are using different vehicles.

This mirrors what we already know about how Immigration and Customs Enforcement (ICE) has been using cell-site simulators (CSSs). Also known as Stingrays or IMSI catchers, CSSs are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower. In 2023, the Department of Homeland Security’s Inspector General released a troubling report detailing how federal agencies like ICE, its subcomponent Homeland Security Investigations (HSI), and the Secret Service have conducted surveillance using CSSs without proper authorization and in violation of the law. Specifically, the Inspector General found that these agencies did not adhere to federal privacy policy governing the use of CSSs and failed to obtain the special orders required before using these types of surveillance devices.

Law enforcement agencies along the border can pour money into overlapping systems of surveillance that monitor entire communities living along the border thanks in part to Operation Stonegarden (OPSG), a Department of Homeland Security (DHS) grant program, which rewards state and local police for collaborating in border security initiatives. DHS doled out $90 million in OPSG funding in 2023, $37 million of which went to Texas agencies. These programs are especially alarming to human rights advocates due to recent legislation passed in Texas to allow local and state law enforcement to take immigration enforcement into their own hands.

As a ubiquitous wireless interface to many of our personal devices and even our vehicles, Bluetooth is a large and notoriously insecure attack surface for hacks and exploits. And as TraffiCatch demonstrates, even when your device’s Bluetooth tech isn’t being actively hacked, it can broadcast uniquely identifiable information that makes you a target for tracking. This is one of the many ways surveillance, and the distrust it breeds in the public over technology and tech companies, hinders progress. Hands-free communication in cars is a fantastic modern innovation. But the fact that it comes at the cost of opening a whole society up to surveillance is a detriment to all.

Matthew Guariglia

EFF Zine on Surveillance Tech at the Southern Border Shines Light on Ever-Growing Spy Network

2 months 2 weeks ago
Guide Features Border Tech Photos, Locations, and Explanation of Capabilities

SAN FRANCISCO—Sensor towers controlled by AI, drones launched from truck-bed catapults, vehicle-tracking devices disguised as traffic cones—all are part of an arsenal of technologies that comprise the expanding U.S. surveillance strategy along the U.S.-Mexico border, revealed in a new EFF zine for advocates, journalists, academics, researchers, humanitarian aid workers, and borderland residents.

Formally released today and available for download online in English and Spanish, “Surveillance Technology at the U.S.-Mexico Border” is a 36-page comprehensive guide to identifying the growing system of surveillance towers, aerial systems, and roadside camera networks deployed by U.S. law enforcement agencies along the Southern border, allowing for the real-time tracking of people and vehicles.

The devices and towers—some hidden, camouflaged, or moveable—can be found in heavily populated urban areas, small towns, fields, farmland, highways, dirt roads, and deserts in California, Arizona, New Mexico, and Texas.

The zine grew out of work by EFF’s border surveillance team, which involved meetings with immigrant rights groups and journalists, research into government procurement documents, and trips to the border. The team located, studied, and documented spy tech deployed and monitored by the Department of Homeland Security (DHS), Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), National Guard, and Drug Enforcement Administration (DEA), often working in collaboration with local law enforcement agencies.

“Our team learned that while many people had an abstract understanding of the so-called ‘virtual wall,’ the actual physical infrastructure was largely unknown to them,” said EFF Director of Investigations Dave Maass. “In some cases, people had seen surveillance towers, but mistook them for cell phone towers, or they’d seen an aerostat flying in the sky and not known it was part of the U.S. border strategy.

“That's why we put together this zine; it serves as a field guide to spotting and identifying the large range of technologies that are becoming so ubiquitous that they are almost invisible,” said Maass.

The zine also includes a copy of EFF’s pocket guide to crossing the U.S. border and protecting information on smartphones, computers, and other digital devices.

The zine is available for republication and remixing under EFF’s Creative Commons Attribution License and features photography by Colter Thomas and Dugan Meyer, whose exhibit “Infrastructures of Control,” which incorporates some of EFF’s border research, opened in April at the University of Arizona. EFF has previously released a gallery of images of border surveillance that are available for publications to reuse, as well as a living map of known surveillance towers that make up the so-called “virtual wall.”

To download the zine:

For more on border surveillance:

For EFF’s searchable Atlas of Surveillance: 


Contact: Dave Maass, Director of Investigations
Karen Gullo

CCTV Cambridge, Addressing Digital Equity in Massachusetts

2 months 2 weeks ago

Here at EFF, digital equity is something that we advocate for, and we are always thrilled when we hear that a member of the Electronic Frontier Alliance is advocating for it as well. Simply put, digital equity is the condition in which everyone has access to technology that allows them to participate in society, whether in rural America or the inner cities—both places where big ISPs don’t find it profitable to invest. EFF has long advocated for affordable, accessible, future-proof internet access for all. I recently spoke with EFA member CCTV Cambridge, which has partnered with the Massachusetts Broadband Institute to tackle this issue and address the digital divide in their state:

How did the partnership with the Massachusetts Broadband Institute come about, and what does it entail?

Mass Broadband Institute and Mass Hire Metro North are the key funding partners. We were moving forward with lifting up digital equity and saw an opportunity to apply for this funding, which is going to several communities in the Metro North area. So, this collaboration was generated in Cambridge for the partners in this digital equity work. Key program activities will entail hiring and training “Digital Navigators” to be placed in the Cambridge Public Library and Cambridge Public Schools, working in partnership with navigators at CCTV and Just A Start. CCTV will employ a coordinator as part of the project, who will serve residents and coordinate the digital navigators across partners to build community, skills, and consistency in support for residents. Regular meetings will be coordinated for Digital Navigators across the city to share best practices, discuss challenging cases, exchange community resources, and measure impact from data collection. These efforts will align with regional initiatives supported through the Mass Broadband Institute Digital Navigator coalition.

What is CCTV Cambridge’s approach to digital equity and why is it an important issue?

CCTV’s approach to digital equity has always been about people over tech. We really see the Digital Navigators as more like digital social workers than IT people, in the sense that technology is required to be a fully civically engaged human, someone who is connected to your community and family, someone who can have a sense of well-being and safety in the world. We really feel like what digital equity means is not just being able to use the tools, but being able to have access to the tools that make your life better. You really can’t operate in an equal way in the world without access to technology: you can’t make a doctor’s appointment, you can’t talk to your grandkids on Zoom, you can’t even park your car without an app! You can’t be civically engaged without access to tech. We risk marginalizing a bunch of folks if we don’t, as a community, bring them into digital equity work. We’re community media, it’s in our name, and digital equity is the responsibility of the community. It’s not okay to leave people behind.

It’s amazing to see organizations like CCTV Cambridge making a difference in the community, what do you envision as the results of having the Digital Navigators?

Hopefully we’re going to increase community and civic engagement in Cambridge, particularly amongst people who might not have the loudest voice. We’re going to reach people we haven't reached in the past, including people who speak languages other than English and haven’t had exposure to community media. It’s a really great opportunity for intergenerational work which is also a really important community building tool.

How can people both locally in Massachusetts and across the country plug-in and support?

People everywhere are welcomed and invited to support this work through donations, which you can do by visiting our website! When applications open for the Digital Navigators, share them in your networks with people you think would love to do this work; spread the word on social media and follow us on all platforms @cctvcambridge! 

Christopher Vines

The U.S. House Version of KOSA: Still a Censorship Bill

2 months 2 weeks ago

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it. 

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. Those requirements will drive away both minors and adults who either lack the proper ID or value their privacy and anonymity.   

The House version of KOSA fails to resolve these fundamental censorship problems.



Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. This version does not alleviate that concern. For example, this version of KOSA could still: 

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S.; 
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the "duty of care" requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are held to different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or "high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would be liable only where they have actual knowledge that a user is under 17 years old. 

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could potentially narrow the number of platforms subject to KOSA's enforcement by reducing some of the burden on websites that aren't primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category, or is image-sharing its predominant purpose? What about TikTok, which mixes content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges, the mere threat of which tends to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws.  It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and further harms young people by limiting their access to critical information and resources. 

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last "must-pass" legislation until the fall. This would effectively bypass public discussion of the House version. Just last month, Congress attached another contentious, potentially unconstitutional bill to unrelated legislation by including a TikTok ban in a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits. 

We continue to oppose KOSA—in both its House and Senate forms—and urge legislators to instead pursue alternatives, such as comprehensive federal privacy legislation, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.  



Molly Buckley