You Went to a Drag Show—Now the State of Florida Wants Your Name

2 weeks 4 days ago

If you thought going to a Pride event or drag show was just another night out, think again. If you were in Florida, it might land your name in a government database.

That’s what’s happening in Vero Beach, FL, where the Florida Attorney General’s office has subpoenaed a local restaurant, The Kilted Mermaid, demanding surveillance video, guest lists, reservation logs, and contracts of performers and other staff—all because the venue hosted an LGBTQ+ Pride event.

To be clear: no one has been charged with a crime, and the law Florida is likely leaning on here—the so-called “Protection of Children Act” (which was designed to be a drag show ban)—has already been blocked by federal courts as likely unconstitutional. But that didn’t stop Attorney General James Uthmeier from pushing forward anyway. Without naming a specific law that was violated, the AG’s press release used pointed and accusatory language, stating that "In Florida, we don't sacrifice the innocence of children for the perversions of some demented adults.” His office is now fishing for personal data about everyone who attended or performed at the event. This should set off every civil liberties alarm bell we have.

Just like the Kids Online Safety Act (KOSA) and other bills with misleading names, this isn’t about protecting children. It’s about using the power of the state to intimidate people government officials disagree with, and to censor speech that is both lawful and fundamental to American democracy.

Drag shows—many of which are family-friendly and feature no sexual content—have become a political scapegoat. And while that rhetoric might resonate in some media environments, the real-world consequences are much darker: state surveillance of private citizens doing nothing but attending a fun community celebration. By demanding surveillance video, guest lists, and reservation logs, the state isn’t investigating a crime; it is trying to scare people away from attending a legal gathering. These are people who showed up at a public venue for a lawful event, at a time when no law restricting it was even in effect.

The Supreme Court has ruled multiple times that subpoenas forcing disclosure of members of peaceful organizations have a chilling effect on free expression. Whether it’s a civil rights protest, a church service, or, yes, a drag show: the First Amendment protects the confidentiality of lists of attendees.

Even if the courts strike down this subpoena—and they should—the damage will already be done. A restaurant owner (who also happens to be the town’s vice mayor) is being dragged into a state investigation. Performers’ identities are potentially being exposed—whether to state surveillance, inclusion in law enforcement databases, or future targeting by anti-LGBTQ+ groups. Guests who thought they were attending a fun community event are now caught up in a legal probe. These are the kinds of chilling, damaging consequences that will discourage Floridians from hosting or attending drag shows, and could stamp out the art form entirely. 

EFF has long warned about this kind of mission creep: where a law or policy supposedly aimed at public safety is turned into a tool for political retaliation or mass surveillance. Going to a drag show should not mean you forfeit your anonymity. It should not open you up to surveillance. And it absolutely should not land your name in a government database.

Rindala Alajaji

[B] [Visions of the Sanseito Party, Part 1] The Madness and Violence Born of "Japanese First"

2 weeks 4 days ago
The 2025 House of Councillors election ended with a major breakthrough for the Sanseito party. European media are reporting the emergence of an influential far-right party in Japan. Opinions are split on what comes next: whether the far right will now occupy a corner of this country's political power, or whether this is its peak and it will eventually melt away. Either way, the queasiness of watching such a political force swell up almost overnight is hard to shake. (Kazuoki Ohno)
Nikkan Berita

[Notice] "The Setagaya Ward History Compilation Problem: A Record of the Fight over Authors' Moral Rights" Is Complete! = Publishing Section

2 weeks 4 days ago
Shuppan Nets has produced a report, "The Setagaya Ward History Compilation Problem: A Record of the Fight over Authors' Moral Rights," to preserve a record of the course and outcome of the Setagaya ward history compilation dispute, which was fought over two years. The book consists of a main part and a reference part. The main part carries messages from the people who lent their support to this fight; we offer our heartfelt thanks to everyone who sent one. "PART 2: The Significance and Trajectory of the Fight" describes what strategy Shuppan Nets adopted and how it fought. "PART 4: Records of..
JCJ

A popular recipe for digital sovereignty

2 weeks 4 days ago
The Movimento dos Trabalhadores Sem Teto (Homeless Workers’ Movement – MTST) is a grassroots urban movement in Brazil. Between 2020 and 2021, in response to the social and health crisis worsened by…
Fábio Neves with the collaboration of Criz

Just Banning Minors From Social Media Is Not Protecting Them

2 weeks 4 days ago

By publishing its guidelines under Article 28 of the Digital Services Act, the European Commission has taken a major step towards social media bans that will undermine young people's privacy, expression, and participation rights, rights that are already enshrined in international human rights law.

EFF recently submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: Online safety for young people must include privacy and security for them and must not come at the expense of freedom of expression and equitable access to digital spaces.

Article 28 requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. But the article also prohibits targeting minors with personalized ads, a measure that would seem to require that platforms know that a user is a minor. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy and requiring platforms to know the age of every user. The DSA does not resolve this tension. Rather, it states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. 

Thus, the question of age checks is key to understanding the obligations of online platforms to safeguard minors online. Our submission explained the serious risks that age checks pose to the rights and security of minors. All methods for conducting age checks come with serious drawbacks. Approaches to verify a user's age generally involve some form of government-issued ID document, which millions of people in Europe—including migrants, members of marginalized groups, unhoused people, exchange students, refugees, and tourists—may not have access to.

Other age assurance methods, like biometric age estimation or age estimation based on email addresses or user activity, involve the processing of vast amounts of personal, sensitive data, usually in the hands of third parties. Beyond being potentially exposed to discrimination and erroneous estimates, users are asked to trust platforms' opaque supply chains and hope for the best. Age assurance methods always impact the rights of children and teenagers: their rights to privacy and data protection, free expression, information, and participation.

The Commission's guidelines contain a wealth of measures elucidating the Commission's understanding of "age appropriate design" of online services. We have argued that some of them, including privacy-protective default settings, effective content moderation, and ensuring that recommender systems don't rely on the collection of behavioral data, are practices that would benefit all users.

But while the Commission's initial draft treated age checks merely as a tool to determine users' ages so that their online experiences could be tailored accordingly, the final guidelines go far beyond that. Crucially, the European Commission now seems to consider “measures restricting access based on age to be an effective means to ensure a high level of privacy, safety and security for minors on online platforms” (page 14).

This is a surprising turn, as many in Brussels have considered social media bans like the one Australia passed (and still doesn't know how to implement) to be disproportionate. In response to mounting pressure from Member States like France, Denmark, and Greece to ban young people under a certain age from social media platforms, the guidelines contain an opening clause for national rules on age limits for certain services. According to the guidelines, the Commission considers such access restrictions appropriate and proportionate where “union or national law, (...) prescribes a minimum age to access certain products or services (...), including specifically defined categories of online social media services”. This opens the door to different national laws introducing different age limits for services like social media platforms.

It’s concerning that the Commission generally considers the use of age verification proportionate in any situation where a provider of an online platform identifies risks to minors’ privacy, safety, or security and those risks “cannot be mitigated by other less intrusive measures as effectively as by access restrictions supported by age verification” (page 17). This view risks establishing a broad legal mandate for age verification measures.

It is clear that such bans will do little to make the internet a safer space for young people. Banning a particularly vulnerable group of users from accessing platforms lets the providers themselves off the hook: if it is enough for platforms like Instagram and TikTok to implement (comparatively cheap) age restriction tools, they no longer have any incentive to actually make their products and features safer for young people. Banning a certain user group changes nothing about problematic privacy practices, insufficient content moderation, or business models built on the exploitation of people's attention and data. And since teenagers will always find ways to circumvent age restrictions, those who do will be left without any protections or age-appropriate experiences.

Svea Windwehr

[Notice] The World of the Media Today: JCJ Introductory Journalism Course for Students, Seven Sessions from August 30 to November 15, Now Recruiting Participants!

2 weeks 5 days ago
■ For students: the "JCJ Introductory Journalism Course," August to November 2025 (seven sessions). Is a reporter's job interesting? How can you get to work in the world of newspapers and television? What can reporters do in an age dominated by social media? To answer these kinds of questions and anxieties, the Japan Congress of Journalists (JCJ) is opening an "Introductory Journalism Course" for students this autumn. We especially hope it will be taken by people who are interested but vaguely wonder whether they could thrive in this work. "What is a reporter?" Its appeal and..
JCJ

[Recommended Book] Toshihiro Yoshida, "Reportage: The Military-First Society: 'War Preparations' in Everyday Life": Analyzing Japan's Rapidly Strengthening War Footing from Travels Around the Country = Yasushi Suenami (Journalist)

2 weeks 6 days ago
This book analyzes how the integration of the U.S. military and the Self-Defense Forces is advancing and being reinforced, and provides readers with rich, accessible information on the issue. Regarding the three security documents, including the "National Security Strategy" decided by the Kishida cabinet and now being implemented by the Ishiba cabinet, and the enemy-base strike capability written into them, the book's point that Japan is being used like a sacrificial pawn in America's strategy for attacking China takes an international view of the problem and is illuminating. The reportage from Kyushu and Okinawa, which envisions a war with China over a Taiwan contingency, is especially vivid. The Self-Defense Forces, with name lists of young people aged 18 to 22 from municipal..
JCJ

Zero Knowledge Proofs Alone Are Not a Digital ID Solution to Protecting User Privacy

3 weeks ago

In the past few years, governments across the world have rolled out digital identification options, and there are now efforts to encourage online companies to implement identity and age verification requirements with digital ID in mind. This post is the first in a short series explaining digital ID and the pending use case of age verification. The following posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.

Age verification measures are having a moment, with policymakers in the U.S. and around the world passing legislation that mandates online services and companies to introduce technologies requiring people to verify their identities to access content deemed appropriate for their age. But for most people, physical government documentation like a driver's license, passport, or other ID is not a simple binary of having it or not having it. Physical ID systems involve many factors that affect their accuracy and validity, and everyday situations arise in which identification attributes change or an ID becomes invalid, inaccurate, or in need of reissue: addresses change, driver's licenses expire or have suspensions lifted, and temporary IDs are issued in lieu of permanent identification.

The digital ID systems currently being introduced may solve some problems, like identity fraud against business and government services, but they leave the holder of the digital ID vulnerable to the demands of the companies collecting such information. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims for public assistance and other services as to facilitate them. That's why legal protections are as important as the digital IDs themselves. On top of this, in places that lack comprehensive data privacy legislation, verifiers are not heavily restricted in what they can and can't ask of the holder. In response, some privacy mechanisms have been suggested, but few have been made mandatory; one common promise is that a feature called Zero Knowledge Proofs (ZKPs) will easily solve the privacy problems of sharing ID attributes.

Zero Knowledge Proofs: The Good News

The biggest selling point of modern digital ID offerings, especially to those seeking to solve mass age verification, is the ability to incorporate and share something called a Zero Knowledge Proof (ZKP), which lets a website or mobile application verify ID information without receiving the ID itself or the information explicitly on it. ZKPs provide a cryptographic way to avoid giving something away, like the exact date of birth on your ID, and instead offer a “yes-or-no” claim (such as above or below 18) to a verifier enforcing a legal age threshold. More specifically, two properties of ZKPs are “soundness” and “zero knowledge.” Soundness appeals to verifiers and governments because it makes it hard for an ID holder to present forged information (someone presenting forged information won't know the underlying “secret”). Zero knowledge can benefit the holder, because they don't have to share explicit information like a birth date, just cryptographic proof that the information exists and is valid. There have been recent announcements from major tech companies like Google, which plans to integrate ZKPs for age verification and “where appropriate in other Google products”.
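To make “soundness” and “zero knowledge” slightly more concrete, here is a minimal, deliberately insecure Python sketch of a Schnorr-style interactive proof, the classic textbook example of a ZKP. This is not how deployed digital ID wallets or Google's announced system implement age claims (those use standardized credential formats, non-interactive proofs, and much larger parameters); the toy parameters and helper functions below are ours, for illustration only.

```python
# Toy Schnorr identification protocol: the prover convinces the verifier
# that it knows a secret x with y = g^x mod p, without ever revealing x.
# Parameters are deliberately tiny and insecure -- a sketch of the idea only.
import secrets

p, q, g = 23, 11, 2          # toy group: g generates a subgroup of prime order q mod p

# Prover's secret and the public value derived from it
x = secrets.randbelow(q)      # secret (think: a credential's private key)
y = pow(g, x, p)              # public: anyone may see y, but cannot recover x

def prover_commit():
    """Prover picks a random nonce r and sends the commitment t = g^r mod p."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)

def prover_respond(r, challenge):
    """Prover answers the verifier's challenge: s = r + c*x mod q."""
    return (r + challenge * x) % q

def verifier_check(t, challenge, s):
    """Soundness check: accept only if g^s == t * y^c mod p."""
    return pow(g, s, p) == (t * pow(y, challenge, p)) % p

# One round of the interactive proof
r, t = prover_commit()            # 1. prover sends commitment
c = secrets.randbelow(q)          # 2. verifier sends a random challenge
s = prover_respond(r, c)          # 3. prover responds
print(verifier_check(t, c, s))    # True: the verifier is convinced the prover
                                  # knows x, yet learns nothing about x itself
```

In an age-verification setting the statement being proven is a predicate over a signed credential (for example, that the birth date it contains falls before a given cutoff) rather than knowledge of a single secret, and the proof is typically non-interactive, but the same soundness and zero-knowledge properties are what make the “yes-or-no” claim trustworthy without revealing the underlying data.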

Zero Knowledge Proofs: The Bad News

What ZKPs don't do is mitigate verifier abuse or limit verifiers' requests, such as over-asking for information they don't need or requesting your age again and again over time. They also don't prevent websites or applications from collecting other kinds of observable personally identifiable information, like your IP address or other device information, while you interact with them.

ZKPs are a great tool for sharing less data about ourselves, whether over time or in a one-time transaction. But they don't do much about the data broker industry, which already holds massive profiles on people. We understand this is not what ZKPs for age verification were presented to solve. But it is still imperative to point out that using this technology to share even more about ourselves online through mandatory age verification widens the scope of sharing in an already saturated ecosystem of easily linked personal information. Going from presenting your physical ID maybe two or three times a week to potentially proving your age to multiple websites and apps every day will make going online itself a burden at minimum, and an outright barrier for those who can't obtain an ID at all.

Protecting The Way Forward

Mandatory age verification takes the potential privacy benefits of mobile ID and proposed ZKP solutions and warps them into speech-chilling mechanisms.

Until the hard questions of power imbalances with potentially abusive verifiers and of preventing "phoning home" to ID issuers are addressed, these systems should not be pushed forward without proper protections in place. A more private, holder-centric ID requires more than ZKPs as a catch-all for privacy concerns. Safety online is not solved through technology alone; it involves multiple, ongoing conversations. Yes, that sounds harder than imposing age checks on everyone online. Maybe that's why age checks are so tempting to implement. But we encourage policymakers and lawmakers to pursue what is best, not what is easy.

Alexis Hancock