California, Tell Governor Newsom: Regulate AI Police Reports and Sign S.B. 524
The California legislature has passed a necessary piece of legislation, S.B. 524, which starts to regulate police reports written by generative AI. Now it's up to us to make sure Governor Newsom signs the bill.
We must make our voices heard. These technologies can obscure records and drafts from public disclosure, and vendors have invested heavily in their ability to sell genAI to police.
AI-generated police reports are spreading rapidly. The most popular product on the market is Draft One, made by Axon, which is already one of the country’s biggest purveyors of police tech, including body-worn cameras. By bundling its products together, Axon has capitalized on its existing customer base to spread an opaque and potentially harmful genAI product.
Many things can go wrong when genAI is used to write narrative police reports. First, because the product relies on body-worn camera audio, there is a significant chance that the AI draft will miss context such as sarcasm, slang and culturally specific vocabulary, or speech in languages other than English. Police are expected to edit the AI’s version of events to correct these flaws, but many officers will simply defer to the AI. Police are also supposed to make an independent decision before arresting a person identified by face recognition, and they get that wrong all the time. The prosecutor of King County, Washington, has forbidden local police from using Draft One out of concern that it is unreliable.
Then, of course, there’s the matter of dishonesty. Many public defenders and criminal justice practitioners have voiced concerns about what this technology will do to cross-examination. If caught on the stand with a different story than the one in their police report, an officer can easily say, “the AI wrote that and I didn’t edit it well enough.” The genAI creates a layer of plausible deniability: carelessness is a very different offense from lying on the stand.
To make matters worse, an EFF investigation found that Axon’s Draft One product defies transparency by design. The technology is deliberately built to obscure which portions of a finished report were written by AI and which were written by an officer, making it difficult to determine whether an officer is telling the truth about what the AI produced.
But now California has an important chance to join other states, like Utah, that are passing laws to rein in these technologies and to set the minimum safeguards and transparency that must accompany their use.
S.B. 524 does several important things. It mandates that police reports written by AI include a disclaimer, on every page or within the body of the text, making clear that the report was written in part or in whole by a computer. It requires that the AI’s first draft be retained, so that defense attorneys, judges, police supervisors, or any other auditing entity can see which portions of the final report were written by AI and which were written by the officer. It requires officers to sign and verify that they have read the report and that its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.
These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, Axon’s Draft One would be out of compliance with this bill, which would require Axon to redesign its tool to make it more transparent, a small win for communities everywhere.
So now we’re asking you: help us make a difference. Use EFF’s Action Center to tell Governor Newsom to sign S.B. 524 into law!
Our Stop Censoring Abortion Campaign Uncovers a Social Media Censorship Crisis
This is the first installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the receipts.
For months, EFF has been investigating stories from users whose abortion-related content has been taken down or otherwise suppressed by major social media platforms. In collaboration with our allies—including Plan C, Women on Web, Reproaction, and Women First Digital—we launched the #StopCensoringAbortion campaign to collect and amplify these stories.
Submissions came from a variety of users, including personal accounts, influencers, healthcare clinics, research organizations, and advocacy groups from across the country and abroad—a spectrum that underscores the wide reach of this censorship. Since the start of the year, we’ve seen nearly 100 examples of abortion-related content taken down by social media platforms.
We analyzed these takedowns, deletions, and bans, comparing the content to what platform policies allow—particularly those of Meta—and found that almost none of the submissions we received violated any of the platforms’ stated policies. Most of the censored posts simply provided factual, educational information. This Threads post is a perfect example:
Screenshot submitted by Lauren Kahre to EFF
In this post, health policy strategist Lauren Kahre discussed abortion pills’ availability via mail. She provided factual information about two FDA-approved medications (mifepristone and misoprostol), including facts like shelf life and how to store the pills safely.
Lauren’s post doesn’t violate any of Meta’s policies and shouldn’t have been removed. But don’t just take our word for it: Meta has publicly insisted that posts like these should not be censored. In a February 2024 letter to Amnesty International, Meta Human Rights Policy Director Miranda Sissons wrote: “Organic content (i.e., non paid content) educating users about medication abortion is allowed and does not violate our Community Standards. Additionally, providing guidance on legal access to pharmaceuticals is allowed.”
Still, shortly after Lauren shared this post, Meta took it down. Perhaps even more perplexing was their explanation for doing so. According to Meta, the post was removed because “[they] don’t allow people to buy, sell, or exchange drugs that require a prescription from a doctor or a pharmacist.”
Screenshot submitted by Lauren Kahre to EFF
In the submissions we received, this was the most common reason Meta gave for removing abortion-related content. The company frequently claimed that posts violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.”
Yet in Lauren’s case and others, the posts very clearly did no such thing. And as Meta itself has explained: “Providing guidance on how to legally access pharmaceuticals is permitted as it is not considered an offer to buy, sell or trade these drugs.”
In fact, Meta’s policies on Restricted Goods & Services further state: “We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.” Also, “Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements.”
Over and over again, the policies say one thing, but the actual enforcement says another.
We spoke with multiple Meta representatives to share these findings. We asked hard questions about their policies and about the gap between what those policies say and how they are being enforced. Unfortunately, we mostly came away with the same concerns, but we’re continuing to push Meta to do better.
In the coming weeks, we will share a series of blogs further examining trends we found, including stories of unequal enforcement, where individuals and organizations needed to rely on internal connections at Meta to get wrongfully censored posts restored; examples of account suspensions without sufficient warnings; an exploration of Meta’s ad policies; practical tips for users to avoid being censored; and concrete steps platforms should take to reform their abortion content moderation practices. For a preview, we’ve already shared some of our findings with Barbara Ortutay at The Associated Press, whose report on some of these takedowns was published today.
We hope this series highlighting examples of abortion content censorship will help the public and the platforms understand the breadth of this problem, who is affected, and with what consequences. These stories collectively underscore the urgent need for platforms to review and consistently enforce their policies in a fair and transparent manner.
With reproductive rights under attack both in the U.S. and abroad, sharing accurate information about abortion online has never been more critical. Together, we can hold platforms like Meta accountable, demand transparency in moderation practices, and ultimately stop the censorship of this essential, sometimes life-saving information.
This is the first post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion