Tell Governor Newsom: Rein in AI Police Reports

1 week 2 days ago

Californians should urge Gov. Gavin Newsom to sign S.B. 524: a common-sense bill that makes important first-step reforms to regulate police reports written by generative AI. This is crucial, as watchdogs are struggling to figure out where and how AI is being used in a police context.

Electronic Frontier Foundation

California, Tell Governor Newsom: Regulate AI Police Reports and Sign S.B. 524

1 week 2 days ago

The California legislature has passed a necessary piece of legislation, S.B. 524, which starts to regulate police reports written by generative AI. Now it’s up to us to make sure Governor Newsom signs the bill. 

We must make our voices heard. These technologies obscure certain records and drafts from public disclosure, and vendors have invested heavily in their ability to sell genAI to police. 

TAKE ACTION

AI-generated police reports are spreading rapidly. The most popular product on the market is Draft One, made by Axon, which is already one of the country’s biggest purveyors of police tech, including body-worn cameras. By bundling its products together, Axon has capitalized on its existing customer base to spread its opaque and potentially harmful genAI product. 

Many things can go wrong when genAI is used to write narrative police reports. First, because the product relies on body-worn camera audio, there is a real risk that the AI draft will miss context such as sarcasm, culturally specific vocabulary and slang, or speech in languages other than English. While police are expected to edit the AI’s version of events to make up for these flaws, many officers will simply defer to the AI. Police are also supposed to make an independent decision before arresting a person identified by face recognition, and they get that wrong all the time. The prosecutor of King County, Washington, has forbidden local officers from using Draft One out of fear that it is unreliable.
Then, of course, there’s the matter of dishonesty. Many public defenders and criminal justice practitioners have voiced concerns about what this technology would do to cross-examination. If caught telling a different story on the stand than the one in their police report, an officer can easily say, “the AI wrote that and I didn’t edit it well enough.” The genAI creates a layer of plausible deniability: carelessness is a very different offense from lying on the stand. 

To make matters worse, an EFF investigation found that Axon’s Draft One defies transparency by design. The technology is deliberately built to obscure which portions of a finished report were written by AI and which were written by an officer, making it difficult to determine whether an officer is lying about who actually wrote what. 

But now California has an important chance to join states like Utah in passing laws that rein in these technologies and set minimum safeguards and transparency requirements for their use. 

S.B. 524 does several important things. It mandates that police reports written with AI include disclaimers, on every page or within the body of the text, making clear that the report was written in part or in whole by a computer. It also requires that any report written with AI retain its first draft, so that defense attorneys, judges, police supervisors, or any other auditing entity can see which portions of the final report were written by the AI and which were written by the officer. Further, the bill requires officers to sign and verify that they have read the report and that its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.

These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, Axon’s Draft One would be out of compliance with this bill, which would require Axon to redesign the tool to make it more transparent, a small win for communities everywhere. 

So now we’re asking you: help us make a difference. Use EFF’s Action Center to tell Governor Newsom to sign S.B. 524 into law! 

TAKE ACTION

Matthew Guariglia

[Recommended Book] 八木絹, “Traveling the History of Hansen’s Disease Discrimination: Questioning ‘Salvation’” (『ハンセン病差別の歴史を旅する 「救済」への問いかけ』) ― Facing the facts of discrimination and tracing the path toward “salvation” = 霜村三二 (former lecturer, Tsuru University)

1 week 2 days ago
How should I answer the author’s call to “travel the history of Hansen’s disease discrimination”? The author’s straightforward words press upon the reader: the careful, deep pursuit of the “astonishment” and “indignation” one feels when facing the facts of discrimination is what she calls “traveling history.” Discrimination against Hansen’s disease patients in Japan is among the gravest cases of discrimination in the country’s history, inflicting lifelong harm even on patients’ families. And yet the fact that it had never been widely shared across Japanese society as a serious human rights issue was laid bare by the infection-related discrimination of the COVID-19 pandemic. That discriminatory sentiment could be spread so widely, so easily…
JCJ

〈AI and Nuclear Power〉 Kazuhiro Soda (想田和弘)

1 week 2 days ago
I had believed that humanity would surely learn its lesson from the catastrophe of the TEPCO Fukushima Daiichi nuclear accident, and that nuclear power would fade away as an obsolete relic. But an unexpected ambush lay in wait: major U.S. IT companies are now, of all times, eagerly pouring […] into nuclear power.
admin

Our Stop Censoring Abortion Campaign Uncovers a Social Media Censorship Crisis

1 week 3 days ago

This is the first installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the receipts. 

For months, EFF has been investigating stories from users whose abortion-related content has been taken down or otherwise suppressed by major social media platforms. In collaboration with our allies—including Plan C, Women on Web, Reproaction, and Women First Digital—we launched the #StopCensoringAbortion campaign to collect and amplify these stories.  

Submissions came from a variety of users, including personal accounts, influencers, healthcare clinics, research organizations, and advocacy groups from across the country and abroad—a spectrum that underscores the wide reach of this censorship. Since the start of the year, we’ve seen nearly 100 examples of abortion-related content taken down by social media platforms. 

We analyzed these takedowns, deletions, and bans, comparing the content to what platform policies allow—particularly those of Meta—and found that almost none of the submissions we received violated any of the platforms’ stated policies. Most of the censored posts simply provided factual, educational information. This Threads post is a perfect example: 

Screenshot submitted by Lauren Kahre to EFF

In this post, health policy strategist Lauren Kahre discussed abortion pills’ availability via mail. She provided factual information about two FDA-approved medications (mifepristone and misoprostol), including facts like shelf life and how to store the pills safely.  

Lauren’s post doesn’t violate any of Meta’s policies and shouldn’t have been removed. But don’t just take our word for it: Meta has publicly insisted that posts like these should not be censored. In a February 2024 letter to Amnesty International, Meta Human Rights Policy Director Miranda Sissons wrote: “Organic content (i.e., non paid content) educating users about medication abortion is allowed and does not violate our Community Standards. Additionally, providing guidance on legal access to pharmaceuticals is allowed.” 

Still, shortly after Lauren shared this post, Meta took it down. Perhaps even more perplexing was their explanation for doing so. According to Meta, the post was removed because “[they] don’t allow people to buy, sell, or exchange drugs that require a prescription from a doctor or a pharmacist.” 

Screenshot submitted by Lauren Kahre to EFF

In the submissions we received, this was the most common reason Meta gave for removing abortion-related content. The company frequently claimed that posts violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.”  

Yet in Lauren’s case and others, the posts very clearly did no such thing. And as Meta itself has explained: “Providing guidance on how to legally access pharmaceuticals is permitted as it is not considered an offer to buy, sell or trade these drugs.” 

In fact, Meta’s policies on Restricted Goods & Services further state: “We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.” Also, “Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements.” 

Over and over again, the policies say one thing, but the actual enforcement says another. 

We spoke with multiple Meta representatives to share these findings. We asked hard questions about their policies and about the gap between those policies and how they’re being enforced. Unfortunately, we were mostly left with the same concerns, but we’re continuing to push Meta to do better.  

In the coming weeks, we will share a series of blogs further examining trends we found, including stories of unequal enforcement, where individuals and organizations needed to rely on internal connections at Meta to get wrongfully censored posts restored; examples of account suspensions without sufficient warnings; an exploration of Meta’s ad policies; practical tips for users to avoid being censored; and concrete steps platforms should take to reform their abortion content moderation practices. For a preview, we’ve already shared some of our findings with Barbara Ortutay at The Associated Press, whose report on some of these takedowns was published today.  

We hope this series highlighting examples of abortion content censorship will help the public and the platforms understand the breadth of this problem, who is affected, and with what consequences. These stories collectively underscore the urgent need for platforms to review and consistently enforce their policies in a fair and transparent manner.  

With reproductive rights under attack both in the U.S. and abroad, sharing accurate information about abortion online has never been more critical. Together, we can hold platforms like Meta accountable, demand transparency in moderation practices, and ultimately stop the censorship of this essential, sometimes life-saving information. 

This is the first post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion    

Jennifer Pinsof