Axon Tests Face Recognition on Body-Worn Cameras

3 months 2 weeks ago

Axon Enterprise Inc. is working with a Canadian police department to test the addition of face recognition technology (FRT) to its body-worn cameras (BWCs). This is an alarming development in government surveillance that should put communities everywhere on alert. 

As many as 50 officers from the Edmonton Police Service (EPS) will begin using these FRT-enabled BWCs today as part of a proof-of-concept experiment. EPS is the first police department in the world to use these Axon devices, according to a report from the Edmonton Journal.

This kind of technology could give officers instant identification of any person who crosses their path. During the current trial period, the Edmonton officers will not be notified in the field of an individual’s identity but will instead review identifications generated by the BWCs afterward.

“This Proof of Concept will test the technology’s ability to work with our database to make officers aware of individuals with safety flags and cautions from previous interactions,” as well as “individuals who have outstanding warrants for serious crime,” the Edmonton police said in a press release, suggesting that individuals will be placed on a watchlist of sorts.

FRT brings a rash of problems. It relies on extensive surveillance and the collection of images of individuals, law-abiding or otherwise. Misidentifications can have horrendous consequences, including prolonged and difficult fights to prove innocence and wrongful incarceration for crimes never committed. In a world where police use real-time face recognition, law-abiding individuals, or those engaged in legal, protected activity that police may find objectionable, like protest, could be quickly identified.

With the increasing connections being made between disparate data sources about nearly every person, BWCs enabled with FRT can easily connect a person minding their own business, who happens to come within view of a police officer, with a whole slew of other personal information. 

Axon had previously claimed it would pause the addition of face recognition to its tools due to concerns raised in 2019 by the company’s AI and Policing Technology Ethics Board. However, since then, the company has continued to research and consider the addition of FRT to its products. 

This BWC-FRT integration signals that other FRT integrations may follow. Axon is building an entire arsenal of cameras and surveillance devices for law enforcement, and the company grows the reach of its police surveillance apparatus, in part, by leveraging relationships with its thousands of customers, including those using its flagship product, the Taser. This so-called “ecosystem” of surveillance technology includes the Fusus system, a platform that connects surveillance cameras to facilitate real-time viewing of video footage. It also involves expanding the use of surveillance tools like BWCs and the flying cameras of “drone as first responder” (DFR) programs.

Face recognition undermines individual privacy, and it is too dangerous when deployed by police. Communities everywhere must move to protect themselves and safeguard their civil liberties, insisting on transparency, clear policies, public accountability, and audit mechanisms. Ideally, communities should ban police use of the technology altogether. At a minimum, police must not add FRT to BWCs.

Beryl Lipton

After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know

3 months 2 weeks ago

After a years-long battle, the Council of the EU, representing EU Member States, has at last agreed on a position on the European Commission’s “Chat Control” plan, which would mandate mass scanning and other encryption-breaking measures. The good news is that the most controversial part, the forced requirement to scan encrypted messages, is out. The bad news is there’s more to it than that.

Chat Control has gone through several iterations since it was first introduced, with the EU Parliament backing a position that protects fundamental rights, while the Council of the EU spent many months pursuing an intrusive, law-enforcement-focused approach. Many proposals earlier this year required the scanning and detection of illicit content on all services, including private messaging apps such as WhatsApp and Signal, a requirement that would fundamentally break end-to-end encryption.

Thanks to the tireless efforts of digital rights groups, including European Digital Rights (EDRi), we won a significant improvement: the Council agreed on its position, which removed the requirement that forces providers to scan messages on their services. It also comes with strong language to protect encryption, which is good news for users.

But here comes the rub: first, the Council’s position allows for “voluntary” detection, where tech platforms can scan personal messages that aren’t end-to-end encrypted. Unlike in the U.S., which has no comprehensive federal privacy law, such voluntary scanning is not technically legal in the EU, though it has been possible through a derogation set to expire in 2026. It is unclear how this will play out over time, but we are concerned that this approach will lead to private mass scanning of non-encrypted services and might limit the sorts of secure communication and storage services big providers offer. With limited transparency and oversight, it will be difficult to know how services approach this sort of detection.

With mandatory detection orders off the table, the Council has embraced another worrying system to protect children online: risk mitigation. Providers will have to take “all reasonable mitigation measures” to reduce risks on their services, including age verification and age assessment measures. We have written about the perils of age verification schemes and recent developments in the EU, where regulators are increasingly focusing on age verification to reduce online harms.

If secure messaging platforms like Signal or WhatsApp are required to implement age verification methods, it would fundamentally reshape what it means to use these services privately. Encrypted communication tools should be available to everyone, everywhere, of all ages, freely and without the requirement to prove their identity. As age verification has started to creep in as a mandatory risk mitigation measure under the EU’s Digital Services Act in certain situations, it could become a de facto requirement under the Chat Control proposal if the wording is left broad enough for regulators to treat it as a baseline. 

Likewise, the Council’s position lists “voluntary activities” as a potential risk mitigation measure. Pull the thread on this and you’re left with a contradictory stance, because an activity is no longer voluntary if it forms part of a formal risk management obligation. While courts might interpret its mention in a risk assessment as an optional measure available to providers that do not use encrypted communication channels, this reading is far from certain, and the current language will, at a minimum, nudge non-encrypted services to perform voluntary scanning if they don’t want to invest in alternative risk mitigation options. It’s largely up to the provider to choose how to mitigate risks, but it’s up to enforcers to decide what is effective. Again, we're concerned about how this will play out in practice.

For the same reason, clear and unambiguous language is needed to prevent authorities from taking a hostile view of what is meant by “allowing encryption” and then expecting service providers to implement client-side scanning. We welcome the clear assurance in the text that encryption cannot be weakened or bypassed, including through any requirement to grant access to protected data, but even greater clarity would come from an explicit statement that client-side scanning cannot coexist with encryption.

As we approach the final “trilogue” negotiations of this regulation, we urge EU lawmakers to work on a final text that fully protects users’ right to private communication and avoids intrusive age-verification mandates and risk benchmark systems that lead to surveillance in practice.

Christoph Schmon

EFF Tells Patent Office: Don’t Cut the Public Out of Patent Review

3 months 3 weeks ago

EFF has submitted its formal comment to the U.S. Patent and Trademark Office (USPTO) opposing a set of proposed rules that would sharply restrict the public’s ability to challenge wrongly granted patents. These rules would make inter partes review (IPR)—the main tool Congress created to fix improperly granted patents—unavailable in most of the situations where it’s needed most.

If adopted, they would give patent trolls exactly what they want: a way to keep questionable patents alive and out of reach.

If you haven’t commented yet, there’s still time. The deadline is today, December 2.

TAKE ACTION

Tell USPTO: The public has a right to challenge bad patents

IPR Is Already Under Siege, And These Rules Would Make It Worse

Since USPTO Director John Squires was sworn into office just over two months ago, we’ve seen the Patent Office take an increasingly aggressive stance against IPR petitions. In a series of director-level decisions, the USPTO has denied patent challengers the chance to be heard—sometimes dozens of them at a time—without explanation or reasoning. 

That reality makes this rulemaking even more troubling. The USPTO is already denying virtually every new petition challenging patents. These proposed rules would cement that closed-door approach and make it harder for challengers to be heard. 

What EFF Told the USPTO

Our comment lays out how these rules would make patent challenges nearly impossible to pursue for small businesses, nonprofits, software developers, and everyday users of technology. 

Here are the core problems we raised:

First, no one should have to give up their court defenses just to use IPR. The USPTO proposal would force defendants to choose: either use IPR and risk losing their legal defenses, or keep their defenses and lose IPR.

That’s not a real choice. Anyone being sued or threatened for patent infringement needs access to every legitimate defense. Patent litigation is devastatingly expensive, and forcing people to surrender core rights in federal court is unreasonable and unlawful.

Second, one early case should not make a bad patent immune forever. Under the proposed rules, if a patent survives any earlier validity fight—no matter how rushed, incomplete, or poorly reasoned—everyone else could be barred from filing an IPR later.

New prior art? Doesn’t matter. Better evidence? Doesn’t matter. 

Congress never intended IPR to be a one-shot shield for bad patents. 

Third, patent owners could manipulate timing to shut down petitions. The rules would let the USPTO deny IPRs simply because a district court case might move faster.

Patent trolls already game the system by filing in courts with rapid schedules. This rule would reward that behavior. It allows patent owners—not facts, not law, not the merits—to determine whether an IPR can proceed. 

IPR isn't supposed to be a race to the courthouse. It’s supposed to be a neutral review of whether the Patent Office made a mistake.

Why Patent Challenges Matter

IPR isn’t perfect, and it doesn’t apply to every patent. But compared to multimillion-dollar federal litigation, it’s one of the only viable tools available to small companies, developers, and the public. It needs to remain open. 

When an overbroad patent gets waved at hundreds or thousands of people—podcasters, app developers, small retailers—IPR is often the only mechanism that can actually fix the underlying problem: the patent itself. These rules would take that option away.

There’s Still Time To Add Your Voice

If you haven’t submitted a comment yet, now is the time. The more people speak up, the harder it becomes for these changes to slip through.

Comments don’t need to be long or technical. A few clear sentences in your own words are enough. We’ve written a short sample comment below. It’s even more powerful if you add a sentence or two describing your own experience. If you mention EFF in your comment, it helps our collective impact. 

TAKE ACTION

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.


Joe Mullin
EFF's Deeplinks Blog: Noteworthy news from around the internet