Advance Payment of the Special Local Allocation Tax (December Allocation) for the Large-Scale Fire in Saganoseki, Oita City, on November 18, 2025 (Reiwa 7)
Information and Communications Council, Information and Communications Technology Subcommittee, IP Network Facilities Committee (91st Meeting)
Information and Communications Council, Information and Communications Technology Subcommittee, Satellite Communications Systems Committee, 700 MHz Band Satellite Direct Communications Study Working Group (2nd Meeting)
Information and Communications Administration / Postal Administration Council, Postal Administration Subcommittee (100th Meeting): Handouts, Summary of Proceedings, and Minutes
Telecommunications Dispute Settlement Commission (256th Meeting)
✋ Get A Warrant | EFFector 37.17
Even with the holidays coming up, the digital rights news doesn't stop. Thankfully, EFF is here to keep you up-to-date with our EFFector newsletter!
In our latest issue, we’re explaining why politicians’ latest attempts to ban VPNs are a terrible idea; asking supporters to file public comments opposing new rules that would make bad patents untouchable; and sharing a privacy victory: Sacramento has been forced to end its dragnet surveillance of power meter data.
Prefer to listen in? Check out our audio companion, where EFF Surveillance Litigation Director Andrew Crocker explains our new lawsuit challenging the warrantless mass surveillance of drivers in San Jose. Catch the conversation on YouTube or the Internet Archive.
EFFECTOR 37.17 - ✋ GET A WARRANT
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
[2025 JCJ Award Winner's Speech] Connecting Nanjing and Okinawa so there is no 'new prewar era' (Mariko Nakamura, The Ryukyu Shimpo)
Kenya Community-Centred Connectivity Initiatives. Strategic Plan 2025-2027
From silence to signal: Nigerian women building digital pathways to justice
Rights Organizations Demand Halt to Mobile Fortify, ICE's Handheld Face Recognition Program
Mobile Fortify, the new app that Immigration and Customs Enforcement (ICE) uses to identify people with face recognition technology (FRT) during street encounters, is an affront to the rights and dignity of migrants and U.S. citizens alike. That's why a coalition of privacy, civil liberties, and civil rights organizations is demanding that the Department of Homeland Security (DHS) shut down the use of Mobile Fortify, release the agency's privacy analyses of the app, and clarify the agency's policy on face recognition.
As the organizations, including EFF, Asian Americans Advancing Justice and the Project on Government Oversight, write in a letter sent by EPIC:
ICE’s reckless field practices compound the harm done by its use of facial recognition. ICE does not allow people to opt-out of being scanned, and ICE agents apparently have the discretion to use a facial recognition match as a definitive determination of a person’s immigration status even in the face of contrary evidence. Using face identification as a definitive determination of immigration status is immensely disturbing, and ICE’s cavalier use of facial recognition will undoubtedly lead to wrongful detentions, deportations, or worse. Indeed, there is already at least one reported incident of ICE mistakenly determining a U.S. citizen “could be deported based on biometric confirmation of his identity.”
As if this dangerous use of nonconsensual face recognition isn't bad enough, Mobile Fortify also queries a wide variety of government databases. Already there have been reports that federal officers may be using this FRT to target protesters engaging in First Amendment-protected activities. Yet ICE concluded it did not need to conduct a new Privacy Impact Assessment, which is standard practice for proposed government technologies that collect people's data.
While Mobile Fortify is the latest iteration of ICE’s mobile FRT, EFF has been tracking this type of technology for more than a decade. In 2013, we identified how a San Diego agency had distributed face recognition-equipped phones to law enforcement agencies across the region, including federal immigration officers. In 2019, EFF helped pass a law temporarily banning the collection of biometric data with mobile devices, which brought the program to an end.
We fought against handheld FRT then, and we will fight it again today.
Privacy is For the Children (Too)
In the past few years, governments across the world have rolled out different digital identification options, and there are now efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the third in a short series explaining digital ID and the pending use case of age verification. Here, we cover alternative frameworks for age controls, updates on parental controls, and the importance of digital privacy in an increasingly hostile political climate. You can read the first two posts here and here.
Observable harms of age verification legislation in the UK, US, and elsewhere:
As we witness the effects of the Online Safety Act in the UK and more than 25 state age verification laws in the U.S., it has become even more apparent that mandatory age verification is more of a detriment than a benefit to the public. Here’s what we’re seeing:
- Scope creep is already occurring, with website shutdowns and excessive verification requests from a growing range of sites.
- Data is being handed to age assurance providers with questionable privacy policies.
- Older teenagers (voting age is set to be 16 in the UK) are having their ability to communicate online blocked on various platforms.
- Adults aren’t able to use the internet to access content that is lawfully available to them if they do not agree to third-party usage of their information.
It’s obvious: age verification will not keep children safe online. Rather, it is a large proverbial hammer that nails everyone—adults and young people alike—into restrictive parameters of what the government deems appropriate content. That reality is more obvious and tangible now that we’ve seen age-restrictive regulations roll out in various states and countries. But that doesn’t have to be the future if we turn away from age-gating the web.
Keeping kids safe online (or anywhere IRL, let’s not forget) is a complex social issue that cannot be resolved with technology alone.
The legislators responsible for online age verification bills must confront that they are currently addressing complex social issues with a problematic array of technology. Most of policymakers’ concerns about minors' engagement with the internet can be sorted into one of three categories:
- Content risks: The negative implications from exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm.
- Conduct risks: Behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information, or problematic overuse of a service.
- Contact risks: The potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material.
These three categories of possible risks will not be eliminated by mandatory age verification—or any form of techno-solutionism, for that matter. Mandatory age checks will instead block access to vital online communities and resources for those people—including young people—who need them the most. It’s an ineffective and disproportionate tool to holistically address young people’s online safety.
However, these can be partially addressed with better-utilized and better-designed parental controls and family accounts. Existing parental controls are woefully underutilized, according to one survey that collected answers from 1,000 parents. Adoption of parental controls varied widely, from 51% on tablets to 35% on video game consoles. Making parental controls more flexible and accessible, so parents better understand the tools and how to use them, could increase adoption and address content risk more effectively than a broad government censorship mandate.
Recently, Android made its parental controls easier to set up. It rolled out features that directly address content risk by assisting parents who wish to block specific apps and filter out mature content from Google Chrome and Google Search. Apple also updated its parental controls settings this past summer, instituting new ways for parents to manage child accounts and giving app developers access to a Declared Age Range API, which lets a parent declare an age range that apps can respond to in child accounts without handing over a birthdate. This gives parents some flexibility, such as sharing age-range information beyond just 13+. A diverse range of tools and flexible settings provides the best options for families and empowers parents and guardians to decide and tailor what online safety means for their own children, at any age, maturity level, or type of individual risk.
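To make the pattern concrete, here is a minimal sketch of the idea described above: a guardian declares a coarse age range on a child account, and an app filters its own content against that range without ever seeing a birthdate. The type and function names below (DeclaredAgeRange, ContentTier, maxTier) are hypothetical illustrations, not Apple's or Google's actual APIs.

```swift
// Hypothetical sketch only: these types do NOT mirror Apple's actual
// Declared Age Range API. They model the general pattern: a guardian
// declares a coarse age range, and the app adapts its content filtering
// to that range without ever learning a birthdate.

// Coarse buckets a guardian might declare for a child account.
enum DeclaredAgeRange {
    case under13
    case teen13to15
    case teen16to17
    case adult18plus
    case undeclared   // no signal shared; the app must pick a safe default
}

// Content tiers defined by the app itself.
enum ContentTier: Int, Comparable {
    case allAges = 0, teen = 1, mature = 2
    static func < (lhs: ContentTier, rhs: ContentTier) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

// Map the declared range to the highest tier the app will show.
// The only signal crossing the app boundary is the coarse bucket.
func maxTier(for range: DeclaredAgeRange) -> ContentTier {
    switch range {
    case .under13, .undeclared:
        return .allAges            // safest default when age is unknown
    case .teen13to15, .teen16to17:
        return .teen
    case .adult18plus:
        return .mature
    }
}

// Usage: filter a feed according to the declared range.
struct Post {
    let title: String
    let tier: ContentTier
}

let feed = [
    Post(title: "Science fair tips", tier: .allAges),
    Post(title: "Online gaming chat", tier: .teen),
    Post(title: "True crime series", tier: .mature),
]

let declared: DeclaredAgeRange = .teen13to15
let visible = feed.filter { $0.tier <= maxTier(for: declared) }
print(visible.map(\.title))   // ["Science fair tips", "Online gaming chat"]
```

The design point this sketch illustrates is that the app receives only a coarse, guardian-declared bucket, and anything undeclared falls back to the most restrictive tier, so no birthdate or identity document ever needs to be collected.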
Privacy laws can also help minors online.
Parental controls are useful in the hands of responsible guardians. But what about children who are neglected or abused by those in charge of them? Age verification laws cannot solve this problem; they simply extend the potential for abuse of power to the state. To address social issues, we need more efforts directed at the family and community structures around young people, and initiatives that can mitigate the risk factors of abuse instead of resorting to government control over speech.
While age verification is not the answer, those seeking legislative solutions can instead focus their attention on privacy laws, which are more than capable of assisting minors online, no matter the state of their at-home care. Comprehensive data privacy legislation, which EFF has long advocated for, is perhaps the most obvious way to keep the data of young people safe online. Data brokers gather a vast amount of data and assemble new profiles of information as a young person uses the internet. These data sets also contribute to surveillance and teach minors that it is normal to be tracked as they use the web. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do and sell it to whoever will buy it. For example, many age-checking tools use data brokers to perform “age estimation” on the emails used to sign up for an online service, further incentivizing a vicious cycle of data collection and retention. Ultimately, privacy-encroaching companies are rewarded for years of mishandling our data with lucrative government contracts.
Over time, these systems create much more privacy risk for young people, both online and offline, through pervasive surveillance and in authoritarian political climates. Age verification proponents often acknowledge that there are privacy risks, but dismiss the consequences by claiming the trade-off will “protect children.” These systems don’t foster safer online practices for young people; they encourage increasingly invasive ways for governments to define who is and isn’t free to roam online. If we don’t re-establish ways to maintain online anonymity today, our children’s internet could become unrecognizable and unusable, not only for them but for many adults as well.
Actions you can take today to protect young people online:
- Use existing parental controls to decide for yourself what your kid should and shouldn’t see, who they should engage with, etc.
- Discuss the importance of online privacy and safety with your kids and community.
- Provide spaces and resources for young people to flexibly communicate with their schools, guardians, and community.
- Support comprehensive privacy legislation for all.
- Support legislators’ efforts to regulate the out-of-control data broker industry by banning behavioral ads.
Join EFF in opposing mandatory age verification and age gating laws—help us keep your kids safe and protect the future of the internet, privacy, and anonymity.