The 10th Round of Japan-China-Korea Free Trade Agreement (FTA) Negotiations (Director-General/Deputy Director-General Meeting) Will Be Held
Study Group Report "Toward the Realization of a Vibrant 'Vintage Society'" Has Been Compiled
An International Standard for Map Information, Expected to Work in Tandem with Automated Driving, Has Been Published
Corporate Split of Tokyo Electric Power Company Approved Under the Electricity Business Act
Tips to Protect Your Posts About Reproductive Health From Being Removed
This is the ninth installment in a blog series documenting EFF’s findings from the Stop Censoring Abortion campaign. You can read additional posts here.
Meta has been getting content moderation wrong for years, like most platforms that host user-generated content. Sometimes it’s a result of deliberate design choices—privacy rollbacks, opaque policies, features that prioritize growth over safety—made even when the company knows that those choices could negatively impact users. Other times, it’s simply the inevitable outcome of trying to govern billions of posts with a mix of algorithms and overstretched human reviewers. Importantly, users shouldn’t have to worry about their posts being deleted or their accounts getting banned when they share factual health information that doesn’t violate the platforms’ policies. But knowing more about what algorithmic moderation is likely to flag can help you avoid its mistakes.
We analyzed the roughly one hundred survey submissions we received from social media users in response to our Stop Censoring Abortion campaign. Their stories revealed some clear patterns: certain words, images, and phrases seemed to trigger takedowns, even when posts didn’t come close to violating Meta’s rules.
For example, your post linking to information on how people are accessing abortion pills online is clearly not an offer to buy or sell pills. But an algorithm, or a human content reviewer who isn’t sure of the rules, might wrongly flag it for violating Meta’s policies on promoting or selling “restricted goods.”
That doesn’t mean you’re powerless. For years, people have used “algospeak”—creative spelling, euphemisms, or indirection—to sidestep platform filters. Abortion rights advocates are now forced into similar strategies, even when their speech is perfectly legal. It’s not fair, but it might help you keep your content online. Here are some things we learned from our survey:
Practical Tips to Reduce the Risk of Takedowns
While traditional social media platforms can help people reach larger audiences, using them also generally means handing over control of what you and others are able to see to the people who run the company. This is the deal that large platforms offer—and while most of us want platforms to moderate some content (even if that moderation is imperfect), current systems of moderation often reflect existing societal power imbalances and impact marginalized voices the most.
There are ways companies and governments could better balance the power between users and platforms. In the meantime, there are steps you can take right now to break the hold these platforms have:
- Images and keywords matter. Posts with pill images, or accounts with “pill” in their names, were flagged often—even when the posts weren’t offering to sell medication. Before posting, consider whether you really need a pill image or the word “pill,” or whether there’s another way to communicate your message.
- Clarity beats vagueness. Saying “we can help you find what you need” or “contact me for more info” might sound innocuous, but to an algorithm, it can look like an offer to sell drugs. Spell out what kind of support you do and don’t provide—for example: “We can talk through options and point you toward trusted resources. We don’t provide medical services or medication.”
- Be careful with links. Direct links to organizations or services that provide abortion pills were often flagged, even if the organizations operate legally. Instead of linking, try spelling out the name of the site or account.
- Certain word combos are red flags. Posts that included words like “mifepristone,” “abortion,” and “mail” together were frequently removed. You may still want to use them—they’re accurate and important—but know they make your post more likely to be flagged.
- Ads are even stricter. Meta requires pharmaceutical advertisers to prove they’re licensed in the countries they target. If you boost posts, assume the more stringent advertising standards will be applied.
Big platforms give you reach, but they also set the rules—and those rules usually favor corporate interests over human rights. You don’t have to accept that as the only way forward:
- Keep a backup. Export your data regularly so you’re not left empty-handed if your account disappears overnight.
- Build your own space. Hosting a website isn’t free, but it puts you in control.
- Explore other platforms. Newsletters, Discord, and other community tools offer more control than Facebook or Instagram. Decentralized platforms like Mastodon and Bluesky aren’t perfect, but they show what’s possible when moderation isn’t dictated from the top down. (Learn more about the differences between Mastodon, Bluesky, and Threads, and how these kinds of platforms help us build a better internet.)
- Push for interoperability. Imagine being able to take your audience with you when you leave a platform. That’s the future we should be fighting for. (For more on interoperability and Meta, check out this video where Cory Doctorow explains what an interoperable Facebook would look like.)
If you’re working in abortion access—whether as a provider, activist, or volunteer—your privacy and security matter. The same is true for patients. Check out EFF’s Surveillance Self-Defense for tailored guides. Look at resources from groups like Digital Defense Fund, and learn how location tracking tools can endanger abortion access. If you run an organization, our guide to Online Privacy for Nonprofits covers ways you can minimize what information you collect about patients, clients, or customers.
Platforms like Meta insist they want to balance free expression and safety, but their blunt systems consistently end up reinforcing existing inequalities—silencing the very people who most need to be heard. Until they do better, it’s on us to protect ourselves, share our stories, and keep building the kind of internet that respects our rights.
This is the ninth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Opt Out October: Daily Tips to Protect Your Privacy and Security
Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time, the process of protecting your privacy becomes much easier. This month we’re going to do just that. For the month of October, we’ll update this post every weekday with a new tip on how to opt out of the ways tech giants surveil you.
Online privacy isn’t dead. But the tech giants make it a pain in the butt to achieve. With these incremental tweaks to the services we use, we can throw sand in the gears of the surveillance machine and opt out of the ways tech companies attempt to optimize us into advertisement and content viewing machines. We’re also pushing companies to make more privacy-protective defaults the norm, but until that happens, the onus is on all of us to dig into the settings.
All month long we’ll share tips, including some with help from our friends at Consumer Reports’ Security Planner tool. Use the Table of Contents here to jump straight to any tip.
Table of Contents
- Tip 1: Establish Good Digital Hygiene
- Tip 2: Learn What a Data Broker Knows About You
- Tip 3: Coming October 3
- Tip 4: Coming October 6
- Tip 5: Coming October 7
- Tip 6: Coming October 8
- Tip 7: Coming October 9
- Tip 8: Coming October 10
- Tip 9: Coming October 14
- Tip 10: Coming October 15
- Tip 11: Coming October 16
- Tip 12: Coming October 17
- Tip 13: Coming October 20
- Tip 14: Coming October 21
- Tip 15: Coming October 22
- Tip 16: Coming October 23
- Tip 17: Coming October 24
- Tip 18: Coming October 27
- Tip 19: Coming October 28
- Tip 20: Coming October 29
- Tip 21: Coming October 30
- Tip 22: Coming October 31
Tip 1: Establish Good Digital Hygiene
Before we can get into the privacy weeds, we need to first establish strong basics. Namely, two security fundamentals: using strong passwords (a password manager helps simplify this) and two-factor authentication for your online accounts. Together, they can significantly improve your online privacy by making it much harder for your data to fall into the hands of a stranger.
Using unique passwords for every web login means that if your account information ends up in a data breach, it won’t give bad actors an easy way to unlock your other accounts. Since it’s impossible for all of us to remember a unique password for every login we have, most people will want to use a password manager, which generates and stores those passwords for you.
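To make the “generates those passwords for you” part concrete, here’s a minimal Python sketch of what a password manager does under the hood when it creates a credential. The function name and the 20-character default are our own illustrative choices, not any particular manager’s implementation:

```python
# A toy version of a password manager's generator: cryptographically random,
# long, and unique per site. Real managers also store and fill these for you.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A different, unguessable password for every account:
print(generate_password())
```

Note the use of the `secrets` module rather than `random`: it draws from the operating system’s cryptographic randomness, which is what makes the result unguessable.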
Two-factor authentication is the second lock on those same accounts. In order to login to, say, Facebook for the first time on a particular computer, you’ll need to provide a password and a “second factor,” usually an always-changing numeric code generated in an app or sent to you on another device. This makes it much harder for someone else to get into your account because it’s less likely they’ll have both a password and the temporary code.
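That always-changing numeric code is typically a time-based one-time password (TOTP, standardized in RFC 6238). Here’s a minimal sketch of how an authenticator app computes it; the base32 secret below is a well-known placeholder value, not a real credential:

```python
# A minimal RFC 6238 TOTP computation, as an authenticator app performs it.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                # current 30-second window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # HMAC-SHA1 per RFC 4226
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret (the kind encoded in an enrollment QR code):
print(totp("JBSWY3DPEHPK3PXP"))
```

Because your device and the service both derive the code from a shared secret and the current time window, a stolen password alone won’t unlock the account.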
Getting started can feel a little overwhelming if you’re new to online privacy! Aside from our guides on Surveillance Self-Defense, we recommend taking a look at Consumer Reports’ Security Planner for help setting up your first password manager and turning on two-factor authentication.
Tip 2: Learn What a Data Broker Knows About You
Hundreds of data brokers you’ve never heard of are harvesting and selling your personal information. This can include your address, online activity, financial transactions, relationships, and even your location history. Once sold, your data can be abused by scammers, advertisers, predatory companies, and even law enforcement agencies.
Data brokers build detailed profiles of our lives but try to keep their own practices hidden. Fortunately, several state privacy laws give you the right to see what information these companies have collected about you. You can exercise this right by submitting a data access request to a data broker. Even if you live in a state without privacy legislation, some data brokers will still respond to your request.
There are hundreds of known data brokers, but here are a few major ones to start with:
Data brokers have been caught ignoring privacy laws, so there’s a chance you won’t get a response. If you do, you’ll learn what information the data broker has collected about you and the categories of third parties they’ve sold it to. If the results motivate you to take more privacy action, encourage your friends and family to do the same. Don’t let data brokers keep their spying a secret.
You can also ask data brokers to delete your data, with or without an access request. We’ll get to that later this month and explain how to do this with people-search sites, a category of data brokers.
Come back tomorrow for another tip!
LocNet Catalytic Microgrant – 2025: Supporting co-creation and prototyping of community-driven digital solutions for climate resilience and advancing environmental justice in Kenya
[Monthly Media Review: Newspapers] None of the Papers Gives a "Change of Government" a Second Thought = 白垣 詔男
Flock’s Gunshot Detection Microphones Will Start Listening for Human Voices
Flock Safety, the police technology company best known for its extensive network of automated license plate readers spread throughout the United States, is rolling out a new and troubling product: detection of “human distress” via audio. As part of its suite of technologies, Flock has been pushing Raven, its version of acoustic gunshot detection. These devices capture sounds in public places and use machine learning to try to identify gunshots, then alert police—but EFF has long warned that they are also high-powered microphones parked above densely populated city streets. Cities now have one more reason to follow the lead of many other municipalities and cancel their Flock contracts before this new feature causes civil liberties harms to residents and legal headaches for city governments.
In marketing materials, Flock has been touting new features to their Raven product—including the ability of the device to alert police based on sounds, including “distress.” The online ad for the product, which allows cities to apply for early access to the technology, shows the image of police getting an alert for “screaming.”
It’s unclear how this technology works. Acoustic gunshot detection generally listens for sounds that signify gunfire (though in practice these systems often mistake car backfires or fireworks for gunshots). Flock needs to come forward now with an explanation of exactly how its new technology functions. It is also unclear how these devices will interact with state “eavesdropping” laws that limit listening to or recording the private conversations that often take place in public.
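Flock hasn’t disclosed how Raven classifies audio, so any specifics are guesswork. But a toy sketch shows why acoustic detectors are so prone to false alarms: to a microphone, a gunshot, a firework, and a car backfire look alike, each a short, loud, broadband burst of energy. Everything below (the function name, window size, and threshold) is a hypothetical illustration, not Flock’s algorithm:

```python
# Illustrative only: a toy impulsive-sound detector, not Flock's undisclosed
# algorithm. It shows why such systems confuse gunshots with fireworks and
# car backfires: all are short, loud, broadband bursts of energy.
import numpy as np

def detect_impulses(samples: np.ndarray, rate: int, threshold_db: float = 30.0):
    """Flag ~50 ms windows whose energy jumps far above the background noise."""
    window = int(0.05 * rate)                        # 50 ms analysis window
    chunks = np.array_split(samples, max(1, len(samples) // window))
    energies = np.array([np.mean(c ** 2) for c in chunks])
    background = np.median(energies) + 1e-12         # rough noise floor
    loud = 10 * np.log10(energies / background) > threshold_db
    return [i * 0.05 for i, hit in enumerate(loud) if hit]  # onset times (s)

# Synthetic test: one second of quiet noise with a loud "bang" at 0.5 s.
# Any sufficiently loud pop (a firework, a backfire) triggers the same alert.
rate = 16_000
audio = 0.01 * np.random.randn(rate)
audio[rate // 2 : rate // 2 + 400] += np.random.randn(400)
print(detect_impulses(audio, rate))
```

Production systems typically layer trained classifiers on top of a trigger like this, but the underlying acoustic ambiguity is why fireworks still generate gunshot alerts.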
Flock is no stranger to creating legal challenges for the cities and states that adopt its products. In Illinois, Flock was accused of violating state law by allowing Immigration and Customs Enforcement (ICE), a federal agency, access to license plate reader data collected within the state. That’s not all. In 2023, a North Carolina judge halted the installation of Flock cameras statewide because the company was operating in the state without a license. And when the city of Evanston, Illinois recently canceled its contract with Flock, it ordered the company to take down its license plate readers, only for Flock to mysteriously reinstall them a few days later. The city has since sent Flock a cease-and-desist order and, in the meantime, has put black tape over the cameras. For some, the technology isn’t worth its mounting downsides. As one Illinois village trustee wrote while explaining his vote to cancel the city’s contract with Flock, “According to our own Civilian Police Oversight Commission, over 99% of Flock alerts do not result in any police action.”
Gunshot detection technology is dangerous enough as it is—police arriving at what they believe is gunfire, only to find children playing with fireworks, is a recipe for innocent people getting hurt. This isn’t hypothetical: in Chicago, a child really was shot at by police who, thanks to a ShotSpotter alert, thought they were responding to a shooting. Introducing a feature that lets the Raven microphones already installed all over cities begin listening for human voices in distress is likely to open up a whole new set of unforeseen legal, civil liberties, and even bodily safety consequences.
Announcement of the 999th Meeting of the Food Safety Commission [To Be Held on October 7]
Consumer Confidence Survey (Conducted September 2025)
JVN: Multiple vulnerabilities in multiple KEYENCE products
JVN: Initialization of a resource with an insecure value vulnerability in HEIDENHAIN TNC 640
The UK Is Still Trying to Backdoor Encryption for Apple Users
The Financial Times reports that the U.K. is once again demanding that Apple create a backdoor into its encrypted backup services. The only change since the last demand is that the order is allegedly limited to British users. That doesn’t make it any better.
The demand uses a power called a “Technical Capability Notice” (TCN) in the U.K.’s Investigatory Powers Act. At the time of the law’s signing, we noted that it would likely be used to demand that Apple spy on its users.
After the U.K. government first issued the TCN in January, Apple was forced to either create a backdoor or block its Advanced Data Protection feature—which turns on end-to-end encryption for iCloud—for all U.K. users. The company decided to remove the feature in the U.K. instead of creating the backdoor.
The initial order from January targeted the data of all Apple users. In August, the U.S. claimed the U.K. had withdrawn the demand, but Apple did not re-enable Advanced Data Protection. The new order provides insight into why: the U.K. was just rewriting it to apply only to British users.
This is still an unsettling overreach that makes U.K. users less safe and less free. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud. It sets a dangerous precedent to demand similar data from other companies, and provides a runway for other authoritarian governments to issue comparable orders. The news of continued server-side access to users' data comes just days after the U.K. government announced an intrusive mandatory digital ID scheme, framed as a measure against illegal migration.
A tribunal hearing was initially set for January 2026, though it’s currently unclear whether that will proceed or whether the new order changes the legal process. Apple must continue to refuse these types of backdoors. Breaking end-to-end encryption for one country breaks it for everyone. These repeated attempts to weaken encryption violate fundamental human rights and destroy our right to private spaces.