The 10th round of negotiations (Director-General/Deputy Director-General level meeting) on the Japan-China-Korea Free Trade Agreement (FTA) will be held
"Toward Realizing a Vibrant 'Vintage Society'" (study group report) has been compiled
An international standard for map information, expected to work in conjunction with automated driving, has been published
The corporate split of Tokyo Electric Power Company has been approved under the Electricity Business Act
92nd Expert Committee on Public Utility Charges [to be held February 18]
479th Consumer Commission plenary meeting: minutes published [held January 13]
Council for Promotion of Regulatory Reform, Digital/AI Working Group (7th meeting)
Announcement of the 140th meeting of the Expert Committee on Prions [to be held February 17]
Japanese Economy Report (FY2025)
Notice: JPCERT/CC Eyes, "Publication of training content for Windows event log analysis"
JVN: Multiple issues in the PCIe Integrity and Data Encryption (IDE) protocol specification
Council for Promotion of Regulatory Reform, Health, Medical Care, and Long-Term Care Working Group (9th meeting)
Request for information on regulations and institutions that impede, or deliver insufficient benefit in, the social implementation of AI
EFFecting Change: Get the Flock Out of Our City
Flock contracts have quietly spread to cities across the country. But Flock's ALPRs (Automated License Plate Readers) erode civil liberties from the moment they're installed. While officials claim these cameras keep neighborhoods safe, the evidence tells a different story. The data reveals how Flock has enabled surveillance of people seeking abortions, protesters exercising First Amendment rights, and communities targeted by discriminatory policing.
This is exactly why cities are saying no. From Austin to Cambridge to small towns across Texas, jurisdictions are rejecting Flock contracts altogether, proving that surveillance isn't inevitable—it's a choice.
Join EFF's Sarah Hamid and Andrew Crocker along with Reem Suleiman from Fight for the Future and Kate Bertash from Rural Privacy Coalition to explore what's happening as Flock contracts face growing resistance across the U.S. We'll break down the legal implications of the data these systems collect, examine campaigns that have successfully stopped Flock deployments, and discuss the real-world consequences for people's privacy and freedom. The conversation will be followed by a live Q&A.
EFFecting Change Livestream Series: Get the Flock Out of Our City
Thursday, February 19th
12:00 PM - 1:00 PM Pacific
This event is LIVE and FREE!
Accessibility
This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.
Event Expectations
EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.
Upcoming Events
Want to make sure you don't miss our next livestream? Here's a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague who might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online.
Recording
We hope you and your friends can join us live! If you can't make it, we'll post the recording afterward on YouTube and the Internet Archive!
The Internet Still Works: Yelp Protects Consumer Reviews
Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.
Yelp hosts millions of reviews written by internet users about local businesses. Most reviews are positive, but over the years, some businesses have tried to pressure Yelp to remove negative reviews, including through legal threats. Since its founding more than two decades ago, Yelp has fought major legal battles to defend reviewers’ rights and preserve the legal protections that allow consumers to share honest feedback online.
Aaron Schur is General Counsel at Yelp. He joined the company in 2010 as one of its first lawyers and has led its litigation strategy for more than a decade, helping secure court decisions that strengthened legal protections for consumer speech. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team.
Joe Mullin: How would you describe Section 230 to a regular Yelp user who doesn’t know about the law?
Aaron Schur: I'd say it is a simple rule that, generally speaking, when content is posted online, any liability for that content is with the person who created it, not the platform that is displaying it. That allows Yelp to show your review and keep it up if a business complains about it. It also means that we can develop ways to highlight the reviews we think are most helpful and reliable, and mitigate fake reviews, without creating liability for Yelp, because we're allowed to host third-party content.
The political debate around Section 230 often centers on the behavior of companies, especially large companies. But we rarely hear about users, even though the law also applies to them. What is the user story that is getting lost?
Section 230 at heart protects users. It enables a diversity of platforms and content moderation practices—whether it's reviews on Yelp, videos on another platform, whatever it may be.
Without Section 230, platforms would face heavy pressure to remove consumer speech when threatened with legal action, and that harms users directly. Their content gets removed. It also harms the greater number of users who would access that content.
The focus on the biggest tech companies, I think, is understandable but misplaced when it comes to Section 230. We have tools that exist to go after dominant companies, both at the state and the federal level, and Congress could certainly consider competition-based laws—and has, over the last several years.
Tell me about the editorial decisions that Yelp makes regarding the highlighting of reviews, and the weeding out of reviews that might be fake.
Yelp is a platform where people share their experiences with local businesses, government agencies, and other entities. People come to Yelp, by the millions, to learn about these places.
With traffic like that come incentives for bad actors to game the system. Some unscrupulous businesses try to create fake reviews, or compensate people to write reviews, or ask family and friends to write reviews. Those reviews will be biased in a way that won’t be transparent.
Yelp developed an automated system to highlight reviews we find most trustworthy and helpful. Other reviews may be placed in a “not recommended” section where they don’t affect a business’s overall rating, but they’re still visible. That helps us maintain a level playing field and keep user trust.
Tell me what your process for handling complaints about user reviews looks like.
We have a reporting function for reviews. Those reports get looked at by an actual human, who evaluates the review and looks at data about it to decide whether it violates our guidelines.
We don't remove a review just because someone says it's “wrong,” because we can't litigate the facts in your review. If someone says “my pizza arrived cold,” and the restaurant says, no, the pizza was warm—Yelp is not in a position to adjudicate that dispute.
That's where Section 230 comes in. It says Yelp doesn’t have to [decide who’s right].
What other types of moderation tools have you built?
Any business, free of charge, can respond to a review, and that response appears directly below it. They can also message users privately. We know when businesses do this, it’s viewed positively by users.
We also have a consumer alert program, where members of the public can report businesses that may be compensating people for positive reviews—offering things like free desserts or discounted rent. In those cases, we can place an alert on the business’s page and link to the evidence we received. We also do this when businesses make certain types of legal threats against users.
It’s about transparency. If a business’s rating is inflated, because that business is threatening reviewers who rate less than five stars with a lawsuit, consumers have a right to know what’s happening.
How are international complaints, where Section 230 doesn’t come into play, different?
We have had a lot of matters in Europe, in particular in Germany. It’s a different system there—it’s notice-and-takedown. They have a line of cases that require review sites to basically provide proof that the person was a customer of the business.
If a review was challenged, we would sometimes ask the user for documentation, like an invoice, which we would redact before providing it. Often, they would do that, in order to defend their own speech online. Which was surprising to me! But they wouldn’t always—which shows the benefit of Section 230. In the U.S., you don’t have this back-and-forth that a business can leverage to get content taken down.
And invariably, the reviewer was a customer. The business was just using the system to try to take down speech.
Yelp has been part of some of the most important legal cases around Section 230, and some of those didn't exist when we spoke in 2012. What happened in the Hassell v. Bird case, and why was that important for online reviewers?
Hassell v. Bird was a case where a law firm got a default judgment against an alleged reviewer, and the court ordered Yelp to remove the review—even though Yelp had not been a party to the case.
We refused, because the order violated Section 230, due process, and Yelp's First Amendment rights as a publisher. But the trial court and the appellate court both ruled against us, allowing Section 230 to be side-stepped.
The California Supreme Court ultimately reversed those rulings, recognizing that plaintiffs cannot accomplish indirectly [by suing a user and then seeking an order forcing a platform to remove content] what they could not accomplish directly by suing the platform itself.
We spoke to you in 2012, and the landscape has really changed. Section 230 is really under attack in a way that it wasn’t back then. From your vantage point at Yelp, what feels different about this moment?
The biggest tech companies got even bigger, and even more powerful. That has made people distrustful and angry—rightfully so, in many cases.
When you read about the attacks on 230, it’s really politicians calling out Big Tech. But what is never mentioned is little tech, or “middle tech,” which is how Yelp bills itself. If 230 is weakened or repealed, it’s really the biggest companies, the Googles of the world, that will be able to weather it better than smaller companies like Yelp. They have more financial resources. It won’t actually accomplish what the legislators are setting out to accomplish. It will have unintended consequences across the board. Not just for Yelp, but for smaller platforms.
This interview was edited for length and clarity.
The Internet Still Works: Wikipedia Defends Its Editors
Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.
A decade ago, the Wikimedia Foundation, the nonprofit that operates Wikipedia, received 304 requests over a two-year period to alter or remove content, not including copyright complaints. In 2024 alone, it received 664 such takedown requests. Only four were granted. As complaints over user speech have grown, Wikimedia has expanded its legal team to defend the volunteer editors who write and maintain the encyclopedia.
Jacob Rogers is Associate General Counsel at the Wikimedia Foundation. He leads the team that deals with legal complaints against Wikimedia content and its editors. Rogers also works to preserve the legal protections, including Section 230, that make a community-governed encyclopedia possible.
Joe Mullin: What kind of content do you think would be most in danger if Section 230 was weakened?
Jacob Rogers: When you're writing about a living person, if you get it wrong and it hurts their reputation, they will have a legal claim. So that is always a concentrated area of risk. It's good to be careful, but I think if there were a looser liability regime, people could become so careful that they couldn't write important public information.
Current events and political history would also be in danger. Writing about images of Muhammad has been a flashpoint in different countries, because depictions are religiously sensitive and controversial in some contexts. There are different approaches to this in different languages. You might not think that writing about the history of art in your country 500 years ago would get you into trouble—but it could, if you're in a particular country, and it's a flashpoint.
Writing about history and culture matters to people. And it can matter to governments, to religions, to movements, in a way that can cause people problems. That's part of why protecting editors' pseudonymity and their ability to work on these topics is so important.
If you had to describe to a Wikipedia user what Section 230 does, how would you explain it to them?
If there was nothing—no legal protection at all—I think we would not be able to run the website. There would be too many legal claims, and the potential damages of those claims could bankrupt the company.
Section 230 protects the Wikimedia Foundation, and it allows us to defer to community editorial processes. We can let the user community make those editorial decisions, and figure things out as a group—like how to write biographies of living persons, and what sources are reliable. Wikipedia wouldn’t work if it had centralized decision making.
What does a typical complaint look like, and how does the complaint process look?
In some cases, someone is accused of a serious crime and there's a debate about the sources, or people are accused of certain types of wrongdoing, or scams. There are debates about people's politics, where someone is accused of being "far-right" or "far-left."
The first step is community dispute resolution. At the top of every article on Wikipedia there's a button that translates to "talk." If you click it, that gives you space to discuss how to write the article. When editors get into a fight about what to write, they should stop and discuss it with each other first.
If page editors can’t resolve a dispute, third-party editors can come in, or ask for a broader discussion. If that doesn’t work, or there’s harassment, we have Wikipedia volunteer administrators, elected by their communities, who can intervene. They can ban people temporarily, to cool off. When necessary, they can ban users permanently. In serious cases, arbitration committees make final decisions.
And these community dispute processes we've discussed are run by volunteers, with no Wikimedia Foundation employees involved? Where does Section 230 come into play?
That’s right. Section 230 helps us, because it lets disputes go through that community process. Sometimes someone’s edits get reversed, and they write an angry letter to the legal department. If we were liable for that, we would have the risk of expensive litigation every time someone got mad. Even if their claim is baseless, it’s hard to make a single filing in a U.S. court for less than $20,000. There’s a real “death by a thousand cuts” problem, if enough people filed litigation.
Section 230 protects us from that, and allows for quick dismissal of invalid claims.
When we're in the United States, that's really the end of the matter. There's no way to bypass the community with a lawsuit.
How does dealing with those complaints work in the U.S.? And how is it different abroad?
In the U.S., we have Section 230. We're able to say, go through the community process, and try to be persuasive. We'll make changes if you make a good, persuasive argument! But the Foundation isn't going to come in and change it because you made a legal complaint.
But in the EU, they don’t have Section 230 protections. Under the Digital Services Act, once someone claims your website hosts something illegal, they can go to court and get an injunction ordering us to take the content down. If we don’t want to follow that order, we have to defend the case in court.
In one German case, the court essentially said, "Wikipedians didn't do good enough journalism." The court said the article's sources weren't strong enough. The editors used industry trade publications, and the court said they should have used something like German state media, or the top newspapers in the country, not a "niche" publication. We disagreed with that.
What’s the cost of having to go to court regularly to defend user speech?
Because the Foundation is a mission-driven nonprofit, we can take on these defenses in a way that’s not always financially sensible, but is mission sensible. If you were focused on profit, you would grant a takedown. The cost of a takedown is maybe one hour of a staff member’s time.
We can selectively take on cases to benefit the free knowledge mission, without bankrupting the company. To do litigation in the EU costs something on the order of $30,000 for one hearing, to a few hundred thousand dollars for a drawn-out case.
I don’t know what would happen if we had to do that in the United States. There would be a lot of uncertainty. One big unknown is—how many people are waiting in the wings for a better opportunity to use the legal system to force changes on Wikipedia?
What does the community editing process get right that courts can get wrong?
Sources. Wikipedia editors might cite a blog because they know the quality of its research. They know what's going into writing that.
It can be easy sometimes for a court to look at something like that and say, well, this is just a blog, and it’s not backed by a university or institution, so we’re not going to rely on it. But that's actually probably a worse result. The editors who are making that consideration are often getting a more accurate picture of reality.
Policymakers who want to limit or eliminate Section 230 often say their goal is to get harmful content off the internet, and fast. What do you think gets missed in the conversation about removing harmful content?
One is: harmful to whom? Every time people talk about “super fast tech solutions,” I think they leave out academic and educational discussions. Everyone talks about how there’s a terrorism video, and it should come down. But there’s also news and academic commentary about that terrorism video.
There are very few shared universal standards of harm around the world. Everyone in the world agrees, roughly speaking, on child protection, and child abuse images. But there’s wild disagreement about almost every other topic.
If you do take down something to comply with the UK law, that takedown is global. And you'll be taking away the rights of someone in the U.S. or Australia or Canada to see that content.
This interview was edited for length and clarity. EFF interviewed Wikimedia attorney Michelle Paulson about Section 230 in 2012.