EFFecting Change: Get the Flock Out of Our City

5 hours 14 minutes ago

Flock contracts have quietly spread to cities across the country. But Flock's ALPRs (automated license plate readers) erode civil liberties from the moment they're installed. While officials claim these cameras keep neighborhoods safe, the evidence tells a different story. The data reveals how Flock has enabled surveillance of people seeking abortions, protesters exercising First Amendment rights, and communities targeted by discriminatory policing.

This is exactly why cities are saying no. From Austin to Cambridge to small towns across Texas, jurisdictions are rejecting Flock contracts altogether, proving that surveillance isn't inevitable—it's a choice.

Join EFF's Sarah Hamid and Andrew Crocker along with Reem Suleiman from Fight for the Future and Kate Bertash from Rural Privacy Coalition to explore what's happening as Flock contracts face growing resistance across the U.S. We'll break down the legal implications of the data these systems collect, examine campaigns that have successfully stopped Flock deployments, and discuss the real-world consequences for people's privacy and freedom. The conversation will be followed by a live Q&A. 

EFFecting Change Livestream Series:
Get the Flock Out of Our City
Thursday, February 19th
12:00 PM - 1:00 PM Pacific
This event is LIVE and FREE!



Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don't miss our next livestream? Here's a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague who might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online.

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

Melissa Srago

The Internet Still Works: Yelp Protects Consumer Reviews

5 hours 23 minutes ago

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.

Yelp hosts millions of reviews written by internet users about local businesses. Most reviews are positive, but over the years, some businesses have tried to pressure Yelp to remove negative reviews, including through legal threats. Since its founding more than two decades ago, Yelp has fought major legal battles to defend reviewers’ rights and preserve the legal protections that allow consumers to share honest feedback online.

Aaron Schur is General Counsel at Yelp. He joined the company in 2010 as one of its first lawyers and has led its litigation strategy for more than a decade, helping secure court decisions that strengthened legal protections for consumer speech. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team. 

Joe Mullin: How would you describe Section 230 to a regular Yelp user who doesn’t know about the law?   

Aaron Schur: I'd say it is a simple rule that, generally speaking, when content is posted online, any liability for that content is with the person who created it, not the platform that is displaying it. That allows Yelp to show your review and keep it up if a business complains about it. It also means that we can develop ways to highlight the reviews we think are most helpful and reliable, and mitigate fake reviews, without creating liability for Yelp, because we're allowed to host third-party content.

The political debate around Section 230 often centers on the behavior of companies, especially large companies. But we rarely hear about users, even though the law also applies to users. What is the user story that is getting lost?

Section 230 at heart protects users. It enables a diversity of platforms and content moderation practices—whether it's reviews on Yelp, videos on another platform, whatever it may be. 

Without Section 230, platforms would face heavy pressure to remove consumer speech when we’re threatened with legal action—and that harms users, directly. Their content gets removed. It also harms the greater number of users who would access that content. 

The focus on the biggest tech companies, I think, is understandable but misplaced when it comes to Section 230. We have tools that exist to go after dominant companies, both at the state and the federal level, and Congress could certainly consider competition-based laws—and has, over the last several years. 

Tell me about the editorial decisions that Yelp makes regarding the highlighting of reviews, and the weeding out of reviews that might be fake.  

Yelp is a platform where people share their experiences with local businesses, government agencies, and other entities. People come to Yelp, by the millions, to learn about these places.

With traffic like that come incentives for bad actors to game the system. Some unscrupulous businesses try to create fake reviews, or compensate people to write reviews, or ask family and friends to write reviews. Those reviews will be biased in a way that won’t be transparent. 

Yelp developed an automated system to highlight reviews we find most trustworthy and helpful. Other reviews may be placed in a “not recommended” section where they don’t affect a business’s overall rating, but they’re still visible. That helps us maintain a level playing field and keep user trust. 

Tell me what your process for handling complaints about user reviews looks like.

We have a reporting function for reviews. Those reports get looked at by an actual human, who evaluates the review and looks at data about it to decide whether it violates our guidelines. 

We don't remove a review just because someone says it's “wrong,” because we can't litigate the facts in your review. If someone says “my pizza arrived cold,” and the restaurant says, no, the pizza was warm—Yelp is not in a position to adjudicate that dispute. 

That's where Section 230 comes in. It says Yelp doesn’t have to [decide who’s right]. 

What other types of moderation tools have you built? 

Any business, free of charge, can respond to a review, and that response appears directly below it. They can also message users privately. We know when businesses do this, it’s viewed positively by users.

We also have a consumer alert program, where members of the public can report businesses that may be compensating people for positive reviews—offering things like free desserts or discounted rent. In those cases, we can place an alert on the business’s page and link to the evidence we received. We also do this when businesses make certain types of legal threats against users.

It's about transparency. If a business's rating is inflated because the business is threatening to sue reviewers who rate it less than five stars, consumers have a right to know what's happening.

How are international complaints, where Section 230 doesn’t come into play, different? 

We have had a lot of matters in Europe, in particular in Germany. It’s a different system there—it’s notice-and-takedown. They have a line of cases that require review sites to basically provide proof that the person was a customer of the business. 

If a review was challenged, we would sometimes ask the user for documentation, like an invoice, which we would redact before providing it. Often, they would do that, in order to defend their own speech online. Which was surprising to me! But they wouldn’t always—which shows the benefit of Section 230. In the U.S., you don’t have this back-and-forth that a business can leverage to get content taken down. 

And invariably, the reviewer was a customer. The business was just using the system to try to take down speech. 

Yelp has been part of some of the most important legal cases around Section 230, and some of those didn’t exist when we spoke in 2012. What happened in the Hassel v. Bird case, and why was that important for online reviewers?

Hassel v. Bird was a case where a law firm got a default judgment against an alleged reviewer, and the court ordered Yelp to remove the review—even though Yelp had not been a party to the case. 

We refused, because the order violated Section 230, due process, and Yelp's First Amendment rights as a publisher. But both the trial court and the Court of Appeal ruled against us, allowing Section 230 to be sidestepped.

The California Supreme Court ultimately reversed those rulings, and recognized that plaintiffs cannot accomplish indirectly [by suing a user and then obtaining an order requiring a platform to remove content] what they could not accomplish directly by suing the platform itself.

We spoke to you in 2012, and the landscape has really changed. Section 230 is really under attack in a way that it wasn’t back then. From your vantage point at Yelp, what feels different about this moment? 

The biggest tech companies got even bigger, and even more powerful. That has made people distrustful and angry—rightfully so, in many cases. 

When you read about the attacks on 230, it’s really politicians calling out Big Tech. But what is never mentioned is little tech, or “middle tech,” which is how Yelp bills itself. If 230 is weakened or repealed, it’s really the biggest companies, the Googles of the world, that will be able to weather it better than smaller companies like Yelp. They have more financial resources. It won’t actually accomplish what the legislators are setting out to accomplish. It will have unintended consequences across the board. Not just for Yelp, but for smaller platforms. 

This interview was edited for length and clarity.

Joe Mullin

The Internet Still Works: Wikipedia Defends Its Editors

5 hours 53 minutes ago

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information. 

A decade ago, the Wikimedia Foundation, the nonprofit that operates Wikipedia, received 304 requests to alter or remove content over a two-year period, not including copyright complaints. In 2024 alone, it received 664 such takedown requests. Only four were granted. As complaints over user speech have grown, Wikimedia has expanded its legal team to defend the volunteer editors who write and maintain the encyclopedia.

Jacob Rogers is Associate General Counsel at the Wikimedia Foundation. He leads the team that deals with legal complaints against Wikimedia content and its editors. Rogers also works to preserve the legal protections, including Section 230, that make a community-governed encyclopedia possible. 

Joe Mullin: What kind of content do you think would be most in danger if Section 230 was weakened? 

Jacob Rogers: When you're writing about a living person, if you get it wrong and it hurts their reputation, they will have a legal claim. So that is always a concentrated area of risk. It's good to be careful, but I think if there were a looser liability regime, people could become too careful—so careful they couldn't write important public information.

Current events and political history would also be in danger. Writing about images of Muhammad has been a flashpoint in different countries, because depictions are religiously sensitive and controversial in some contexts. There are different approaches to this in different languages. You might not think that writing about the history of art in your country 500 years ago would get you into trouble—but it could, if you’re in a particular country, and it’s a flash point. 

Writing about history and culture matters to people. And it can matter to governments, to religions, to movements, in a way that can cause people problems. That's part of why protecting editors' pseudonymity and their ability to work on these topics is so important.

If you had to describe to a Wikipedia user what Section 230 does, how would you explain it to them? 

If there was nothing—no legal protection at all—I think we would not be able to run the website. There would be too many legal claims, and the potential damages of those claims could bankrupt the company. 

Section 230 protects the Wikimedia Foundation, and it allows us to defer to community editorial processes. We can let the user community make those editorial decisions, and figure things out as a group—like how to write biographies of living persons, and what sources are reliable. Wikipedia wouldn’t work if it had centralized decision making. 

What does a typical complaint look like, and how does the complaint process look? 

In some cases, someone is accused of a serious crime and there's a debate about the sources. In others, people are accused of certain types of wrongdoing, or scams. There are also debates about people's politics, where someone is accused of being "far-right" or "far-left."

The first step is community dispute resolution. At the top of every Wikipedia article there's a button that translates to "talk." If you click it, that gives you space to discuss how to write the article. When editors get into a fight about what to write, they should stop and discuss it with each other first.

If page editors can’t resolve a dispute, third-party editors can come in, or ask for a broader discussion. If that doesn’t work, or there’s harassment, we have Wikipedia volunteer administrators, elected by their communities, who can intervene. They can ban people temporarily, to cool off. When necessary, they can ban users permanently. In serious cases, arbitration committees make final decisions. 

And these community dispute processes we’ve discussed are run by volunteers, no Wikimedia Foundation employees are involved? Where does Section 230 come into play?

That’s right. Section 230 helps us, because it lets disputes go through that community process. Sometimes someone’s edits get reversed, and they write an angry letter to the legal department. If we were liable for that, we would have the risk of expensive litigation every time someone got mad. Even if their claim is baseless, it’s hard to make a single filing in a U.S. court for less than $20,000. There’s a real “death by a thousand cuts” problem, if enough people filed litigation. 

Section 230 protects us from that, and allows for quick dismissal of invalid claims. 

In the United States, that's really the end of the matter. There's no way to bypass the community with a lawsuit.

How does dealing with those complaints work in the U.S.? And how is it different abroad? 

In the US, we have Section 230. We’re able to say, go through the community process, and try to be persuasive. We’ll make changes, if you make a good persuasive argument! But the Foundation isn’t going to come in and change it because you made a legal complaint. 

But in the EU, they don’t have Section 230 protections. Under the Digital Services Act, once someone claims your website hosts something illegal, they can go to court and get an injunction ordering us to take the content down. If we don’t want to follow that order, we have to defend the case in court. 

In one German case, the court essentially said, "Wikipedians didn't do good enough journalism." The court said the article's sources weren't strong enough. The editors used industry trade publications, and the court said they should have used something like German state media, or the country's top newspapers, not a "niche" publication. We disagreed with that.

What’s the cost of having to go to court regularly to defend user speech? 

Because the Foundation is a mission-driven nonprofit, we can take on these defenses in a way that’s not always financially sensible, but is mission sensible. If you were focused on profit, you would grant a takedown. The cost of a takedown is maybe one hour of a staff member’s time. 

We can selectively take on cases to benefit the free knowledge mission, without bankrupting the company. To do litigation in the EU costs something on the order of $30,000 for one hearing, to a few hundred thousand dollars for a drawn-out case.

I don’t know what would happen if we had to do that in the United States. There would be a lot of uncertainty. One big unknown is—how many people are waiting in the wings for a better opportunity to use the legal system to force changes on Wikipedia? 

What does the community editing process get right that courts can get wrong? 

Sources. Wikipedia editors might cite a blog because they know the quality of its research. They know what's going into writing that. 

It can be easy sometimes for a court to look at something like that and say, well, this is just a blog, and it’s not backed by a university or institution, so we’re not going to rely on it. But that's actually probably a worse result. The editors who are making that consideration are often getting a more accurate picture of reality. 

Policymakers who want to limit or eliminate Section 230 often say their goal is to get harmful content off the internet, and fast. What do you think gets missed in the conversation about removing harmful content? 

One is: harmful to whom? Every time people talk about “super fast tech solutions,” I think they leave out academic and educational discussions. Everyone talks about how there’s a terrorism video, and it should come down. But there’s also news and academic commentary about that terrorism video. 

There are very few shared universal standards of harm around the world. Everyone in the world agrees, roughly speaking, on child protection, and child abuse images. But there’s wild disagreement about almost every other topic. 

If you take down something to comply with a UK law, the takedown is global. And you'll be taking away the rights of someone in the U.S. or Australia or Canada to see that content.

This interview was edited for length and clarity. EFF interviewed Wikimedia attorney Michelle Paulson about Section 230 in 2012.

Joe Mullin

On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech

8 hours 52 minutes ago

For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.

Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

But rolling back or eliminating Section 230 will not stop the invasive corporate surveillance that harms all internet users. Killing Section 230 won't end the dominance of the current handful of large tech companies—it would cement their monopoly power.

The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.

This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. That is especially true because the speech problems with alternatives to Section 230's immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users' lawful speech.

Let's be clear: EFF defends Section 230 because it is the best available system to protect users' speech online. By immunizing intermediaries for their users' speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people's speech online, such as when they reshare another user's post or host a comment section on their blog.

It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity  because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

Section 230 Alternatives Would Protect Less Speech

With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?

The least protective legal regime for online speech would be strict liability. Here, intermediaries would always be liable for their users' speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of the social media and web hosting services we're used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.

Another alternative: Imposing legal duties on intermediaries, such as requiring that they act "reasonably" to limit harmful user content. This would likely result in platforms monitoring users' speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users' speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They're the ones that would have the legal and technical resources to weather the flood of lawsuits.

Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there's no doubt such a system will be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech based on frivolous copyright infringement claims (https://www.eff.org/takedowns). Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to silence their critics.

The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.

By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.

EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

Aaron Mackey

RIP Dave Farber, EFF Board Member and Friend

8 hours 57 minutes ago

We are sad to report the passing of longtime EFF Board member Dave Farber. Dave was 91 and had lived in Tokyo since the age of 83, serving as Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). Known as the Grandfather of the Internet, Dave made countless contributions to the internet, both directly and through his support for generations of students.

Dave was the longest-serving EFF Board member, having joined in the early 1990s, before the creation of the World Wide Web or the widespread adoption of the internet.  Throughout the growth of the internet and the corresponding growth of EFF, Dave remained a consistent, thoughtful, and steady presence on our Board.  Dave always gave us credibility as well as ballast.  He seemed to know and be respected by everyone who had helped build the internet, having worked with or mentored too many of them to count.  He also had an encyclopedic knowledge of the internet's technical history. 

From the beginning, Dave saw both the promise and the danger to human rights that would come with the spread of the internet around the world. He committed to helping make sure that the rights and liberties of users and developers, especially the open source community, were protected. He never wavered in that commitment.  Ever the teacher, Dave was also a clear explainer of internet technologies and basically unflappable.  

Dave also managed the Interesting People email list, which provided news and connection for so many internet pioneers and served as a model for how people from disparate corners of the world could engage in a rolling conversation about all things digital. His role as the Chief Technologist at the U.S. Federal Communications Commission from 2000 to 2001 gave him a strong perspective on the ways that government could help or hinder civil liberties in the digital world.

We will miss his calm, thoughtful voice, both inside EFF and out in the world. May his memory be a blessing.  

Cindy Cohn

[Kanagawa Branch Report] Do You Know About the Former Military Port Cities Conversion Law (Gunten-ho)? An Interview with a Yokosuka Peace Activist, by Ken Fujimori (藤森研)

10 hours 46 minutes ago
Here's a quiz: what do the four cities of Yokosuka, Kure, Sasebo, and Maizuru have in common? Before the war, each was home to an Imperial Navy district headquarters ("chinjufu"). Correct. And after the war? Each hosts a regional headquarters of the Maritime Self-Defense Force. Also correct. One more correct answer: these are the only four cities to which the Former Military Port Cities Conversion Law (Gunten-ho) applies. I knew nothing about this law. At the Kanagawa branch's regular meeting last October, we invited Hiroshi Niikura (新倉裕史), a central member of the Non-Nuclear Citizens' Declaration Movement Yokosuka, to give a talk, and I later visited the group's office in Yokosuka City to hear more. Article 1 of the Gunten-ho begins, "This law, ..
JCJ

Op-ed: Weakening Section 230 Would Chill Online Speech

11 hours 27 minutes ago

(This appeared as an op-ed published Friday, Feb. 6 in the Daily Journal, a California legal newspaper.)

Section 230, "the 26 words that created the internet," was enacted 30 years ago this week. It was no rush job—rather, it was the result of wise legislative deliberation and foresight, and it remains the best bulwark to protect free expression online.

The internet lets people everywhere connect, share ideas and advocate for change without needing immense resources or technical expertise. Our unprecedented ability to communicate online—on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive—is not an accident. In writing Section 230, Congress recognized that for free expression to thrive on the internet, it had to protect the services that power users’ speech. Section 230 does this by preventing most civil suits against online services that are based on what users say. The law also protects users who act like intermediaries when they, for example, forward an email, retweet another user or host a comment section on their blog.

The merits of immunity, both for internet users who rely on intermediaries (from ISPs to email providers to social media platforms) and for internet users who are themselves intermediaries, are readily apparent when compared with the alternatives.

One alternative would be to provide no protection at all for intermediaries, leaving them liable for anything and everything anyone says using their service. This legal risk would essentially require every intermediary to review and legally assess every word, sound or image before it’s published—an impossibility at scale, and a death knell for real-time user-generated content.

Another option: giving protection to intermediaries only if they exercise a specified duty of care, such as where an intermediary would be liable if they fail to act reasonably in publishing a user’s post. But negligence and other objective standards are almost always insufficient to protect freedom of expression because they introduce significant uncertainty into the process and create real chilling effects for intermediaries. That is, intermediaries will choose not to publish anything remotely provocative—even if it’s clearly protected speech—for fear of having to defend themselves in court, even if they are likely to ultimately prevail. Many Section 230 critics bemoan the fact that it prevented courts from developing a common law duty of care for online intermediaries. But the criticism rarely acknowledges the experience of common law courts around the world, few of which adopted an objective standard, and many of which adopted immunity or something very close to it.


Another alternative is a knowledge-based system in which an intermediary is liable only after being notified of the presence of harmful content and failing to remove it within a certain amount of time. This notice-and-takedown system invites tremendous abuse, as seen under the Digital Millennium Copyright Act’s approach: It’s too easy for someone to notify an intermediary that content is illegal or tortious simply to get something they dislike depublished. Rather than spending the time and money required to adequately review such claims, intermediaries would simply take the content down.

All these alternatives would lead to massive depublication in many, if not most, cases, not because the content deserves to be taken down, nor because the intermediaries want to do so, but because it's not worth assessing the risk of liability or defending the user's speech. No intermediary can be expected to champion someone else's free speech at its own considerable expense.

Nor is the United States the only government to eschew "upload filtering," the requirement that someone must review content before publication. European Union rules avoid this also, recognizing how costly and burdensome it is. Free societies recognize that this kind of pre-publication review will lead risk-averse platforms to nix anything that anyone anywhere could deem controversial, leading us to the most vanilla, anodyne internet imaginable.

The advent of artificial intelligence doesn’t change this. Perhaps there’s a tool that can detect a specific word or image, but no AI can make legal determinations or be prompted to identify all defamation or harassment. Human expression is simply too contextual for AI to vet; even if a mechanism could flag things for human review, the scale is so massive that such human review would still be overwhelmingly burdensome.

Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages. Each of those acts requires numerous layers of online services, all of which face potential liability without immunity.

This law isn't a shield for "big tech." Its ultimate beneficiaries are all of us who want to post things online without having to code it ourselves, and who want to read and watch content that others create. If Congress eliminated Section 230 immunity, for example, we would be asking email providers and messaging platforms to read and legally assess everything a user writes before agreeing to send it.

For many critics of Section 230, the chilling effect is the point: They want a system that will discourage online services to publish protected speech that some find undesirable. They want platforms to publish less than what they would otherwise choose to publish, even when that speech is protected and nonactionable.

When Section 230 was passed in 1996, about 40 million people used the internet worldwide; by 2025, estimates ranged from five billion to north of six billion. In 1996, there were fewer than 300,000 websites; by last year, estimates ranged up to 1.3 billion. There is no workforce and no technology that can police the enormity of everything that everyone says.

Internet intermediaries—whether social media platforms, email providers or users themselves—are protected by Section 230 so that speech can flourish online.

David Greene

Announcement of a Guidance Seminar for Advertisers: "Putting Digital Advertising Risk Countermeasures into Practice: From Knowledge to Action, Using the MIC Guidance and Overcoming Practical Challenges" (co-hosted by the Ministry of Internal Affairs and Communications and four advertising industry associations)

1 day 7 hours ago
総務省 (Ministry of Internal Affairs and Communications)