Watch EFF's Talks from DEF CON 31


EFF had a blast at DEF CON 31! Thank you to everyone who came and supported EFF at the membership booth, participated in our contests, and checked out our various talks. We had a lot of things going on this year, and it was great to see so many new and familiar faces.

This year was our biggest DEF CON yet, with over 900 attendees starting or renewing an EFF membership at the conference. Thank you! Your support is the reason EFF can push for initiatives like protecting encrypted messaging, fighting back against illegal surveillance, and defending your right to tinker with and hack the devices you own. Of course, if you missed us at DEF CON, you can still become an EFF member and grab some new gear when you make a donation today!

Now you can catch up on the EFF talks from DEF CON 31! Below is a playlist of the talks EFF participated in, covering topics from digital surveillance and the world's dumbest cyber mercenaries to the UN Cybercrime Treaty and more. Check them out here:

Watch EFF Talks from DEF CON 31

Thank you to everyone in the infosec community who supports our work. DEF CON 32 will come sooner than we all expect, so hopefully we'll see you there next year!

Christian Romero

Get Real, Congress: Censoring Search Results or Recommendations Is Still Censorship


Are you a young person fighting back against bad bills like KOSA? Become an EFF member at a new, discounted Neon membership level specifically for you--stickers included! 

For the past two years, Congress has been trying to revise the Kids Online Safety Act (KOSA) to address criticisms from EFF, human and digital rights organizations, LGBTQ groups, and others, that the core provisions of the bill will censor the internet for everyone and harm young people. All of those changes fail to solve KOSA’s inherent censorship problem: As long as the “duty of care” remains in the bill, it will still force platforms to censor perfectly legal content. (You can read our analyses here and here.)

Despite never addressing this central problem, some members of Congress are convinced that a new change will avoid censoring the internet: KOSA’s liability is now theoretically triggered only for content that is recommended to users under 18, rather than content that they specifically search for. But that’s still censorship—and it fundamentally misunderstands how search works online. 


As a reminder, under KOSA, a platform would be liable for not “acting in the best interests of a [minor] user.” To do this, a platform would need to “tak[e] reasonable measures in its design and operation of products and services to prevent and mitigate” a long list of societal ills, including anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal behaviors. As we have said, this will be used to censor what young people and adults can see on these platforms. The bill’s coauthors agree, writing that KOSA “will make platforms legally responsible for preventing and mitigating harms to young people online, such as content promoting suicide, eating disorders, substance abuse, bullying, and sexual exploitation.”

Our concern, and the concern of others, is that this bill will be used to censor legal information and restrict the ability for minors to access it, while adding age verification requirements that will push adults off the platforms as well. Additionally, enforcement provisions in KOSA give power to state attorneys general to decide what is harmful to minors, a recipe for disaster that will exacerbate efforts already underway to restrict access to information online (and offline). The result is that platforms will likely feel pressured to remove enormous amounts of information to protect themselves from KOSA’s crushing liability—even if that information is not harmful.

The ‘Limitation’ section of the bill is intended to clarify that KOSA creates liability only for content that the platform recommends. In our reading, this is meant to refer to the content that a platform shows a user that doesn’t come from an account the user follows, is not content the user searches for, and is not content that the user deliberately visits (such as by clicking a URL). In full, the ‘Limitation’ section states that the law is not meant to prevent or preclude “any minor from deliberately and independently searching for, or specifically requesting, content,” nor should it prevent the “platform or individuals on the platform from providing resources for the prevention or mitigation of suicidal behaviors, substance use, and other harms, including evidence-informed information and clinical resources.” 

In layman’s terms, minors will supposedly still have the freedom to follow accounts and to search for and request any type of content, but platforms won’t have the freedom to share some types of content with them. Again, that fundamentally misunderstands how social media works—and it’s still censorship.



Courts Have Agreed: Recommendations Are Protected

If, as the bill’s authors write, they want to hold platforms accountable for “knowingly driving toxic, addicting, and dangerous content” to young people, why exempt search—which can also surface toxic, addicting, or dangerous content? We think this section was added for two reasons.

First, members of Congress have attacked social media platforms’ use of automated tools to present content for years, claiming that it causes any number of issues ranging from political strife to mental health problems. The evidence supporting those claims is unclear (and the reverse may be true). 

Second, and perhaps more importantly, the authors of the bill likely believe that pinning liability on recommendations will allow them to square the circle and get away with censorship while complying with the First Amendment. It will not.

Courts have affirmed that recommendations are meant to facilitate the communication and content of others and “are not content in and of themselves.” Making a platform liable for the content it recommends makes it liable for the content itself, not for the act of recommending, and that is unlawful. Platforms’ ability to “filter, screen, allow, or disallow content;” “pick [and] choose” content; and make decisions about how to “display,” “organize,” or “reorganize” content is protected by 47 U.S.C. § 230 (“Section 230”) and the First Amendment. (We have written about this in various briefs, including this one.) This “Limitation” in KOSA doesn’t make the bill any less censorious.

Search Results Are Recommendations

Practically speaking, there is also no clear distinction between “recommendations” and “search results.” The coauthors of KOSA seem to think that content which is shown as a result of a search is not a recommendation by the platform. But of course it is. Accuracy and relevance in search results are algorithmically generated, and any modern search method uses an automated process to determine the search results and the order in which they are presented, which it then recommends to the user. 
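The point that search is itself recommendation can be made concrete with a toy sketch. This is a hypothetical illustration, not any platform's actual system; the corpus, scoring, and function names are invented. Even the simplest search engine scores, filters, and orders content before showing it to the user:

```python
# Hypothetical sketch: even a minimal search engine algorithmically
# decides which posts appear and in what order -- i.e., it recommends.
# The corpus and the word-overlap scoring are invented for illustration.

posts = [
    "new study on transgender healthcare access",
    "weekend hiking photos",
    "healthcare policy debate heats up",
]

def search(query, corpus):
    """Rank posts by naive word overlap with the query, best first."""
    q_words = set(query.lower().split())
    scored = []
    for post in corpus:
        overlap = len(q_words & set(post.lower().split()))
        if overlap:                      # filter: drop posts with no match
            scored.append((overlap, post))
    scored.sort(reverse=True)            # order: most relevant first
    return [post for _, post in scored]

# The platform, not the user, decided what to show and in what order.
results = search("transgender healthcare", posts)
```

Every step here—filtering out some posts, ranking the rest—is an editorial choice made by the platform's code, which is exactly what KOSA's coauthors call a "recommendation" everywhere else in the bill.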

KOSA’s authors also assume, incorrectly, that content on social media can easily be organized, tagged, or described in the first place, such that it can be shown when someone searches for it, but not otherwise. But content moderation at infinite scale will always fail, in part because whether content fits into a specific bucket is often subjective in the first place.


For example: let’s assume that using KOSA, an attorney general in a state has made it clear that a platform that recommends information related to transgender healthcare will be sued for increasing the risk of suicide in young people. (Because trans people are at a higher risk of suicide, this is one of many ways that we expect an attorney general could torture the facts to censor content—by claiming that correlation is causation.) 

If a young person in that state searches social media for “transgender healthcare,” does this mean that the platform can or cannot show them any content about “transgender healthcare” as a result? How can a platform know which content is about transgender healthcare, much less whether the content matches the attorney general’s views on the subject, or whether they have to abide by that interpretation in search results? What if the user searches for “banned healthcare?” What if they search for “trans controversy?” (Most people don’t search for the exact name of the piece of content they want to find, and most pieces of content on social media aren’t “named” at all.) 

In this example, and in an enormous number of other cases, platforms can’t know in advance what content a person is searching for. Rather than risk showing something controversial that the person did not intend to find, platforms will remove that content entirely, from recommendations as well as search results. If liability exists for showing it, platforms will remove users’ ability to access all content related to a dangerous topic rather than risk showing it in the occasional instance when they can determine, for certain, that it is what the user is looking for. This blunt response will harm not only children who need access to information, but also adults who may seek the same content online.

“Nerd Harder” to Remove Content Will Never Work

Finally, as we have written before, it is impossible for platforms to know what types of content they would be liable for recommending (or showing in search results) in the first place. Because there is no definition of harmful or depressing content that doesn’t include a vast amount of protected expression, almost any content could fit into the categories that platforms would have to censor. This would include truthful news about what’s going on in the world, such as wars, gun violence, and climate change.

This Limitation section will have no meaningful effect on the censorial nature of the law. If KOSA passes, the only real option for platforms would be to institute age verification and ban minors entirely, or to remove any ‘recommendations’ and ‘search’ functions almost entirely for minors. As we’ve said repeatedly, these efforts will also impact adult users who either lack the ability to prove they are not minors or are deterred from doing so. Most smaller platforms would be pressured to ban minors entirely, while larger ones, with more money for content moderation and development, would likely block them from finding enormous swathes of content unless they have the exact URL to locate it. In that way, KOSA’s censorship would further entrench the dominant social media platforms.

Congress should be smart enough to recognize this bait-and-switch fails to solve KOSA’s many faults. We urge anyone who cares about free speech and privacy online to send a message to Congress voicing your opposition. 




Jason Kelley

The Federal Government’s Privacy Watchdog Concedes: 702 Must Change


The Privacy and Civil Liberties Oversight Board (PCLOB) has released its much-anticipated report on Section 702, a legal authority that allows the government to collect a massive amount of digital communications around the world and in the U.S. The PCLOB agreed with EFF and organizations across the political spectrum that the program requires significant reforms if it is to be renewed before its December 31, 2023 expiration. Of course, EFF believes that Congress should go further–including letting the program expire–in order to restore the privacy being denied to anyone whose communications cross international boundaries. 

PCLOB is an organization within the federal government appointed to monitor the impact of national security and law enforcement programs and techniques on civil liberties and privacy. Despite this mandate, the board has a history of tipping the scales in favor of the privacy-annihilating status quo. This history is exactly why the recommendations in its new report are such a big deal: the report says Congress should require individualized authorization from the Foreign Intelligence Surveillance Court (FISC) for any searches of 702 databases for U.S. persons. Oversight, even by the secretive FISC, would be a departure from the current system, in which the Federal Bureau of Investigation can, without warrant or oversight, search for communications to or from any of the millions of people in the United States whose communications have been vacuumed up by the mass surveillance program.

The report also recommends a permanent end to the legal authority that allows “abouts” collection, a search that allows the government to look at digital communications between two “non-targets”–people who are not the subject of the investigation–as long as they are talking “about” a specific individual.  The Intelligence Community voluntarily ceased this collection after increasing skepticism about its legality from the FISC. We agree with the PCLOB that it’s time to put the final nail in the coffin of this unconstitutional mass collection. 

Section 702 allows the National Security Agency to collect communications from all over the world. Although the authority supposedly prohibits targeting people on U.S. soil, people in the United States communicate with people overseas all the time and routinely have their communications collected and stored under this program. This results in a huge pool of what the government calls “incidentally” collected communications from Americans which the FBI and other federal law enforcement organizations eagerly exploit by searching without a warrant. These unconstitutional “backdoor” searches have happened millions of times and have continued despite a number of attempts by courts and Congress to rein in the illegal practice.

Along with over a dozen organizations, including the ACLU, Center for Democracy & Technology, Demand Progress, Freedom of the Press Foundation, Project on Government Oversight, and the Brennan Center, EFF lent its voice to the request that the following reforms be the bare minimum precondition for any reauthorization of Section 702:

  • Requiring the government to obtain a warrant before searching the content of Americans’ communications collected under intelligence authorities;
  • Establishing legislative safeguards for surveillance affecting Americans that is conducted overseas under Executive Order 12333–an authority that raises many of the same concerns as Section 702, as previously noted by PCLOB members;
  • Closing the data broker loophole, through which intelligence and law enforcement agencies purchase Americans’ sensitive location, internet, and other data without any legal process or accountability;
  • Bolstering judicial review in FISA-related proceedings, including by shoring up the government’s obligation to give notice when information derived from FISA is used against a person accused of a crime; and
  • Codifying reasonable limits on the scope of intelligence surveillance.

Use this handy tool to tell your elected officials: No reauthorization of 702 without drastic reform:

Take action

Tell Congress: End 702 Absent Serious Reforms

Matthew Guariglia


A garment in the shape known as chichi-bukuro ("breast pocket")—often seen on anime and manga characters but long said not to exist in reality—has actually been made, and it is drawing attention. Chichi-bukuro refers to the way ordinary clothing is drawn as if the fabric clings tightly to the wearer's breasts; in the words of the Pixiv encyclopedia, it looks "as if the clothes come with breast-shaped bags attached" (post by Rush11@コミトレD42b; Togetter). Because normal clothing does not cling to the shape of the bust, chichi-bukuro has been regarded as an expression unique to anime and manga. What is now trending on X is a cosplay school uniform that recreates this chichi-bukuro; its faithfulness to the drawn version was reportedly met with surprise and excitement by many viewers. Posts on X also discussed production techniques, such as three-dimensional draping, that could achieve the effect.




EFF to D.C. Circuit: Animal Rights Activists Shouldn’t Be Censored on Government Social Media Pages Because Agency Disagrees With Their Viewpoint


Intern Muhammad Essa contributed to this post.

EFF, along with the Foundation for Individual Rights and Expression (FIRE), filed a brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court ruling that upheld the censorship of public comments on a government agency’s social media pages. The district court’s decision is problematic because it undermines our right to freely express opinions on issues of public importance using a modern and accessible way to communicate with government representatives.

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH), arguing that NIH blocks its comments against animal testing in scientific research on the agency’s Facebook and Instagram pages, thus violating the First Amendment. NIH provides funding for research that involves testing on animals, from rodents to primates.

NIH claims to apply a general rule prohibiting public comments that are “off topic” to the agency’s social media posts—yet the agency implements this rule by employing keyword filters that include words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop. These words are commonly found in comments that express a viewpoint that is against animal testing and sympathetic to animal rights.
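A keyword filter of the kind described above can be sketched in a few lines. This is a hypothetical illustration—the block list and sample comments are invented, not NIH's actual configuration—but it shows how such a filter matches words with no regard for context:

```python
# Hypothetical sketch of a context-blind keyword filter like the one
# described above. The block list and comments are invented examples.

BLOCKED_WORDS = {"cruelty", "torture", "kill", "stop"}

def is_hidden(comment):
    """Hide any comment containing a blocked keyword, ignoring context."""
    return bool(set(comment.lower().split()) & BLOCKED_WORDS)

# A comment opposing animal testing is hidden...
hidden_critic = is_hidden("stop the cruelty of animal testing")
# ...but so is an on-topic comment that merely uses a blocked word.
hidden_neutral = is_hidden("this research could stop disease")
```

Because the filter sees only words, not meaning, it reliably suppresses one viewpoint's vocabulary while also sweeping in unrelated speech—the imprecision at the heart of EFF's argument below.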

First Amendment law makes it clear that when a government agency opens a forum for public participation, such as the interactive spaces of the agency’s social media pages, it is prohibited from censoring a particular viewpoint in that forum. Any speech restrictions that it may apply must be viewpoint-neutral, meaning that the restrictions should apply equally to all viewpoints related to a topic, not just to the viewpoint that the agency disagrees with.

EFF’s brief argues that courts must approach with skepticism a government agency’s claim that its “off topic” speech restriction is viewpoint-neutral and intended only to exclude irrelevant comments. How such a rule is implemented can reveal that it is in fact a guise for unconstitutional viewpoint discrimination. That is the case here, and the district court erred in ruling for the government.

For example, EFF’s brief argues that NIH’s automated keyword filters are imprecise—they are incapable of accurately implementing an “off topic” rule because they are incapable of understanding context and nuance, which is necessary when comparing a comment to a post. Also, NIH’s keyword filters and the agency’s manual enforcement of the “off topic” rule are highly underinclusive—that is, other people's comments that are “off topic” to a post are often allowed to remain on the agency’s social media pages. Yet PETA’s comments against animal testing are reliably censored.

Imprecise and underinclusive enforcement of the “off topic” rule suggests that NIH’s rule is not viewpoint-neutral but is really a means to block PETA activists from engaging with the agency online.

EFF’s brief urges the D.C. Circuit to reject the district court’s erroneous holding and rule in favor of the plaintiffs. This would protect everyone’s right to express their opinions freely online. The free exchange of opinions informs public policy and is a crucial characteristic of a democratic society. A genuine representative government must not be afraid of public criticism.

Sophia Cope