The House Intelligence Committee's Surveillance 'Reform' Bill is a Farce

1 day 2 hours ago

Earlier this week, the House Committee on the Judiciary (HJC) and the House Permanent Select Committee on Intelligence (HPSCI) each marked up a bill that would reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA): H.R. 6570, the Protect Liberty and End Warrantless Surveillance Act, in HJC, and H.R. 6611, the FISA Reform and Reauthorization Act of 2023, in HPSCI. The two bills take very different approaches. Both head to the House floor next week under a procedural rule called “Queen of the Hill,” under which the bill with the most votes is sent to the Senate for consideration.

While renewing any surveillance authority is a complicated issue, this choice is clear: we urge all Members to vote NO on the Intelligence Committee’s bill, H.R. 6611, the FISA Reform and Reauthorization Act of 2023.

Take action

TELL congress: Defeat this bad 702 Bill

On Nov. 16, HPSCI released a report calling for reauthorization of Section 702 with essentially superficial reforms. The bill that followed, H.R. 6611, was as bad as expected. It would renew Section 702’s mass surveillance authority for another eight years. It would create new authorities that the intelligence community has sought for years, but that have been denied by the courts. And it would continue the indiscriminate collection, for use by domestic law enforcement, of U.S. persons’ communications when they talk with people abroad. This was not the intention of this national security program, and people on U.S. soil should not have their communications collected without a warrant because of a loophole.

As a reminder, Section 702 was designed to allow the government to warrantlessly surveil non-U.S. citizens abroad for foreign intelligence purposes, but it also sweeps in the communications of the people on U.S. soil who talk with them. Increasingly, it’s this U.S. side of digital conversations that domestic law enforcement agencies trawl through—all without a warrant. FBI agents have been using the Section 702 databases to conduct millions of invasive searches for Americans’ communications, including those of protesters, racial justice activists, 19,000 donors to a congressional campaign, journalists, and even members of Congress.

Additionally, the HPSCI bill authorizes the use of this unaccountable and out-of-control mass surveillance program as a new way of vetting asylum seekers by sifting through their digital communications. According to a newly released Foreign Intelligence Surveillance Court (FISC) opinion, the government has sought some version of this authority for years, but was repeatedly denied it, only receiving court approval for the first time this year. Because the court opinion is so heavily redacted, it is impossible to know the current scope of immigration- and visa-related querying, or what broader proposal the intelligence agencies originally sought. 

This new authority would give immigration services the ability to audit entire communication histories before deciding whether an immigrant can enter the country. It could cost someone entrance to the United States based on, for instance, their own or a friend’s political opinions—as happened to a Palestinian Harvard student whose social media account was reviewed when he arrived in the U.S. to start his semester.

The HPSCI bill also includes a call “to define Electronic Communication Service Provider to include equipment.” Earlier this year, the FISA Court of Review released a highly redacted opinion documenting a fight over the government's attempt to subject an unknown company to Section 702 surveillance. The court ultimately agreed that, under the circumstances, the company did not qualify as an "electronic communication service provider" under the law. Now, the HPSCI bill would expand that definition to include a much broader range of providers, including those who merely provide hardware through which people communicate on the Internet. Even without knowing the details of the secret court fight, this represents an ominous expansion of 702's scope, which the committee introduced without any explanation or debate about its necessity.

By contrast, the House Judiciary Committee bill, H.R. 6570, the Protect Liberty and End Warrantless Surveillance Act, would actually address a major problem with Section 702 by banning warrantless backdoor searches of Section 702 databases for Americans’ communications. This bill would also prohibit law enforcement from purchasing Americans’ data that they would otherwise need a warrant to obtain, a practice that circumvents core constitutional protections. Importantly, this bill would also renew this authority for only three more years, giving Congress another opportunity to revisit how the reforms are implemented and to make further changes if the government is still abusing the program.

EFF has long fought for significant changes to Section 702. By the government’s own numbers, violations are still occurring at a rate of more than 4,000 per year. Our government, with the FBI in the lead, has come to treat Section 702—enacted by Congress for the surveillance of foreigners on foreign soil—as a domestic surveillance program of Americans. This simply cannot be allowed to continue. While we will continue to push for further reforms to Section 702, we urge all members to reject the HPSCI bill.

Hit the button below to tell your elected officials to vote against this bill:

Take action

TELL congress: Defeat this bad 702 Bill

Related Cases: Jewel v. NSA
India McKinney

In Landmark Battle Over Free Speech, EFF Urges Supreme Court to Strike Down Texas and Florida Laws that Let States Dictate What Speech Social Media Sites Must Publish

1 day 22 hours ago
Laws Violate First Amendment Protections that Help Create Diverse Forums for Users’ Free Expression

WASHINGTON D.C.—The Electronic Frontier Foundation (EFF) and five organizations defending free speech today urged the Supreme Court to strike down laws in Florida and Texas that let the states dictate certain speech social media sites must carry, violating the sites’ First Amendment rights to curate content they publish—a protection that benefits users by creating speech forums accommodating their diverse interests, viewpoints, and beliefs.

The court’s decisions about the constitutionality of the Florida and Texas laws—the first laws to inject government mandates into social media content moderation—will have a profound impact on the future of free speech. At stake is whether Americans’ speech on social media must adhere to government rules or be free of government interference.

Social media content moderation is highly problematic, and users are rightly often frustrated by the process and concerned about private censorship. But retaliatory laws allowing the government to interject itself into the process, in any form, raise serious First Amendment and broader human rights concerns, said EFF in a brief filed with the National Coalition Against Censorship, the Woodhull Freedom Foundation, Authors Alliance, Fight for The Future, and First Amendment Coalition.

“Users are far better off when publishers make editorial decisions free from government mandates,” said EFF Civil Liberties Director David Greene. “These laws would force social media sites to publish user posts that are, at best, irrelevant and, at worst, false, abusive, or harassing.

“The Supreme Court needs to send a strong message that the government can’t force online publishers to give their favored speech special treatment,” said Greene.

Social media sites should do a better job at being transparent about content moderation and self-regulate by adhering to the Santa Clara Principles on Transparency and Accountability in Content Moderation. But the Principles are not a template for government mandates.

The Texas law broadly prohibits online publishers from declining to publish others’ speech based on anyone’s viewpoint expressed on or off the platform, even when that speech violates the sites' rules. Content moderation practices that can be construed as viewpoint-based, which is virtually all of them, are barred by the law. Under it, sites that bar racist material, knowing their users object to it, would be forced to carry it. Sites catering to conservatives couldn’t block posts pushing liberal agendas.

The Florida law requires that social media sites grant special treatment to electoral candidates and “journalistic enterprises” and not apply their regular editorial practices to them, even if they violate the platforms' rules. Publishers may not, at any point before an election, cancel candidates’ accounts or downgrade their posts or posts about them, giving candidates free rein to spread misinformation or post about content outside the site’s subject matter focus. Users not running for office, meanwhile, enjoy no similar privilege.

What’s more, the Florida law requires sites to disable algorithms with respect to political candidates, so their posts appear chronologically in users’ feeds, even if a user prefers a curated feed. And, in addition to dictating what speech social media sites must publish, the laws also place limits on sites' ability to amplify content, use algorithmic ranking, and add commentary to posts.

“The First Amendment generally prohibits government restrictions on speech based on content and viewpoint and protects private publishers’ ability to select what they want to say,” said Greene. “The Supreme Court should not grant states the power to force their preferred speech on users who would choose not to see it.”

“As a coalition that represents creators, readers, and audiences who rely on a diverse, vibrant, and free social media ecosystem for art, expression, and knowledge, the National Coalition Against Censorship hopes the Court will reaffirm that government control of media platforms is inherently at odds with an open internet, free expression, and the First Amendment,” said Lee Rowland, Executive Director of National Coalition Against Censorship.

“Woodhull is proud to lend its voice in support of online freedom and against government censorship of social media platforms,” said Ricci Joy Levy, President and CEO at Woodhull Freedom Foundation. “We understand the important freedoms that are at stake in this case and implore the Court to make the correct ruling, consistent with First Amendment jurisprudence.”

"Just as the press has the First Amendment right to exercise editorial discretion, social media platforms have the right to curate or moderate content as they choose. The government has no business telling private entities what speech they may or may not host or on what terms," said David Loy, Legal Director of the First Amendment Coalition.

For the brief:
https://www.eff.org/document/eff-brief-moodyvnetchoice

Contact: David Greene, Civil Liberties Director, davidg@eff.org
Karen Gullo

Think Twice Before Giving Surveillance for the Holidays

2 days 1 hour ago

With the holidays upon us, it's easy to default to giving the tech gifts that retailers tend to push on us this time of year: smart speakers, video doorbells, Bluetooth trackers, fitness trackers, and other connected gadgets are all very popular gifts. But before you give one, think twice about what you're opting that person into.

A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair).

One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for.

For example, a smart speaker might seem like a fun stocking stuffer. But unless the giftee is tapped deeply into tech news, they likely don't know there's a chance for human review of any recordings. They also may not be aware that some of these speakers collect an enormous amount of data about how you use them, typically for advertising–though any connected device might have surprising uses to law enforcement, too.

There's also the problem of tech companies getting acquired, as we've seen recently with Tile, iRobot, and Fitbit. The new owner can suddenly change the privacy and security terms a user agreed to with the old business when they started using the product.

And let's not forget about kids, long subjected to surveillance from elves and their managers. Electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids’ smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. 

What To Do Instead

While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories. This way, instead of just buying any old smart device at random because it's on sale, you at least have the context of what sort of data it might collect, how the company has behaved in the past, and what sorts of potential dangers to consider. U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches.

Finally, when shopping it's worth keeping two last details in mind. First, some “smart” devices can be used without their corresponding apps, which should be viewed as a benefit, because we've seen before that app-only gadgets can be bricked by a shift in company policies. Second, remember that not everything needs to be “smart” in the first place; often these features add little to the usability of the product.

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen.

If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible. Take a few minutes after they've unboxed the item and walk through the setup process with them. Some options to look for:

  • Enable two-factor authentication when available to help secure their new account.
  • If there are any social sharing settings—particularly popular with fitness trackers and game consoles—disable any unintended sharing that might end up on a public profile.
  • Look for any options to enable automatic updates. This is usually enabled by default these days, but it's always good to double-check.
  • If there's an app associated with the new device (and there often is), help them choose which permissions to allow, and which to deny. Keep an eye out for location data, in particular, especially if there's no logical reason for the app to need it. 
  • While you're at it, help them with other settings on their phone, and make sure to disable the phone’s advertising ID.
  • Speaking of advertising IDs, some devices have their own advertising settings, usually located somewhere like Settings > Privacy > Ad Preferences. If there's an option to disable any ad tracking, take advantage of it. While you're in the settings, you may find other device-specific privacy or data usage settings. Take that opportunity to opt out of any tracking and collection when you can. This will be very device-dependent, but it's especially worth doing on anything you know tracks loads of data, like smart TVs.
  • If you're helping set up a video or audio device, like a smart speaker or robot vacuum, poke around in the options to see if you can disable any sort of "human review" of recordings.

If, during the setup process, you notice some gaps in their security hygiene, it might also be a great opportunity to help them adopt other security measures, like setting up a password manager.

Giving the gift of electronics shouldn’t come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

Thorin Klosowski

EFF Reminds the Supreme Court That Copyright Trolls Are Still a Problem

2 days 2 hours ago

At EFF, we spend a lot of time calling out the harm caused by copyright trolls and protecting internet users from their abuses. Copyright trolls are serial plaintiffs who use search tools to identify technical, often low-value infringements on the internet, and then seek nuisance settlements from many defendants. These trolls take advantage of some of copyright law’s worst features—especially the threat of massive, unpredictable statutory damages—to impose a troublesome tax on many uses of the internet.

On Monday, EFF continued the fight against copyright trolls by filing an amicus brief in Warner Chappell Music v. Nealy, a case pending in the U.S. Supreme Court. The case doesn’t deal with copyright trolls directly. Rather, it involves the interpretation of the statute of limitations in copyright cases. Statutes of limitations are laws that limit the time after an event within which legal proceedings may be initiated. The purpose is to encourage plaintiffs to file their claims promptly, and to avoid stale claims and unfairness to defendants when time has passed and evidence might be lost. For example, in California, the statute of limitations for a breach of contract claim is generally four years.

U.S. copyright law contains a statute of limitations of three years “after the claim accrued.” Warner Chappell Music v. Nealy deals with the question of exactly what this means. Warner Chappell Music, the defendant in the case, argued that the claim accrued when the alleged infringement occurred, giving a plaintiff three years after that to recover damages. Plaintiff Nealy argued that his claim didn’t “accrue” until he discovered the alleged infringement, or reasonably should have discovered it. This “discovery rule” would permit Nealy to recover damages for acts that occurred long ago—much longer than three years—as long as he filed suit within three years of that “discovery.”

How does all this affect copyright trolls? The “discovery rule” lets trolls reach far, far back in time to find alleged infringements (such as a photo posted on a website), and plausibly threaten their targets with years of accumulated damages. All they have to do is argue that they couldn’t reasonably have discovered the infringement until recently. The trolls’ targets would have trouble defending against ancient claims, and be more likely to have to pay nuisance settlements.

EFF’s amicus brief provided the court with an overview of the copyright trolling problem and gave examples of types of trolls. The brief then showed how an unlimited look-back period for damages under the discovery rule adds risk and uncertainty for the targets of copyright trolls and would encourage abuse of the legal system.

EFF’s brief in this case is a little unusual—the case doesn’t directly involve technology or technology companies (except indirectly, to the extent they could be targets of copyright trolls). The party we’re supporting is a leading music publishing company. Other amici on the same side include the RIAA, the U.S. Chamber of Commerce, and the Association of American Publishers. Because statutes of limitations are fundamental to the justice system, this uncommon coalition perhaps isn’t that surprising.

In many previous copyright troll cases, the courts have caught on to their abuse of the judicial system, and taken steps to shut down the trolling. EFF filed its brief in this case to ask the Supreme Court to extend these judicial safeguards, by holding that copyright infringement damages can only be recovered for acts occurring three years before the filing of the complaint. An indefinite statute of limitations would throw gasoline on the copyright troll fire and risk encouraging new trolls to come out from under the figurative bridge.

Related Cases: Warner Chappell Music v. Nealy
Michael Barclay

Meta Announces End-to-End Encryption by Default in Messenger

2 days 4 hours ago

Yesterday Meta announced that they have begun rolling out default end-to-end encryption for one-to-one messages and voice calls on Messenger and Facebook. While there remain some privacy concerns around backups and metadata, we applaud this decision. It will bring strong encryption to over one billion people, protecting them from dragnet surveillance of the contents of their Facebook messages. 

Governments are continuing to attack encryption with laws designed to weaken it. With authoritarianism on the rise around the world, encryption is more important with each passing day. Strong default encryption, sooner, might have prevented a woman in Nebraska from being prosecuted for an abortion based primarily on evidence from her Facebook messages. This update couldn’t have come at a more important time. This introduction of end-to-end encryption on Messenger means that the two most popular messaging platforms in the world, both owned by Meta, will now include strong encryption by default. 

For now this change will only apply to one-to-one chats and voice calls, and will be rolled out to all users over the next few months, with default encryption of group messages and Instagram messages to come later. Regardless, this rollout is a huge win for user privacy across the world. Users will also have many more options for messaging security and privacy, including how to back up their encrypted messages safely, turn off “read receipts,” and enable “disappearing” messages. Choosing between these options is important for your privacy and security model, and we encourage users to think about what they expect from their secure messenger.

Backing up securely: the devil is in the (Labyrinthian) details

The technology behind Messenger’s end-to-end encryption will continue to be a slightly modified version of the Signal protocol (the same protocol WhatsApp uses). When it comes to building secure messengers—or, in this case, porting a billion users onto secure messaging—the details are the most important part. Here, the encrypted backup options provided by Meta are the biggest detail: in addressing backups, how do they balance security with usability and availability?

Backups are important for users who expect to log into their account from any device and retrieve their message history by default. From an encryption standpoint, how backups are handled can break certain guarantees of end-to-end encryption. WhatsApp, Meta’s other messaging service, only began offering end-to-end encrypted backups a few years ago. Meta is also rolling out an end-to-end encrypted backup system for Messenger, which they call Labyrinth.

Encrypted backups mean your backed-up messages will be encrypted on Facebook servers, and won’t be readable without your private key. Enabling encrypted backups (necessarily) breaks forward secrecy, in exchange for usability. If an app is forward-secret, then you could delete all your messages, hand someone else your phone, and they would not be able to recover them. This tradeoff is another factor you should weigh when choosing how to use secure messengers that give you the option.
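
To make that tradeoff concrete, here is a minimal sketch (our illustration, not Meta’s actual code) of the symmetric hash ratchet idea used in Signal-style protocols: each message key is derived from a chain key that is then advanced through a one-way hash, so once old keys are deleted, past messages cannot be re-derived. Any backup that preserves message history necessarily reintroduces a way to recover them.

    import hashlib

    def advance(chain_key: bytes) -> tuple[bytes, bytes]:
        """Derive a one-time message key, then ratchet the chain forward.

        SHA-256 is one-way: the new chain key reveals nothing about the
        old one, so a deleted message key is unrecoverable.
        """
        message_key = hashlib.sha256(chain_key + b"\x01").digest()
        next_chain_key = hashlib.sha256(chain_key + b"\x02").digest()
        return message_key, next_chain_key

    # Hypothetical shared secret from an initial handshake.
    chain = hashlib.sha256(b"handshake secret").digest()
    for i in range(3):
        msg_key, chain = advance(chain)  # the old chain value is discarded
        print(f"message {i} key: {msg_key.hex()[:16]}...")
    # Whoever seizes the device now holds only the latest chain key and
    # cannot reconstruct any earlier message key.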

If you elect to use encrypted backups, you can set a 6-digit PIN to secure your private key, or back your private keys up to cloud storage such as iCloud or Google Cloud. If you back up keys to a third party, those keys are available to that service provider and could be retrieved by law enforcement with a warrant, unless that cloud account is also encrypted. The 6-digit PIN provides a bit more security than the cloud back-up option, but at the cost of usability for users who might not be able to remember a PIN.
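
A 6-digit PIN on its own would be weak protection: there are only 1,000,000 possibilities, and even a deliberately slow key derivation function can’t stretch that far, which is why schemes like this generally pair the PIN with strict, server-enforced guess limits. The sketch below (our illustration with assumed scrypt parameters, not Meta’s actual scheme) estimates how quickly an unthrottled attacker could search the entire PIN space.

    import hashlib
    import os
    import time

    salt = os.urandom(16)

    def pin_key(pin: str) -> bytes:
        # Illustrative scrypt parameters; real systems tune these carefully.
        return hashlib.scrypt(pin.encode(), salt=salt,
                              n=2**14, r=8, p=1, dklen=32)

    # Time a small sample of guesses, then extrapolate to all 10**6 PINs.
    trials = 20
    start = time.time()
    for guess in range(trials):
        pin_key(f"{guess:06d}")
    per_guess = (time.time() - start) / trials
    hours = per_guess * 1_000_000 / 3600
    print(f"~{per_guess:.3f}s per guess, so roughly {hours:.1f} hours to "
          f"try every 6-digit PIN on one core, absent any rate limiting")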

Choosing the right secure messenger for your use case

There are still significant concerns about metadata in Messenger. By design, Meta has access to a lot of unencrypted metadata, such as who sends messages to whom, when those messages were sent, and data about you, your account, and your social contacts. None of that will change with the introduction of default encryption. For that reason we recommend that anyone concerned with their privacy or security consider their options carefully when choosing a secure messenger.

Cooper Quintin

Speaking Freely: Alison Macrina

3 days ago

Cohn: Alright, we’re doing a Speaking Freely Interview with Alison- Alison why don’t you say your name?

Alison Macrina, like Ballerina

Cohn: From the Library Freedom Project- and an EFF Award Winner 2023! Alright, let’s get into it. What does freedom of speech mean to you, Alison?

Well, to me it means the freedom to seek information, to use it, to speak it, but specifically without fear of retribution from those in power. And in LFP (Library Freedom Project) we’re really particularly concerned about how free speech and power relate. In the US, I think about power that comes from, not just the government, but also rich individuals and how they use their money to influence things like free speech, as well as corporations. I also think about free speech in terms of how it allows us to define the terms of public debate and conversation. And how also we can use it to question and shift the status quo to, in my view, more progressive ends. I think the best way that we can use our speech is using it to challenge and confront power. And identifying power structures. I think those power structures are really present in how we talk about speech. I’ve spent a lot of time thinking about all the big money that’s involved with shaping speech like the Koch brothers, etc, and how they’re influencing the culture wars. Which is why I think it’s really important, when I think about free speech, to think about things like social and economic justice. In LFP we talk about information democracy – that’s like the EFF Award that we got – and what that means to us is about how free expression, access, privacy, power, and justice interact. It’s about recognizing the different barriers to free expression, and what is actually being said, and also prioritizing the collective and our need to be able to criticize and hold accountable the people with power so that we can make a better world. 

Cohn: One of the things I think the Library Freedom Project does is really talk about the ability to access information as part of freedom of expression. Sometimes we only think about it as the speaking part, the part where it goes out, and I think one of the things that LFP really does is elevate the part where you get access to information, which is equally, and importantly, a part of free speech. Is that something you want to talk about a little more?

I think it’s one of the things that make libraries so special, right? It’s like what else do we have in our society that is a space that is just dedicated to information access? You know, anybody can use the library. Libraries exist in every community in the country. There’s all kinds of little sound bites about that, like, “there’s more libraries than there are McDonalds,” or, “there’s more libraries than Starbucks,” and what I think is also really unique and valuable about libraries is that they’re a public good that’s not means-tested. So in other words, they show up in poor communities, they’re in rich communities, they’re in middle-class communities. Most other public goods – if they exist – are only for the super, super poor. So it’s this, kind of… at its best… libraries can be such an equalizer. Some of the things we do in Library Freedom Project, we try to really push what the possibilities are for that kind of access. So offering trainings for librarians that expand on our understanding of free speech and access and privacy. Things like helping people understand artificial intelligence and algorithmic literacy. What are these tools? What do they mean? How do they work? Where are they in use? So helping librarians understand that so they can teach their communities about it. We try to think creatively about – what are the different kinds of technology in use in our world and how can librarians be the ones to offer better information about them in our communities?

Cohn: What are the qualities that make you passionate about freedom of expression or freedom of speech? 

I mean it’s part of why I became a librarian. I don’t remember when or why it was what I wanted to do. I just knew it was what I wanted. I had like this sort of Lloyd Dobler “Say Anything” moment where he’s like “I don’t want to buy anything that’s bought, sold, or made. I don’t want to sell anything that’s sold, bought, or made.” You know, I knew I wanted to do something in the public good. And I loved to read. And I loved to have an opinion and talk. And I felt like the library was the place that, not only where I could do that, but was a space that just celebrated that. And I think especially, with all of the things that are happening in the world now, libraries are a place where we can really come together around ideas, we can expand our ideas, we can get introduced to ideas that are different from our own. I think that’s really extraordinary and super rare. I’ve always just really loved the library and wanted to do it for my life. And so that’s why I started Library Freedom Project.

Cohn: That’s wonderful. Let’s talk a little about online speech and regulation. How do you think about online speech and regulation and how we should think about those issues? 

Well, I think we’re in a really bad position about it right now because, to my mind, there was a too-long period of inaction by these companies. And I think that now a decade or so of inaction has created the conditions for a really harmful information movement. And now, anything we do has unintended consequences. Content moderation is obviously extremely important- it’s an important public demand. I think it should be transparent and accountable. But with all of these harmful information movements, every attempt to regulate them that I have seen has just resulted in people becoming hardened in their positions.

This morning, for example, I was listening to the Senate Judiciary Hearings on book banning – because I’m a nerd – and it was a mess. It ended up not even really being about the book banning issue – which is a huge, huge issue in the library world – but it was all these Republican Senators talking about how horrible it was that the Biden administration was suppressing different kinds of COVID misinfo and disinfo. And they didn’t call it that, obviously, they called it “information” or “citizen science” or whatever. And it’s true that the Biden administration did do that – they made those demands of Facebook and so what were the results? It didn’t stop any of that disinformation. It didn’t change anybody’s minds about it. I think another big failure was Facebook and other companies trying to react to fake news by labeling stuff. And that was just totally laughable. And a lot of it was really wrong. You know, they were labeling all these leftwing outlets as Russian propaganda. I think that I don’t really know what the solution is to dealing with all of that. 

I think, though, that we’re at a place where the toothpaste is already so far out of the tube that I don’t know that any amount of regulation of it is going to be effective. I wish that those companies were regulated like public resources. I think that would make for a big shift. I don’t think companies should be making those kinds of decisions about speech. It’s such a huge problem, especially thinking about how it plays out for us at the local level in libraries- like because misinfo and disinfo are so popular, now we have people who request those materials from the library. And librarians have to make the decision- are we going to give in to public demand and buy this stuff or are we going to say, no, we are curators of information and we care about truth? We’re now in this position that because of this environment that’s been created outside of us, we have to respond to it. And it’s really hard- we’re also facing, relatedly, a massive rightwing assault on the library. A lot of people are familiar with this showing up as book bans, but it’s legislation, it’s taking over Boards, and all these other things. 

Cohn: In what kinds of situations, if any, is it appropriate for governments or companies to limit speech? And I think they’re two separate questions, governments on the one hand and companies on the other.

I think that, you know, Alex Jones should not be allowed to say that Sandy Hook was a hoax – obviously, he’s facing consequences for that now. But the damage was done. Companies are tricky, because on the one hand, I think that different environments should be able to dictate the terms of how their platforms work. Like LFP is technically a company, and you’re not coming on any of my platforms and saying Nazi shit. But I also don’t want those companies to be arbiters of speech. They already are, and I think it’s a bad thing. I think we have to be really careful about government regulation of speech. Because obviously its unintended consequences – or sometimes its intended consequences – are always harmful to marginalized people.

Part of what motivated me to care about free speech is, I’ve been a political activist most of my life, on the left, and I am a big history nerd. And I paid a lot of attention to, historically, the way that leftist movements - how their speech has been marginalized and censored. From the Red Scare to anti-war speech. And I also look at a lot of what is happening now with the repression after the 2020 uprising, the No Cop City people just had this huge RICO indictment come down. And that is all speech repression that impacts things that I care about. And so I don’t want the government to intervene in any way there. At the same time, white supremacy is a really big problem. It has very real material effects and harms people. And one way this is a really big issue in my world, is part of the rightwing attack on libraries is, there is a bad faith free speech effort among them. They talk about free speech a lot. They talk about [how] they want their speech to be heard. But what they actually mean is, they want to create a hostile environment for other people. And so this is something that I end up feeling really torn about. Because I don’t want to see anyone go to prison for speech. I don’t want to see increased government regulation of speech. But I also think that allowing white supremacists to use the library meeting room or have their events there creates an environment where marginalized people just don’t go. I’m not sure what the responsible thing for us to do is. But I think that thinking about free speech outside of the abstract – thinking about the real material consequences that it has for people, especially in the library world – a lot of civil libertarians like to say, “you just respond with more speech.” And it’s like, well, that’s not realistic. You can’t easily do that especially when you’re talking about people who will cause some harm to these communities. One thing I do think, one reasonable speech regulation, is that I don’t think cops should be allowed to lie. And they are allowed, so we should do something about that.

Cohn: Who is your free speech hero?

Well, okay, I have a few. Number one is so obvious that I feel like it’s trite to say, but, duh, Chelsea Manning. Everyone says Chelsea Manning, right? But we should give her her flowers again and again. Her life has been shaped by the decisions that she made about the things that she had to say in the public interest. I think that all whistleblowers in general are people that I have enormous respect for. People who know there are going to be consequences for their speech and do it anyway. And will sacrifice themselves for public good – it’s astounding. 

I also am very fortunate to be surrounded by free speech heroes all the time who are librarians. Not just in the nature of the work of the library, like the everyday normal thing, but also in the environment that we’re in right now. Because they are constantly pushing the bounds of public conversation about things like LGBT issues and racial justice and other things that are social goods, under extremely difficult conditions. Some of them are like, the only librarian in a rural community where, you know, the Proud Boys or the three percenters or whatever militant group is showing up to protest them, is trying to defund their library, is trying to remove them from their positions, is trying to get the very nature of the work criminalized, is trying to redefine what “obscenity” means. And these people, under those conditions, are still pushing for free speech and I think that’s amazing.

And then the third one I’ll say is, I really try to keep an internationalist approach, and think about what the rest of the world experiences, because we really, even as challenging as things are in the US right now, we have it pretty good. So, when I was part of the Tor Project I got to go to Uganda with Tor to meet with some different human rights activists and talk to them about how they used Tor and help them with their situations. And I met all of these amazing Ugandan environmental activists who were fighting the construction of a pipeline – a huge pipeline from Tanzania to Uganda. And these are some of the world’s poorest people fighting some of the biggest corporations and Nation-States – because the US, Israel, and China all have a major interest in this pipeline. And these are people who were publishing anonymous blogs, with the use of Tor, under extreme threat. Many of them would get arrested constantly. Members of their organization would get disappeared for a few days. And they were doing it anyway, often with the knowledge that it wasn’t even going to change anything. Which just really was mind-blowing. And I stop and think about that a lot, when I think about all the issues that we have with free speech here. Because I think that those are the conditions that, honestly, most of the world is operating under, and those people are everyday heroes and they need to get their flowers. 

Cohn: Terrific, thank you Alison, for taking the time. You have articulated many of the complexities of the current place that we are and a few truths that we can hold, so thank you.

Cindy Cohn

The Combined Federal Campaign Pledge Period is Closing Soon!

3 days 1 hour ago

The Combined Federal Campaign (CFC) closes on January 15, 2024! U.S. federal employees and retirees can make a pledge to help support EFF’s lawyers, activists, and technologists fight for user rights online.

If you’re a U.S. federal employee or retiree, giving to EFF through the CFC is easy! Just head over to GiveCFC.org and use our ID 10437. Once there, click DONATE to give via payroll deduction, credit/debit, or an e-check. If you have a renewing pledge, you can increase your support as well!

Last year, 175 members of the CFC community raised over $34,000 for EFF's initiatives fighting for free expression and privacy online. But in a year with so many threats to our digital rights popping up, we need your support now more than ever.

With support from those who pledged through the CFC last year, EFF has:

  • Made great strides in passing protections for the right to repair your tech, with the combined strength of innovation advocates around the country.
  • Launched our Red Flag Machine, a quiz that illustrates the inaccuracies of student monitoring tools and the surveillance many students face.
  • Exposed the sketchy malware that comes pre-installed on many low-budget tablets purchased from vendors like Amazon.
  • Pushed California to limit law enforcement’s over-sharing of license plate reader data with out-of-state and federal agencies.
  • Authored Privacy First, EFF’s guide to a comprehensive data privacy law, which would fix many of the underlying issues of today’s internet.

Federal employees and retirees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437 when you make a pledge today!

Christian Romero

The Latest EU Media Freedom Act Agreement Is a Bad Deal for Users

3 days 2 hours ago

The European Parliament and Member States’ representatives last week negotiated a controversial special status for media outlets that are active on large online platforms. The EU Media Freedom Act (EMFA), though well-intended, has significant flaws. By creating a special class of privileged self-declared media providers whose content cannot be removed from big tech platforms, the law not only changes company policies but risks harming users in the European Union (EU) and beyond. 

Fostering Media Plurality: Good Intentions 

Last year, the EU Commission presented the EMFA as a way to bolster media pluralism in the EU. It promised increased transparency about media ownership and safeguards against government surveillance and the use of spyware against journalists—real dangers that EFF has warned against for years. Some of these aspects are still in flux and remain up for negotiation, but the political agreement on EMFA’s content moderation provisions could erode public trust in media and jeopardize the integrity of information channels. 

Content Hosting by Force: Bad Consequences 

Millions of EU users trust that online platforms will take care of content that violates community standards. But despite concerns raised by EFF and other civil society groups, Article 17 of the EMFA enforces a 24-hour content moderation exemption for media, effectively forcing platforms to host that content.

This “must carry” rule prevents large online platforms like X or Meta, owner of Facebook, Instagram, and WhatsApp, from removing or flagging media content that breaches community guidelines. If the deal becomes law, it could undermine equality of speech, fuel disinformation, and threaten marginalized groups. It also raises serious concerns about government interference in editorial decisions.

Imagine signing up to a social media platform committed to removing hate speech, only to find that EU regulations prevent platforms from taking any action against it. Platforms must instead create a special communication channel to discuss content restrictions with news providers before any action is taken. This approach not only undermines platforms’ autonomy in enforcing their terms of use but also jeopardizes the safety of marginalized groups, who are often targeted by hate speech and propaganda. This policy could also allow orchestrated disinformation to remain online, undermining one of the core goals of EMFA to provide more “reliable sources of information to citizens”.  

Bargaining Hell: Platforms and Media Companies Negotiating Content  

Not all media providers will receive this special status. Media actors must self-declare their status on platforms, and demonstrate adherence to recognized editorial standards or affirm compliance with regulatory requirements. Platforms will need to ensure that most of the reported information is publicly accessible. Also, Article 17 is set to include a provision on AI-generated content, with specifics still under discussion. This new mechanism puts online platforms in a powerful yet precarious position: deciding the status of a wide range of media actors.

It’s likely that the must-carry approach will lead to a perplexing bargaining situation in which influential media outlets and platforms negotiate over which content remains visible. Media outlets have strong pecuniary interests in pursuing a fast-track communication channel and making sure that their content is always visible, potentially at the expense of smaller providers.

Implementation Challenges 

It’s positive that negotiators listened to some of our concerns and added language to safeguard media independence from political parties and governments. However, we remain concerned about the enforcement reality and the potential exploitation of the self-declaration mechanism, which could undermine the equality of free speech and democratic debate. While lawmakers stipulated in Article 17 that the EU Digital Services Act remains intact and that platforms are free to shorten the suspension period in crisis situations, the practical implementation of the EMFA will be a challenge. 

Christoph Schmon

Digital Rights Groups Urge Meta to Stop Silencing Palestine

3 days 12 hours ago

Legal intern Muhammad Essa Fasih contributed to this post.

In the wake of the October 7 attack on Israel and the ensuing backlash against Palestine, Meta has engaged in unjustified content and account takedowns on its social media platforms. This has suppressed the voices of journalists, human rights defenders, and many others concerned about or directly affected by the war.

This is not the first instance of biased moderation of content related to Palestine and the broader MENA region. EFF has documented numerous instances over the past decade in which platforms have seemingly turned their backs on critical voices in the region. In 2021, when Israel was forcibly evicting Palestinian families from their homes in Jerusalem, international digital and human rights groups including EFF partnered in a campaign to hold Meta to account. These demands were backed by prominent signatories, and later echoed by Meta’s Oversight Board.

The campaign—along with other advocacy efforts—led to Meta agreeing to an independent review of its content moderation activities in Israel and Palestine, published in October 2022 by BSR. The BSR audit was a welcome development in response to our original demands; however, we have yet to see its recommendations fully implemented in Meta’s policies and practices.

The rest of our demands went unmet. Therefore, in the context of the current crackdown on pro-Palestinian voices, EFF and 17 other digital and human rights organizations are issuing an updated set of demands to ensure that Meta considers the impact of its policies and content moderation practices on Palestinians, and takes serious action to ensure that its content interventions are fair, balanced, and consistent with the Santa Clara Principles on Transparency and Accountability in Content Moderation.

Why it matters

The campaign is crucial for many reasons ranging from respect for free speech and equality to prevention of violence.

Free public discourse plays an important role in global conflicts in that it has the ability to affect the decision making of those occupying decisive positions. Dissemination of information and public opinion can reflect the majority opinion and can build the necessary pressure on individuals in positions of power to make democratic and humane decisions. Borderless platforms like Meta, therefore, have colossal power to shape narratives across the globe. In order to reflect a true picture of the majority public opinion, it is essential that these platforms allow for a level playing field for all sides of a conflict.

These leviathan platforms have the power and responsibility to refuse to succumb to unjustifiable government demands intended to skew the discourse in favor of those governments’ geopolitical and economic interests. There is already a significant imbalance between the government of Israel and the Palestinian people, particularly in their economic and geopolitical influence. Adding to that, suppression of information coming out of or about the weaker party has the potential to aid and abet further suffering.

Meta’s censorship of content showing the scale of current devastation and suffering in Palestine, by loosely applying categories like nudity, sexual activity, and graphic content, interferes with the right to information and free expression at a time when the UN is urging the entire international community to work to "mitigate the risk of genocide" and those rights are more needed than ever. According to some estimates, over 90% of pro-Palestinian content has been deleted following Israel’s requests since October 7.

As we’ve said many times before, content moderation is impossible at scale, but clear signs and a record of discrimination against certain groups escape justification and need to be addressed immediately.

In light of all this, it is imperative that interested organizations continue to play their role in holding Meta to account for such glaring discrimination. Meta must cooperate and meet these reasonable demands if it wants to present itself as a platform that respects free speech. It is about time Mark Zuckerberg started to back his admiration for Frederick Douglass’ quote on free speech with some material practice.

Jillian C. York

Our “How to Fix the Internet” Podcast is an Anthem Awards Finalist— Help Make It a Winner!

3 days 22 hours ago

EFF’s “How to Fix the Internet” podcast is a finalist in the Anthem Awards Community Voice competition, and we need YOUR help to put it over the top!

The Anthem Awards honors the purpose- and mission-driven work of people, companies, and organizations around the world. By amplifying the voices (and podcasts) that spark global change, the awards seek to inspire others to take action in their own community.

That’s exactly why we launched “How to Fix the Internet” — to offer a better way forward. Through curious conversations with some of the leading minds in law and technology, we explore creative solutions to some of today’s biggest tech challenges. We want our listeners to become deeply informed on vital technology issues and join the movement working to build a better technological future.

This nomination is a testament to the support of the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, and to all the amazing thinkers, makers, and doers who have been our guests. We want to honor them by winning this!

If you’re a fan of How to Fix the Internet (and EFF), here’s how you can help:

  1. Go to this link to get to the Anthem Awards website
  2. Scroll down until you see the tile for EFF’s “How to Fix the Internet,” and “celebrate” us with your vote! The site requires a quick, free sign-up, but we hope you’ll feel comfortable helping us out this way.
  3. Share with your friends! Suggested post: I’m a fan of EFF, so I am voting for their podcast, How to Fix the Internet, in the Anthem Awards. Please vote for them too: https://www.eff.org/anthemvote
  4. You can also share our posts on Twitter, Facebook, Mastodon, and Bluesky.

Thanks for your support, and stay tuned for details of the next season of “How to Fix the Internet,” coming in early 2024! 

Josh Richman

How to Secure Your Kid's Android Device

5 days ago

After finding risky software on an Android (Google’s mobile operating system) device marketed for kids, we wanted to put together some tips to help better secure your kid's Android device (and even your own). Despite the dangers that exist, there are many things parents and children can do to at least mitigate harm. There are also safety tools that your child can use at their own discretion.

There's a handful of different tools, settings, and apps that can help better secure your kid’s device, depending on their needs. We've broken them down into four categories: Parental Monitoring, Security, Safety, and Privacy.

Note: If you do not see these settings in your Android device, it may be out of date or a heavily modified Android distribution. This is based on Android 14’s features.

Parental Monitoring

Google has a free app for parental controls called Family Link, which gives you tools to set screen time limits, manage app installs, and more, so there’s no need to install a third-party application. Family Link sometimes comes pre-installed on devices marketed for children, but it is also available in the Google Play store. That matters because some third-party parental safety apps have been caught selling children’s data and have been involved in major data leaks. Also, having a discussion with your child about these controls can provide something that technology can’t: trust and understanding.

Security

There are a few basic security steps you can take on both your own Google account and your child’s device to improve their security.

  • If you control your child's Google account with your own, you should lock down your own account as best as possible. Setting up two-factor authentication is a simple thing you can do to avoid malicious access to your child’s account via yours.
  • Encrypt their device with a passcode (if you have Android 6 or later).

Safety

You can also enable safety measures your child can use if they are traveling around with their device.

  • Safety Check allows a device user to automatically reach out to established emergency contacts if they feel they are in an unsafe situation. If they do not mark themselves “safe” after the safety check duration ends, emergency location sharing with emergency contacts will commence. The safety check reason and duration (up to 24 hours) are set by the device user.
  • Emergency SOS assists in triggering emergency actions like calling 911, sharing your location with your emergency contacts, and recording video.
  • If the "Unknown tracker alerts" setting is enabled, a notification will trigger on the user's device if there is an unknown AirTag moving with them (this feature only works with AirTags currently, but Google says will expand to other trackers in the future). Bluetooth is required to be turned on for this feature to function properly.

Privacy

There are some configurations you can also input to deter tracking of your child’s activities online by ad networks and data brokers.

  • Delete the device’s advertising ID.
  • Install a browser with stronger overall privacy protections, like Firefox, DuckDuckGo, or Brave. While Chrome is the default on Android and has decent security measures, it does not allow web extensions in its mobile browser, preventing the use of helpful extensions like Privacy Badger that block ad tracking.
  • Review the privacy permissions on the device to ensure no apps are accessing important features like the camera, microphone, or location without your knowledge.

For more technically savvy parents, Pi-hole (DNS software) is very useful for automatically blocking ad-related network requests. During our investigation of a kid’s tablet, it blocked most of the shady requests from the malware we found, since they appeared on major ad lists. The added benefit is that you can point many devices at one Pi-hole setup.
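
If you want a quick way to confirm that a device’s DNS queries are actually being filtered, a small check like the sketch below can help. It assumes the device running it uses the Pi-hole as its DNS resolver and that the Pi-hole is in its default blocking mode, where blocked domains resolve to 0.0.0.0; the test domains are just illustrative picks.

    import socket

    # One domain commonly found on ad blocklists, one that should resolve.
    TEST_DOMAINS = ["doubleclick.net", "eff.org"]

    for domain in TEST_DOMAINS:
        try:
            addr = socket.gethostbyname(domain)
        except socket.gaierror:
            print(f"{domain}: lookup failed (some setups block this way)")
            continue
        if addr == "0.0.0.0":
            print(f"{domain}: blocked by the Pi-hole")
        else:
            print(f"{domain}: resolves to {addr} (not blocked)")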

DuckDuckGo’s App Tracking Protection is an alternative to Pi-hole that doesn’t require as much technical overhead. However, since it inspects all network traffic coming from the device, it will ask to be set up as a VPN profile when enabled. Android requires any app that examines traffic in this manner to be set up like a VPN, and it only allows one VPN connection at a time.

It can be a source of stress to set up a new device for your child. However, taking some time to configure privacy and security settings means both of you can discuss technology from a more informed perspective.

Alexis Hancock

Tor University Challenge: First Semester Report Card

5 days 3 hours ago

In August of 2023 EFF announced the Tor University Challenge, a campaign to get more universities around the world to operate Tor relays. The primary goal of this campaign is to strengthen the Tor network by creating more high-bandwidth, reliable Tor nodes. We hope this will also make the Tor network more resilient to censorship, since any country or smaller network that cuts off access to Tor would also be cutting itself off from a large swath of universities, academic knowledge, and collaborations.

If you have already started a relay at your university and want help or a prize, LET US KNOW.

We started the campaign with thirteen institutions:

  • Technical University Berlin (Germany)
  • Boston University (US)
  • University of Cambridge (England)
  • Carnegie Mellon University (US)
  • University College London (England)
  • Georgetown University (US)
  • Johannes Kepler Universität Linz (Austria)
  • Karlstad University (Sweden)
  • KU Leuven (Belgium)
  • University of Michigan (US)
  • University of Minnesota (US)
  • Massachusetts Institute of Technology (US)
  • University of Waterloo (Canada)

People at each of these institutions have been running Tor relays for over a year and are contributing significantly to the Tor network.

Since August, we've spent much of our time discovering and making contact with existing relay operators. People at these institutions were already accomplishing the campaign's goals, but hadn't made it into the launch:

  • University of North Carolina (US)
  • Universidad Nacional Autónoma de México (Mexico)
  • University of the Philippines (Philippines)
  • University of Bremen (Germany)
  • University of Twente (Netherlands)
  • Karlsruhe Institute of Technology (Germany)
  • Universitatea Politehnica Timișoara (Romania)

In addition, two of the institutions in the original launch list have started public relays. University of Michigan used to run only a Snowflake back-end bridge, and now they're running a new exit relay too. Georgetown University used to run only a default obfs4 bridge, and now they're running a non-exit relay as well.

Setting up new relays at educational institutions can be a lengthy process, because it can involve getting buy-in and agreement from many different parts of the organization. Five new institutions are in the middle of this process, and we're hopeful we'll be able to announce them soon. For many of the institutions on our list we were able to reaffirm their commitment to running Tor relays or help provide the impetus needed to make the relay more permanent. In some cases we were also able to provide answers to technical questions or other concerns.

In Europe, we are realizing that relationship-building with each country's National Research and Education Network organization (NREN) is key to sustainability. In the United States each university buys its own internet connection however it likes, but in Europe each university gets its internet from its nation's NREN. That means relays running in the NRENs themselves—while not technically in a university—are even more important, because they represent Tor support at the national level. Our next step is to make better contact with the NRENs in countries that appear especially Tor-supportive: Switzerland, Sweden, the Netherlands, and Greece.

Now that we have fostered connections with many of the existing institutions running relays, we want to get new institutions on board! We need more institutions to step up and start running Tor relays, whether as part of your computer science or cybersecurity department, or in any other department where you can establish a group of people to maintain the relay. But you don’t have to be a CS or engineering student or professor to join us! Political science, international relations, journalism, and any other department can join in on the fun and be part of making a more censorship-resistant internet! We also welcome universities from anywhere in the world. For now, universities in the US and EU make up the bulk of the relays, and we would love to see more universities from the global south join our coalition.

If you need help convincing people at your university, our website has many helpful technical, legal, and policy arguments for why your university should run a Tor relay.
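
Once your relay is running, you can confirm it is visible to the rest of the Tor network using the Tor Project’s public Onionoo API, which reports the status of every relay in the network consensus. A minimal Python sketch follows; the nickname is a placeholder for your own relay’s:

    # Look up a relay in the Tor Project's Onionoo service and print
    # its status. "MyUniversityRelay" is a placeholder nickname; a
    # relay fingerprint also works as the search term.
    import json
    import urllib.parse
    import urllib.request

    ONIONOO = "https://onionoo.torproject.org/details?search="

    def print_relay_status(query: str) -> None:
        url = ONIONOO + urllib.parse.quote(query)
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        for relay in data.get("relays", []):
            print(relay["nickname"],
                  "running" if relay.get("running") else "down",
                  "first seen:", relay.get("first_seen"))

    if __name__ == "__main__":
        print_relay_status("MyUniversityRelay")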

And don’t forget about the prizes! Any university that keeps a Tor relay up for more than a year will receive these fantastic custom-designed challenge coins, one for each member of your Tor team!

The beautiful challenge coins you can get for participating in the Tor University Challenge

If you have already started a relay at your university and want help or a prize, LET US KNOW.

Cooper Quintin

Victory! Montana’s Unprecedented TikTok Ban is Unconstitutional

1 week ago

A federal court on Thursday blocked Montana’s effort to ban TikTok from the state, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. 

Montana passed a law in May that prohibited TikTok from operating anywhere within the state and imposed $10,000 penalties on TikTok or any mobile application store that allowed users to access TikTok. The law was scheduled to take effect in January. EFF opposed enactment of this law, along with ACLU, CDT, and others. 

In issuing a preliminary injunction, the district court rejected the state’s claim that it had a legitimate interest in banning the popular video sharing application because TikTok is owned by a Chinese company. And although Montana has an interest in protecting minors from harmful content and protecting consumers’ privacy, the law’s total ban was not narrowly tailored to address the state’s concerns.

“SB 419 bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” the court wrote. 

EFF and the ACLU filed a friend-of-the-court brief in support of the challenge, brought by TikTok and a group of the app’s users who live in Montana. The brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court blocked the law from going into effect. 

The district court agreed that Montana’s statute violated the First Amendment. Although the court declined to decide whether the law was subject to heightened review under the Constitution (known as strict scrutiny), it ruled that Montana’s ban of TikTok failed to satisfy even the less-searching review known as intermediate scrutiny.

“Ultimately, if Montana’s interest in consumer protection and protecting minors is to be carried out through legislation, the method sought to achieve those ends here was not narrowly tailored,” the court wrote.

The court’s decision this week joins a growing list of cases in which judges have halted state laws that unconstitutionally burden internet users’ First Amendment rights in the name of consumer privacy or child protection.

As EFF has said repeatedly, state lawmakers are right to be concerned about online services collecting massive volumes of their residents’ private data. But lawmakers should address those concerns directly by enacting comprehensive consumer data privacy laws, rather than seeking to ban those services entirely or prevent children from accessing them. Consumer data privacy laws both directly address lawmakers’ concerns and avoid the First Amendment issues that lead courts to invalidate laws like Montana’s.

Aaron Mackey

Latest Draft of UN Cybercrime Treaty Is A Big Step Backward

1 week 1 day ago

A new draft of the controversial United Nations Cybercrime Treaty has only heightened concerns that the treaty will criminalize expression and dissent, create extensive surveillance powers, and facilitate cross-border repression. 

The proposed treaty, originally aimed at combating cybercrime, has morphed into an expansive surveillance treaty, raising the risk of overreach in both national and international investigations. The new draft retains a controversial provision allowing states to compel engineers or employees to undermine security measures, posing a threat to encryption.  

This new draft not only disregards but also deepens our concerns, empowering nations to cast a wider net by accessing data stored by companies abroad, potentially in violation of other nations’ privacy laws. It perilously broadens its scope beyond the cybercrimes specifically defined in the Convention, encompassing a long list of non-cybercrimes. The draft retains the concerning expansion of evidence collection and sharing across borders for any serious crime, including crimes that blatantly violate human rights law. Furthermore, the new version overreaches in allowing the investigation and prosecution of crimes beyond those detailed in the treaty; until now such power was limited to the crimes defined in articles 6-16 of the convention.

We are deeply troubled by the blatant disregard of our input, which moves the text further away from consensus. This isn't just an oversight; it's a significant step in the wrong direction. 

Initiated in 2022, treaty negotiations have been marked by ongoing disagreements between governments on the treaty’s scope and on what role, if any, human rights should play in its design and implementation. The new draft was released Tuesday, Nov. 28; governments will hold closed-door talks December 19-20 in Vienna, in an attempt to reach consensus on what crimes to include in the treaty, and the draft will be considered at the final negotiating session in New York at the end of January 2024, when it’s supposed to be finalized and adopted.  

Deborah Brown, Human Rights Watch’s acting associate director for technology and human rights, said this latest draft

“is primed to facilitate abuses on a global scale, through extensive cross border powers to investigate virtually any imaginable ‘crime’ – like peaceful dissent or expression of sexual orientation – while undermining the treaty’s purpose of addressing genuine cybercrime. Governments should not rush to conclude this treaty without ensuring that it elevates, rather than sacrifices, our fundamental rights.” 

Katitza Rodriguez

U.S. Senator: What Do Our Cars Know? And Who Do They Share that Information With?

1 week 1 day ago

U.S. Senator Ed Markey of Massachusetts has sent a much-needed letter to car manufacturers asking them to answer some surprisingly hard questions: What data do cars collect? Who has the ability to access that data? Private companies can often be black boxes of secrecy that obscure basic facts about the consumer electronics we use. This becomes a massive problem as these devices grow more technologically sophisticated and capable of collecting audio, video, and geolocation data, as well as biometric information. As the letter says,

“As cars increasingly become high-tech computers on wheels, they produce vast amounts of data on drivers, passengers, pedestrians, and other motorists, creating the potential for severe privacy violations. This data could reveal sensitive personal information, including location history and driving behavior, and can help data brokers develop detailed data profiles on users.”

Not only does the letter articulate the privacy harms imposed by vehicles (and trust us, cars are some of the least privacy-oriented devices on the market), it also asks probing questions of companies regarding what data is collected, who has access, particulars about how and for how long data is stored, whether data is sold, and how consumers and the public can go about requesting the deletion of that data.

Also essential are the questions concerning the relationship between car companies and law enforcement. We know, for instance, that self-driving car companies have built relationships with police and have, on a number of occasions, given footage to law enforcement to aid in investigations. Likewise, both Tesla employees and law enforcement have been given, or have gained, access to footage from the company’s electric vehicles.

A push for public transparency by members of Congress is essential and a necessary first step toward some much-needed regulation. Self-driving cars, cars with autonomous modes, or even just cars connected to the internet and equipped with cameras pose a serious threat to privacy, not just to drivers and passengers, but also to other motorists on the road and the pedestrians who are forced to walk past these cars every day. We commend Senator Markey for this letter and hope that the companies respond quickly and honestly so we can have a better sense of what needs to change.

You can read the letter here.

Matthew Guariglia

The Intelligence Committees’ Proposals for a 702 Reauthorization Bill are Beyond Bad

1 week 1 day ago

Both congressional intelligence committees have now released proposals for reauthorizing the government's Section 702 spying powers, largely as-is, and in the face of repeated abuse. 

The House Permanent Select Committee on Intelligence (HPSCI) in the U.S. House of Representatives released a Nov. 16 report calling for reauthorization, which includes an outline of the legislation to do so. According to the report, the bill would renew the mass surveillance authority Section 702; in the process, the report invokes a litany of old boogeymen to justify why the program should continue to collect U.S. persons’ communications when they talk with people abroad.

As a reminder, the program was intended to collect communications of people outside of the United States, but because we live in an increasingly globalized world, the government intercepts and retains a massive trove of communications between Americans and people overseas. Increasingly, it’s this U.S. side of digital conversations that domestic law enforcement agencies trawl through—all without a warrant.

Private communications are the cornerstone of a free society.

It’s an old tactic. People in the intelligence community chafe against any proposals that would cut back on their “collect it all” mentality. This leads them to make a habit of invoking the most current threat to public safety in order to scare the public away from pushing for much-needed reforms, with terrorism serving as the most consistent justification for mass surveillance. In this document, HPSCI mentions that Section 702 could be the key to fighting ISIS, Al-Qaeda, MS-13, and fentanyl trafficking. They hope that one, or all, of these threats will resonate with people enough to make them forget that the government has an obligation to honor the privacy of Americans’ communications and prevent them from being collected and hoarded by spy agencies and law enforcement.

The House Report

While we are still waiting for the official text, this House report proposes that Section 702 authorities be expanded to include “new provisions that make our nation more secure.” For example, the proposal may authorize the use of this unaccountable and out-of-control mass surveillance program as a new way of vetting asylum seekers by, presumably, sifting through their digital communications. According to a newly released Foreign Intelligence Surveillance Court (FISC) opinion, the government has sought some version of this authority for years, was repeatedly rejected, and received court approval for the first time this year. Because the court opinion is so heavily redacted, it is impossible to know the current scope of immigration- and visa-related querying, or what broader proposal the intelligence agencies originally sought. It’s possible the forthcoming proposal seeks to undo even the modest limitations that the FISC imposes on the government.

This new authority might give immigration services the ability to audit entire communication histories before deciding whether an immigrant can enter the country. This is a particularly problematic situation that could cost someone entrance to the United States based on, for instance, their own or a friend’s political opinions—as happened to a Palestinian Harvard student when his social media account was reviewed when coming to the U.S. to start his semester.

The House report’s bill outline also includes a call “to define Electronic Communication Service Provider to include equipment.” A 2023 FISA Court of Review opinion refused the intelligence community’s request for a novel interpretation of whether an entity was “an electronic communication service provider,” but that opinion is so heavily redacted that we don’t know what was so controversial. This crucial definition determines who may be compelled to turn over users’ personal information to the government, so changes would likely have far-reaching impacts.

The Senate Bill

Not wanting to be outdone, this week the Senate Select Committee on Intelligence proposed a bill that would renew the surveillance power for 12 years—until 2035. Congress has previously insisted on sunsets of post-9/11 surveillance authorities every four to six years. These sunsets drive oversight and public discussion, forcing transparency that might not otherwise exist. And over the last two decades, periodic reauthorizations have represented the only times that any statutory limitations were put on FISA and similar authorities. Despite the veil of secrecy around Section 702, intelligence agencies are reliably caught breaking the law every couple of years, so a 12-year extension is simply a non-starter.

The SSCI bill also fails to include a warrant requirement for U.S. person queries of 702 data—something that has been endorsed by dozens of nonprofit organizations and independent oversight bodies like the Privacy and Civil Liberties Oversight Board. A safeguard that everyone outside of the intelligence community considers common sense should be table stakes for any legislation.

Private communications are the cornerstone of a free society. That’s why EFF and a coalition of other civil rights, civil liberties, and racial justice organizations have been fighting to seriously reform Section 702, or otherwise let it expire when it sunsets at the end of 2023. One hopeful alternative has emerged: the Government Surveillance Reform Act (GSRA), a bill that would make some much-needed changes to Section 702 and which has earned our endorsement. Unlike either of the committee proposals, the GSRA would require court approval of government queries for Americans’ communications in Section 702 databases, allow Americans who have suffered injuries from Section 702 surveillance to use the evidentiary provisions FISA sets forth, and strengthen the government’s duty to provide notice when data resulting from Section 702 surveillance is used in criminal prosecutions. These reforms must serve as priorities for Congress as it considers reauthorizing Section 702.

Matthew Guariglia

The Government Shouldn’t Prosecute People With Unreliable “Black Box” Technology

1 week 2 days ago

On Tuesday, EFF urged the Massachusetts Supreme Judicial Court, the highest court in that state, to affirm that a witness who has no knowledge of the proprietary algorithm used in black box technology is not qualified to testify to its reliability. We filed this amicus brief in Commonwealth v. Arrington together with the American Civil Liberties Union, the American Civil Liberties Union of Massachusetts, the National Association of Criminal Defense Lawyers, and the Massachusetts Association of Criminal Defense Lawyers. 

At issue is the iPhone’s “frequent location history” (FLH), a location estimate generated by Apple’s proprietary algorithm that has never been used in Massachusetts courts before. Generally, for information generated by a new technology to be used as evidence in a case, there must be a finding that the technology is sufficiently reliable.  

In this case, the government presented a witness who had only looked at 23 mobile devices, and there was no indication that any of them involved FLH. The witness also stated he had no idea how the FLH algorithm worked, and he had no access to Apple’s proprietary technology. The lower court correctly found that this witness was not qualified to testify on the reliability of FLH, and that the government had failed to demonstrate FLH had met the standard to be used as evidence against the defendant. 

The Massachusetts Supreme Judicial Court should affirm this ruling. Courts serve a “gatekeeper” function by determining the type of evidence that can appear before a jury at trial. Only evidence that is sufficiently reliable to be relevant should be admissible. If the government wants to present information that is derived from new technology, they need to prove that it’s reliable. When they can’t, courts shouldn’t let them use the output of black box tech to prosecute you. 

The use of these tools raises many concerns, including defendants’ constitutional rights to access the evidence against them, as well as the reliability of the underlying technology in the first place. As we’ve repeatedly pointed out before, many new technologies that prosecutors have sought to use have been plagued with serious flaws. These flaws can especially disadvantage members of marginalized communities. Robust standards for technology used in criminal cases are necessary, as those cases can result in decades of imprisonment—or even the death penalty.

EFF continues to fight against governmental use of secret software and opaque technology in criminal cases. We hope that the Supreme Judicial Court will follow other jurisdictions in upholding requirements that favor disclosure and access to information regarding proprietary technology used in the criminal justice system.   

Hannah Zhao

Speaking Freely: Ron Deibert

1 week 3 days ago

Ron Deibert is a Canadian professor of political science, a philosopher, an author, and the founder of the renowned Citizen Lab, situated in the Munk School of Global Affairs at the University of Toronto. He is perhaps best known to readers for his research on targeted surveillance, which won the Citizen Lab a 2015 EFF Award. I had the pleasure of working with Ron early on in my career on another project he co-founded, the OpenNet Initiative, a project that documented internet filtering (blocking) in more than 65 countries, and his mentorship and work has been incredibly influential for me. We sat down for an interview to discuss his views on free expression, its overlaps with privacy, and much more.

York: What does free expression mean to you?

The way that I think about it is from the perspective of my profession, which is as a professor. And at the core of being an academic is the right…the imperative, to speak freely. Free expression is a foundational element of what it is to be an academic, especially when you’re doing the kind of academic research that I do. So that’s the way I think about it. Even though I’ve done a lot of research on threats to free expression online and various sorts of chilling effects that I can talk about…for me personally, it really boils down to this. I recognize it’s a privileged position: I have tenure, I’m a full-time professor at an established university…so I feel that I have an obligation to speak freely. And I don’t take that for granted because there’s so many parts of the world where the type of work that we do, the things that we speak about, just wouldn’t be allowed.

York: Tell me about an early experience that shaped your views on free expression or brought you to the work that you do. 

The recognition that there were ways in which governments—either on their own or with internet service providers—were putting in place filtering mechanisms to prevent access to content. When we first started in the early 2000s there was still this mythology around the internet that it would be a forum for free expression and access to information. I was skeptical. Coming from a security background, with a familiarity with intelligence practices, I thought: this wasn’t going to be easy. There’ll be lots of ways in which governments are going to restrict free speech and access to information. And we started discovering that and systematically mapping it. 

That was one of the first projects at the Citizen Lab: documenting internet censorship. There was one other time, that was probably in the late 2000s where I think you and Helmi Noman...I remember you talking about the ways in which internet censorship has an impact on content online. In other words, what he meant is that if websites are censored, after a while they realize there’s no point in maintaining them because their principal audience is restricted from accessing that information and so they just shut it down. That always stuck in my head. Later, Jon Penney started doing a lot of work on how surveillance affects freedom of expression. And again there, I thought that was an interesting, kind of not so obvious connection between free expression and censorship.

York: You shifted streams a while back from a heavy focus on censorship to surveillance research. How do you view the connection between censorship and surveillance, free expression, and privacy?

They’re all a mix. I see this as a highly contested space. You have contestation occurring from different sectors of society. So governments are obviously trying to manage and control things. And when governments are towards the more authoritarian end of the spectrum they’re obviously trying to limit free expression and access to information and undertake surveillance in order to constrain popular democratic participation and hide what they’re doing. And so now we see that there’s an extraordinary toolkit available to them, most of it coming from the private sector. And then with the private sector you have different motivations, usually driven principally by business considerations. Which can end up – often in unintended ways – chilling free expression. 

The example I think of is, if social media platforms loosen the reins over what is acceptable speech or not and allow much more leeway in terms of the types of content that people can post – including potentially hateful, harmful content – I have seen on the other end of that, speaking to victims of surveillance, that they’re effectively intimidated out of the public sphere. They feel threatened, they don’t want to communicate. And that’s because of perhaps something that you could even give managers of the platforms some credit for, and you could say, well they’re doing this to maximize free speech.  When in fact they’re creating the conditions for harmful speech to proliferate and actually silence people. And of course these are age-old battles. It isn’t anything particular to the internet or social media, it’s about the boundaries around free expression in a liberal, democratic society. 

Where do we draw the lines? How do we regulate that conduct to prevent harmful speech from circulating? It’s a tricky question, for sure. Especially in the context of platforms that are global in scope, that cut across multiple national jurisdictions, and which provide people with the ability to have effectively their own radio broadcast or their own newspaper – that was the original promise of the internet, of course. But now we’re living in it. And the results are not always pretty, I would say. 

York: I had the pleasure of working with you very early on in my career on a project called the OpenNet Initiative and your writings influenced a lot of my work. Can you tell our readers a little bit about that project and why it was important?

That was a phenomenal project in hindsight, actually. It was, like many things, you don’t really know what you’re doing until later on. Many years later you can look back and reflect on the significance of it. And I think that’s the case here. Although we all understood we were doing interesting work and we got some publicity around it. I don’t think we fully appreciated what exactly we were mounting, for better or for worse. My pathway to that was that I set up the Citizen Lab in 2001 and one of the first projects was building out some tests about internet censorship in China and Saudi Arabia. That was led by Nart Villeneuve. He had developed a technique to log onto proxy computers inside those countries and then just do a kind of manual comparison. Then we read that Jonathan Zittrain and Ben Edelman were doing something, except Ben was doing it with dialup. He was doing it with dialup remotely and then making these tests. So we got together and decided we should collaborate and put in a project proposal to MacArthur Foundation and Open Society Foundations. And that’s how the project got rolling. Of course Rafal [Rohozinski] was also involved then, he was at Cambridge University. 

And we just started building it out going down the roads that made logical sense to elaborate on the research. So if you think about Ben and Nart doing slightly different things, well the next sequence in that, if you wanted to improve upon it, is, okay well let’s build software that automates a lot of this. Build a database on the back end where we had a list of all the websites. At that time we couldn’t think of any other way to do it than to have people inside the country run these tests. I was actually thinking about this the other day, you were on Twitter and you and I maybe had an exchange about this at the time, about well, we need volunteers to run these tests, should we put out a call on Twitter for it? And we were debating the wisdom of that. It’s the kind of thing we would never do now. But back then we were like, “yeah, maybe we should.” There were obviously so many problems with the software and a lot of growing pains around how we actually implement this. We didn’t really understand a lot of the ethical considerations until we were almost done. And OONI (Open Observatory of Network Interference) came along and kind of took it to the next level and actually implemented some of the things that were being bandied about early on, including by Jonathan Zittrain [from here on referred to as JZ].

So JZ had this idea of—well actually we both had the same idea separately—and didn’t realize it until we got together. Which was kind of a SETI@home for internet censorship. What OONI is now, if you go back, you can even see media interviews with both of us talking about something similar. We launched at the lab at one point something called Internet Censorship Explorer, and we had automated connections to proxy computers. And so people could go to a website and make a request, I want to test a website in Saudi Arabia, in Bahrain, or whatever. Of course the proxies would go up and down and there were all sorts of methodological issues with relying on data from that. There are ethical considerations that we would take into account now that we didn’t then. But that was like a primitive version of OONI, and that was around 2003. So later on OONI comes along and it just so happened that we were winding the project down for various reasons, and they took off at that time and we just said, this is fantastic, let’s just collaborate with them.

One more important thing: there was an early decision. We were meeting at Berkman, it was JZ, John Palfrey, myself, Rafal Rohozinski, Nart, and Ben Edelman. We’re all in a room and I was like, “we should be doing tests for internet censorship but also surveillance.” And I can remember, with the Harvard colleagues there was a lot of concern about that… about potentially getting into national security stuff. And I was like, “Well, what’s wrong with that? I’m all for that.” So that’s where we carved off a separate project at the lab called the Information Warfare Monitor. And then we got into the targeted espionage work through that. In the end we had a ten year run. 

York: In your book Reset, you say there’s “no turning back” from social media. Despite all of the harms, you’ve taken the clear view that social media still has positive uses. Your book came out before Elon Musk took over Twitter and before the ensuing growth of federated social networks. Do you see this new set of spaces as being any different from what we had before?

Yeah, 100%. They’re the sort of thing I spoke about at the end where I said we need to experiment with platforms or ways of engaging with each other online that aren’t mediated through the business model of personal data surveillance or surveillance capitalism. Of course, here I am speaking to you, someone who’s been talking about this. I also think of Ethan Zuckerman, who’s also been talking about this for ages. So this is nothing original to me, I’m just saying, “Hey, we don’t need to do it this way.” 

Obviously, there are other models. And they may be a bit painful at first, they may have growing pains around getting at that type of, the level of engagement you need for it to cascade into something. That’s the trick, I think. In spite of the toxic mess of Twitter, which by the way pretty much aligns with what I wrote in Reset, the concern around, you have someone coming into this platform and then actually loosening the reins around whatever constraints existed in a desperate attempt to accelerate engagement led to a whole toxic nightmare. People fled the platform and experimented with others. Some of which are not based around surveillance capitalism. The challenge is, of course, to get that network effect. To get enough people to make it attractive to other people and then more people come onboard. 

I think that’s symptomatic of wider social problems as a whole, which really boil down to capitalism, at its core. And we’re kind of at a point where the limits of capitalism have been reached and the downsides are apparent to everybody, whether it’s ecological or social issues. We don’t really know how to get out of it. Like how would we live in something other than this? We can think about it hypothetically, but practically speaking, how do we manage our lives in a way that doesn’t depend on—you know, you can think about this with social media or you can think about it with respect to the food supply. What would it look like to actually live here in Toronto without importing avocados? How would I do that? How would we do that in my neighborhood? How would we do that in Toronto? That’s a similar kind of challenge we face around social media. How could we do this without there being this relentless data vacuum cleaning operation where we’re treated like livestock for their data farms? Which is what we are. How do we get out of that? 

York: We’re seeing a lot of impressive activism from the current youth generation. Do you see any positive online spaces for activism given the current landscape of platforms and the ubiquity of surveillance? Are there ways young people can participate in digital civil disobedience that won’t disenfranchise them?

I think it’s a lot harder to do online civil disobedience of the sort that we saw—and that I experienced—in the late 1990s and early 2000s. I think of Electronic Disturbance Theatre and the Zapatistas. There was a lot of experimentation with website defacement and DDoS attacks as political expression. There were some interesting debates going on around Cult of the Dead Cow and Oxblood Ruffin and those sorts of people. I think today, the fine-grained surveillance net that is cast over people’s lives right down to the biological layer is so intense that it makes it difficult to do things without there being immediate consequences or at least observation of what you’re doing. I think it induces more risk-averse behavior and that’s problematic for sure.

There are many experiments happening, way more than I’m aware of. But I think it’s more difficult now to do things that challenge the established system when there’s this intense surveillance net cast around everything. 

York: What do you currently see as the biggest threat to the free and open internet?

Two things. One is the area that we do a lot of work in, which is targeted espionage. To encapsulate where we’re at right now, the most advanced mercenary surveillance firms are providing services to the most notorious abusers of human rights. The worst despots and sociopaths in the world, thanks to these companies, now have the ability to take over any device anywhere in the world without any visible indication that anything is wrong on the part of the victim. So one minute your phone is fine, and the next it’s not. And it’s streaming data a continent away to some tyrant. That’s really concerning. For just about everything to do with rights and freedom and any type of rule-based political order. And if you look at, we’ve remarkably, like as we’re speaking, we’ve delivered two responsible disclosures to Apple just based on capturing these exploits that are used by these mercenary surveillance companies. That’s great, but there’s a time period where those things are not disclosed, and typically they run about an average of 100 days. There’s a period of time where everyone in the world is vulnerable to this type of malfeasance. And what we are seeing, of course, is all sorts of, an epidemic of harms against vulnerable, marginalized communities, against political opposition, against lawyers. All the pillars of liberal, democratic society are being eroded because of this. So to me that’s the most acute threat right now. 

The other one is around AI-enabled disinformation. Easy-to-use platforms enabled a generation of coordinated, inauthentic campaigns that harass, intimidate, and discredit people – these campaigns are now industrialized, they’re becoming commodified and, again, available to anyone; any sociopath in the world now has this at their fingertips. It’s extraordinarily destructive on so many levels.

Those two are the biggest concerns on my plate right now. 

York: You’ve been at the forefront of examining how tech actors use new technology against people—what are your ideas on how people can use new technology for good?

I’ve always thought that there’s a line running from the original idea of “hacktivism” that continues to today that’s about having a particular mindset with respect to technology. Where, if one is approaching the subject through an experimental lens, trying to think about creating technical systems that help support free expression, access to information, basic human rights.  That’s something very positive to me, I don’t think it’s gone away, you can see it in the applications that people have developed and are widely used. You can see it in the federated social media platforms that we spoke about before. So it’s this continuous struggle to adapt to a new risk environment by creating something and experimenting. 

I think that’s something we need to cultivate more among young people, how to do this ethically. Unfortunately, the term “hacktivism” has been distorted. It’s become a pejorative term to mean somebody who is doing something criminal in nature. I define it in Reset, and in other books, as something that I can trace back to, at least for me, I see it as part of this American pragmatist position, a la John Dewey. We need to craft together something that supports the version of society that we want to lean towards, the kind of technical artifact-creating way of approaching the world. We don’t do that at the Lab any longer, but I think it’s something important to encourage.

York: Tell me about a personal experience you’ve had with censorship or with utilizing your freedom of expression for good.

We have been sued and threatened with lawsuits several times for our research. And typically these are corporations that are trying to silence us through strategic litigation. And even if they don’t have grounds to stand on, this is a very powerful weapon for those actors to keep inconvenient information from coming forward. For example, Netsweeper, one morning I woke up and had in my email inbox a letter from their lawyer saying they were suing me personally for three million dollars. I can remember the moment I looked at that and just thought, “Wow, what’s next?” And so obviously I consulted with the University of Toronto’s legal counsel, and the back and forth between the lawyers went on for several months. And during that time we weren’t allowed to speak publicly on the case. We couldn’t speak publicly about Netsweeper. Then just at the very end they withdrew the lawsuit. Fortunately, I’d instructed the team to do a kind of major capstone report on Netsweeper – find every Netsweeper device we can in the world and let’s do a big report. And that ended up being something called Planet Netsweeper. We couldn’t speak about that at the time, but I was teeing that up in the hope that we’d be able to publish. And fortunately we were able to. But had that gone differently, had they successfully sued us into submission, it would have killed my research and my career. And that’s not the first time that’s happened. So I really worry about the legal environment for doing this kind of adversarial research.

York: Who’s your free speech hero? 

There’s too many, it’s hard to pick one…I’ll say Loujain AlHathloul. Her bravery in the face of formidable opposition and state sanctions is incredibly inspiring. She became a face of a movement that embodies basic equity and rights issues: lifting the ban on women driving in Saudi Arabia. And she has paid, and continues to pay, a huge price for that activism. She is a living illustration of speaking truth to power. She has refused to submit and remain silent in the face of ongoing harassment, imprisonment and torture. She’s a real hero of free expression. She should be given an award – like an EFF Award! 

Also, Cory Doctorow. I marvel at how he’s able to just churn this stuff out and always has a consistent view of things.

Jillian C. York

Let Them Know It’s Time to Power Up

1 week 4 days ago

Power Up Your Donation Week is here! Right now, your contribution will have double the impact on digital privacy, security, and free speech rights for everyone.

Power Up!

Donate to EFF for an instant 2X match

Thanks to a fund created by a group of dedicated EFF supporters, now through December 5th every online donation gets an instant match, up to a total of $304,200! This means every dollar you give becomes two dollars toward fighting surveillance, defending encryption, promoting open access to information, and much more. EFF makes every cent count.

Free the Web!

Where else can you get health information, talk about your favorite anime, tell a friend you miss them, and learn how to fix that weird issue on your phone all at once? It’s hard to imagine a place where you can be as creative and connected as you are on the internet.

But these gifts also mean that corporations and governments fight hard to control what technology users say and do on the web. Together, we're defending civil liberties and human rights online. Thank you for helping EFF attorneys, activists, policy analysts, and technologists free the web for everyone. And even if you're not able to donate today, consider helping us spread the word to others. Here’s some sample language that you can share:

Donate to EFF this week and you’ll instantly double your impact on digital privacy, security, and free speech rights for everyone. https://eff.org/power-up


Thank you for allowing EFF to stand by you and your rights as technology touches ever more aspects of the world. It's a tough place to navigate, and we're proud to give our all for your digital freedom. I hope you'll consider making a year-end contribution to help us make the future a little brighter.

Aaron Jue

Digital Rights Updates with EFFector 35.15

1 week 5 days ago

With the holiday season upon us, it can be difficult to keep track of the latest digital rights news. Lucky for you, EFF's EFFector newsletter has you covered with the latest happenings, from a breakdown of our latest Privacy Badger update and an investigation into Android TV set-top boxes infected with malware, to a report on how to better address online harms by focusing on user privacy.

EFFector 35.15 is out now—you can read the full newsletter here, and subscribe to get the next issue in your inbox automatically! You can also listen to the audio version below:

LISTEN ON YouTube

EFFECTOR 35.15 - Privacy First

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero