Leaving Twitter's Walled Garden

2 weeks 4 days ago

This post is part of a series on Mastodon and the fediverse. We also have a post on privacy and security on Mastodon, and why the fediverse will be great—if we don't screw it up, and more are on the way. You can follow EFF on Mastodon here.

A wave of people have announced that they're leaving Twitter to check out something called Mastodon, and that leaves many wondering, what is Mastodon anyway? More importantly, what is the “fediverse” and what is “ActivityPub”? This explainer will help you make heads or tails of this new approach to communications and social media.

What is the Fediverse, Federation and Mastodon?

Federation is a broad term for a larger whole made up of smaller groups that retain some measure of autonomy within it. In internet terms, the most well-known federated system is our old friend, email.

No matter how much you love or hate email itself, it is a working federated system that’s been around for over half a century. It doesn’t matter what email server or email client you use: we all use email, the experience is more or less the same for everyone, and that’s a good thing. The Web is also federated – any web site can link to, embed, or refer to stuff on any other site, and in general it doesn’t matter what browser you use. The internet started out federated, and it largely continues to be.

The World Wide Web Consortium (W3C, the standards organization behind many core web standards, including HTML) created a protocol in 2018 called ActivityPub that enables federated social networking systems. The systems built on top of ActivityPub are collectively referred to as the fediverse.

One of the most famous services within the fediverse is Mastodon, a Twitter-like social network and communications system to which many users are switching following the recent turmoil at Twitter. At a very basic level, Mastodon is a web server (or app) that acts as a social network. Just like a service such as Twitter or TikTok, you use it by visiting a website or using an app on a smartphone, and you can post text, images, and videos that can be seen by your followers. You can also follow others and see their posts in your own timeline. In this way, Mastodon is very similar to services you already know and probably use every day. In fact, it’s very much like Twitter itself, which is why people unhappy with Twitter are considering Mastodon as an alternative and why we’re writing this essay. What makes Mastodon interesting, though, is that the server (or “instance”—the terms are interchangeable) that you might use isn't the only server running Mastodon in the world. 

Over the Garden Wall

In the early days of the internet, there were a number of self-contained services. America Online (AOL), Prodigy, and others were available to anyone who could access them before the internet proper was open to everyone. Those old systems had many of the same tools and services that we use today – instant messaging, email, shopping, and so on. The problem was that if you wanted to send a message to someone on another service, there weren’t good ways to do that. For example, AOL mail did not always allow someone to send an email to a person using Prodigy. The term “walled garden” was coined to describe these services in opposition to the internet itself.

The walls in these gardens eventually opened up, usually by using some underlying open, interoperable, standard protocol. For example, SMTP is the protocol we still use to send email from one system to another. HTTP itself is an open, interoperable way to get a web page as opposed to each service having a different way to construct and display a page. Now in that tradition, some are hoping ActivityPub will do the same for walled gardens in social media.

An Ecosystem Built on ActivityPub

Under the hood, Mastodon is just one of a whole host of different services that communicate using ActivityPub. From one Mastodon server, a person can follow and be followed by anyone else on any other Mastodon server anywhere in the world–just like you can send an email from one server to anyone else on any other server in the world. ActivityPub can convey many types of content: not just text, pictures, and videos, but also concepts such as "likes," replies, and polls.
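
To make that a bit more concrete, here is a rough sketch of the kind of ActivityStreams 2.0 object that ActivityPub servers exchange, shown as a Python dict. The actor and URLs below are hypothetical placeholders, and real servers add more fields (ids, timestamps, signatures) than this sketch shows.

```python
# A hedged sketch of an ActivityPub activity (ActivityStreams 2.0 JSON),
# written as a Python dict. All names and URLs are hypothetical.
create_note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",                                  # the activity: a post was created
    "actor": "https://example.social/users/alice",     # who did it
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",                                # a short text post
        "content": "Hello, fediverse!",
        "attachment": [{                               # the same envelope carries media
            "type": "Image",
            "url": "https://example.social/media/cat.jpg",
        }],
    },
}
# A "Like," a reply, or a poll vote is just another activity type that
# references this Note, which is how those concepts travel between servers.
```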

In fact, ActivityPub is so flexible that it forms the backbone for a number of diverse services in addition to Mastodon: PeerTube, a social video hosting site, like YouTube; PixelFed, which focuses on images; and Bookwyrm, a book cataloging and review site, similar to Goodreads. There is even an open source food delivery system that uses ActivityPub. The power of ActivityPub means that you can follow an account on any of those services even from one of the others. 

Each of these different services is a piece of open source software, so starting up a new server with the right software can immediately let you interact with all of the other servers out there. If you don't have the knowhow or desire to maintain your own server, there are tons of public servers out there where you can create an account for yourself and interact with any user on any other server running any of those services. There are also many hosting sites that will do the heavy lifting of running a server on your behalf and using your domain name so that you can have your own ActivityPub services under your control.

This idea that people can interact with each other across servers is often the hardest piece for people to wrap their heads around, but, as we mentioned earlier, it works very similarly to email. Anyone who's ever seen an email address before knows that they have two parts: the username before the @ symbol, and the domain name after it. That domain name tells you what server that particular account lives on. Some people have email accounts with their university or employer, some have them with a public service like Google's Gmail, Microsoft's Office365, or Protonmail, but no matter what domain comes after the @ sign, you can always send a message to your mom, your friend, or your bank. That's because all of those servers speak the same protocol (called the Simple Mail Transfer Protocol) under the hood.

There is no limit to what services can connect to this growing network. Facebook and Twitter themselves could join the fediverse by implementing ActivityPub and thus sending their content out to the universe of federated ActivityPub servers and users.

Roadmap to the Fediverse

An account in the fediverse, such as on Mastodon, resembles email in that instead of everyone being on just one server (twitter.com), there are many servers. Instead of someone’s handle being simply @alice, they might be @alice@example.com. There are already thousands of different sites that offer free Mastodon accounts, and just like email or web servers, you could run one yourself. There is even a web site to help you pick which Mastodon instance you might like.  
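
To see how a handle finds its way to an account, note that most fediverse servers expose a standard WebFinger endpoint (RFC 7033): given user@domain, the domain’s server says where that account’s ActivityPub profile lives, much as the domain in an email address tells mail servers where to deliver. Below is a minimal sketch; the handle is hypothetical and error handling is omitted.

```python
# Minimal sketch: resolve a fediverse handle to its ActivityPub actor URL
# using WebFinger (RFC 7033). The handle below is hypothetical.
import json
from urllib.parse import quote
from urllib.request import urlopen

def resolve_handle(handle: str) -> str:
    """Map a handle like '@alice@example.com' to its ActivityPub actor URL."""
    user, domain = handle.lstrip("@").split("@", 1)
    resource = quote(f"acct:{user}@{domain}")
    data = json.load(urlopen(
        f"https://{domain}/.well-known/webfinger?resource={resource}"))
    for link in data.get("links", []):
        if link.get("rel") == "self":   # the machine-readable actor document
            return link["href"]
    raise LookupError("no ActivityPub actor link found")

# print(resolve_handle("@alice@example.com"))  # hypothetical account
```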

This leads to the single biggest question people often ask when they approach ActivityPub, especially those moving to Mastodon from Twitter: "WHICH SERVER??" 

Fortunately, there are two good reasons to hold off on panicking. First, anyone on (almost) any server can follow and be followed by anyone on any other server, so you won’t be cut off from your friends and family if they end up on a different server. Second, the fediverse has mechanisms for moving accounts between servers, including ways to export and import your posts, follow lists, and block lists, as well as to redirect your profile from one server to another. So, if for whatever reason you find you don't like the first server you land on, you can always move later.

That said, there are reasons to pick one server over another. The biggest one is moderation. Fediverse services are good at giving individuals the ability to block other accounts, or even entire servers, that they don't want to see in their timeline. They are also good at letting servers block accounts or entire other servers that don’t comport with their own moderation policies. One could, for example, make an instance that only allows incoming posts that contain the word “cat” and permanently blocks anyone who uses the word “dog.” Thus, finding a server whose moderation policies you agree with may be a good idea.

Another reason to pick one server over another is if that server is organized around a common community, maybe through a shared interest or language, and thus will have more conversations pertinent to that community. If you were involved in “Law Twitter” or “Infosec Twitter” or “Historian Twitter” then that might be a reason to pick one server or another. There are also special interest servers, like one made for present and past employees of Twitter.

What’s Different in the Fediverse?

While many people are moving from Twitter to Mastodon, let’s be clear: Mastodon is not the whole fediverse and the fediverse is not simply a Twitter replacement.

The fediverse is an example of how we can have a paradigm shift in how we do social media. It is still undergoing some growing pains—like a small town that is now seeing boom times. New people arriving in large numbers can change the trajectory of the social part of the network, and come with their own points of friction.

Twitter as we know it today has developed over fifteen years and has seen many changes and emergent features–both hashtags and @-replies were invented by its users, for example. It was also built by a dedicated team of professionals. Today’s fediverse services are a labor of love from a group of software communities.

Fediverse software is not as robust as Twitter software yet; as one might expect from a decentralized system, there are also several common clients with their own features and issues. The support for multiple accounts is still somewhat spotty, for example. Many small features a Twitter user is accustomed to may not be built yet. On the other hand, these services have many features Twitter users have been asking for, such as higher character limits, content warnings, and an option to automatically delete old posts.

The fediverse has no central authority, and that means some features, like Twitter’s original blue-check verification, simply don’t exist. The closest thing to getting “verified” is proving to your instance that you control an external webpage or resource by placing a special hyperlink back to your profile on that page.
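
Concretely, that special hyperlink is a rel="me" link: you list a webpage on your profile, and your instance checks that the page links back to your profile with rel="me". The sketch below shows the core of that check using only the Python standard library; the URLs are hypothetical, and real servers handle more cases (redirects, relative URLs, error handling).

```python
# A minimal sketch of rel="me" link verification, assuming hypothetical URLs.
from html.parser import HTMLParser
from urllib.request import urlopen

class RelMeParser(HTMLParser):
    """Collect href values from <a> and <link> elements marked rel="me"."""
    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        attrs = dict(attrs)
        rel_values = (attrs.get("rel") or "").lower().split()
        if "me" in rel_values and attrs.get("href"):
            self.rel_me_links.append(attrs["href"])

def page_links_back(page_url: str, profile_url: str) -> bool:
    """True if page_url carries a rel="me" link pointing at profile_url."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    parser = RelMeParser()
    parser.feed(html)
    return profile_url in parser.rel_me_links

# Example (hypothetical): does the site listed on a profile link back to it?
# page_links_back("https://example.com/about", "https://mastodon.example/@alice")
```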

Since the fediverse is decentralized, there is no single authority to moderate posts, or remove accounts – that’s left to the users and servers themselves. Mastodon users typically mark posts with content warnings, not only for genuine sensitive content (e.g. content warning about war news), but also to minimize a post’s footprint on your timeline. In conjunction with hashtags, they are also used for categorizing and curating posts that are not sensitive (e.g. content warning: “My cat #pets”).
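
Mechanically, a content warning is just another field on the post. Mastodon’s REST API exposes it as spoiler_text on the status-posting endpoint, which is why clients can fold and unfold posts consistently. Here is a minimal sketch; the instance URL and access token are hypothetical placeholders.

```python
# Minimal sketch: post a status with a content warning via Mastodon's API
# (POST /api/v1/statuses). The instance and access token are hypothetical.
import json
from urllib.request import Request, urlopen

INSTANCE = "https://mastodon.example"   # hypothetical instance
TOKEN = "YOUR_ACCESS_TOKEN"             # an OAuth token for your account

payload = json.dumps({
    "status": "Here is a photo of my cat.",
    "spoiler_text": "My cat #pets",     # shown as the content warning / fold
    "visibility": "public",
}).encode()

request = Request(
    f"{INSTANCE}/api/v1/statuses",
    data=payload,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
)
# urlopen(request)  # uncomment to actually publish the post
```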

Even with all these differences, one could easily turn the question on its head: what should Mastodon users know about switching to Twitter? There will always be evolving tradeoffs between incumbent social media platforms and their federated alternatives. This new spark of competition between platforms and federations holds potential for new innovations and improvements to our autonomy online.

Ross Schulman

KOSA Would Let the Government Control What Young People See Online

2 weeks 4 days ago

The latest version of the Kids Online Safety Act (KOSA) is focused on removing online information that people need to see—people of all ages. Letting governments—state or federal—decide what information anyone needs to see is a dangerous endeavor. On top of that, this bill, supposedly designed to protect our privacy, actually requires tech companies to collect more data on internet users than they already do. 

EFF has long supported comprehensive privacy protections, but the details matter. KOSA gets the details consistently wrong, and that’s why we’re calling on members of Congress to oppose this bill. 

Although KOSA has been revamped since lawmakers introduced it in February, and improved slightly, it’s still a dangerous bill that presents censorship and surveillance as a solution to some legitimate, and some not-so-legitimate, issues facing young internet users today. 

The Good

KOSA is a sweeping update to the Children’s Online Privacy Protection Act, also known as COPPA. COPPA is the reason that many websites and platforms ask you to confirm your age, and why many services require their users to be older than 13—because the laws protecting data privacy are much stricter for children than they are for adults. Legislators have been hoping to expand COPPA for years, and there have been good proposals to do so. KOSA, for its part, includes some good ideas: more people should be protected by privacy laws, and the bill expands COPPA’s protections to include minors under 16. That would do a world of good, in theory: the more people we can protect under COPPA, the better. But why stop with protecting the privacy of minors under 16? EFF has long supported comprehensive data privacy legislation for all users.

Another good provision in KOSA would compel sites to allow minor users to delete their account and their personal data, and restrict the sharing of their geolocation data, as well as provide notice if they are tracking it. Again, EFF thinks all users—regardless of their age—should have these protections, and expanding them incrementally is better than the status quo.

The Bad

But KOSA’s chief focus is not to protect young people’s privacy. The bill’s main aim is to censor a broad swath of speech in response to concerns that young people are spending too much time on social media, and too often encountering harmful content. KOSA requires sites to “prevent and mitigate mental health disorders,” including the promotion or exacerbation of “self-harm, suicide, eating disorders, and substance use disorders.” Make no mistake: this is a requirement that platforms censor content.

This sweeping set of content restrictions wouldn’t just apply to Facebook or Instagram. Platforms covered by KOSA include “any online platform that connects to the internet and that is used, or is reasonably likely to be used, by a minor.” As we said before, this would likely encompass everything from Apple’s iMessage and Signal to web browsers, email applications and VPN software, as well as platforms like  Reddit, Facebook, and TikTok—platforms with wildly different user bases and uses, and with hugely varying abilities, and expectations, to monitor content. 

A huge number of online services would thus be forced to make a choice: overfilter to ensure no one encounters content that could be construed as ambiguously harmful, or raise the age limit for users to 17. Many platforms may even do both. 

Let’s be clear about the dangerous consequences of KOSA’s censorship. Under its vague standard, both adults and children will be blocked from accessing medical and health information online. This is because it will be next to impossible for a website to make case-by-case decisions about which content promotes self-harm or other disorders and which provides necessary health information and advice to those suffering from them. This will disparately impact children who lack the familial, social, financial, or other means to obtain health information elsewhere. (Research has shown that a large majority of young people have used the internet for health-related research.)

Another example: KOSA also requires these services to ensure that young people do not see content that exacerbates a substance use disorder. On the face of it, that might seem fairly simple: just delete content that talks about drugs, or hide it from young people. But how do you find and label such content? Put simply: not all content that talks about drugs exacerbates their use.

There is no realistic way to find and filter only that content without also removing a huge swath of content that is beneficial. For just one example, social media posts describing how to use naloxone, a medication that can reverse an opioid overdose, could be viewed either as promoting self-harm (on the theory that reducing the danger of a fatal overdose makes drug use less risky) or as providing necessary health information. But KOSA’s vague standard means that a website owner is in a better position legally if they remove that information, which avoids a potential claim later that the information is harmful. That will reduce the availability of important and potentially life-saving information online. KOSA pushes website owners toward government-approved censorship.

The Ugly

To ensure that users are the correct age, KOSA compels vast data collection efforts that perversely result in even greater potential privacy invasions. 

KOSA would authorize a federal study on creating a device- or operating-system-level age verification system, “including the need for potential hardware and software changes.” The end result would likely be an elaborate age-verification system, run by a third party, that maintains an enormous database of all internet users’ data.

Many of the risks of such a program are obvious. It would require every user—including children—to hand private data over to a third party simply to use a website, if that user ever wants to see beyond the government’s “parental” controls.

Moreover, the bill lets Congress decide what’s appropriate for children to view online. This verification scheme would make it much harder for actual parents to make individual choices for their own children. Because it’s so hard to differentiate between minors having discussions about many of these topics in a way that encourages them, as opposed to a way that discourages them, the safest course of action for services under this bill is to block all discussion and viewing of these topics by younger children and teenagers. If KOSA passes, instead of allowing parents to make the decision about what young people will see online, Congress will do it for them. 

A recent study on attitudes toward age verification showed that most parents “are willing to make an exception or allow their child to bypass the age requirement altogether, but then require direct oversight of the account or discussions about how to use the app safely.” Many also fudge the numbers a bit, to ensure that websites don’t have the specific birthdays of their children. With the hard-wired, national age verification system imagined by KOSA, it will be much harder, if not impossible, for parents to decide for themselves what sites and content a young person can encounter. Instead, the algorithm will do it for them. 

KOSA also fails to recognize the reality that some parents do not always have their children’s best interests in mind, or are unable to make appropriate decisions for them. Those children suffer under KOSA’s paternal regime, which requires services to set parental controls to their highest levels for those under thirteen.

KOSA is a Poor Substitute for Real Privacy Online

KOSA’s attempt to improve privacy and safety will in fact have negative impacts on both. Instead of using super-powered age-verification to determine who gets the most privacy, and then using that same determination to restrict access to huge amounts of content, Congress should focus on creating strict privacy safeguards for everyone. Real privacy protections that prohibit data collection without opt-in consent address the concerns about children’s privacy while rendering age-verification unnecessary. Congress should get serious about protecting privacy and pass legislation that creates a strong, comprehensive privacy floor with robust enforcement tools. 

Jason Kelley

EFF's Atlas of Surveillance Database Now Documents 10,000+ Police Tech Programs

2 weeks 5 days ago

This week, EFF's Atlas of Surveillance project hit a bittersweet milestone.

With this project, we are creating a searchable and mappable repository of which law enforcement agencies in the U.S. use surveillance technologies such as body-worn cameras, drones, automated license plate readers, and face recognition. It's one of the most ambitious projects we've ever attempted. 

Working with journalism students at the University of Nevada, Reno (UNR), our initial semester-long pilot in 2019 resulted in 250 data points, just from the counties along the U.S. border with Mexico. When we launched the first nationwide site in late summer 2020, we had reached just more than 5,000 data points.

The Atlas of Surveillance has now hit 10,000 data points. It contains at least partial data on approximately 5,500 law enforcement agencies in all 50 states, as well as most territories and districts.

This growth is a testament to the power of crowdsourcing: UNR Reynolds School of Journalism students and other volunteers have completed more than 2,000 micro-research tasks through our Report Back tool, which automatically generates assignments to look up whether a particular agency is using a particular technology. We've also worked with students and volunteers to capture and process new datasets and file hundreds of public records requests.

However, this milestone sadly also reflects the massive growth of surveillance adoption by police agencies. High-tech spying is no longer limited to well-resourced urban areas; even the smallest hamlet’s police department might be deploying powerful technology that gathers data on its residents, regardless of whether those residents are connected to a criminal case. We've seen the number of partnerships between police and the home surveillance company Ring grow from 1,300 to more than 2,000. In the two years since we first published a complementary report on real-time crime centers — essentially police tech hubs, filled with wall-to-wall camera monitors and computers jacked into surveillance datasets — the number of such centers in the U.S. has grown from 80 to 100.

All this might have gone unnoticed had the Atlas of Surveillance project not been keeping track.

Our project began with two main goals.

The first was transparency. For years, national journalists and researchers struggled to get a grip on how certain surveillance technologies were spreading across the country, and we'd often field calls seeking help. Our best methods for gathering this information were to send out public records requests en masse or to simply "Google it." Similarly, we'd often get calls from local reporters, activists, and policymakers who were trying to understand all the different technologies used by their local police and sheriffs. By building the Atlas of Surveillance, we provided them with a resource that could become the first stop on any quest to learn more about police technology.

Our second goal for the Atlas of Surveillance was engagement. We didn't just want to build this internally: We wanted to involve a broader community so that more people could dig in and learn about the techniques and challenges for researching surveillance. This was largely made possible by partnering with the Reynolds School at UNR, where we have taught students at all levels how to do this research, from simple search-engine assignments to full-fledged FOIA requests to data scraping.

On both counts, the project has been successful. Countless news articles have been based on or cited EFF's project, such as local reporting on drones in North Texas, a statewide analysis in New Hampshire, and an investigation into police surveillance of protesters in Charlotte. We've also seen the Atlas used for a large amount of scholarly research. Among our favorites are an analysis of the Atlas in the journal Social Problems and research from the University of California, Berkeley on big data policing’s impact on racial inequality in suburbs. The Atlas is also used in many schools and in the Freedom of the Press Foundation's Journalism School Digital Security curriculum.

We've had more than a dozen UNR interns join us to do even deeper dives into the data, helping us to partner with groups like Data 4 Black Lives and to publish a report on campus police surveillance. In addition to our partnership with UNR, we've also led research sessions with students and volunteers across the country, including the University of Washington, Harvard College, Arizona State University, and Kennesaw State University, as well as with audiences at events like Wikiconference North America and the Aaron Swartz Day International Hackathon.

It's amazing to think back on the hundreds and hundreds of people who have donated even a little of their time to learn about surveillance and contribute research to the project.

If it's been a while since you last checked your hometown in the Atlas, we recommend taking a moment to explore and to share it with your community. And stay tuned in 2023 as we continue to add new features and build the knowledge needed to hold police accountable.

Dave Maass

Is Mastodon Private and Secure? Let’s Take a Look

2 weeks 5 days ago

This post is part of a series on Mastodon and the fediverse. We also have a post on what the fediverse is, and why the fediverse will be great—if we don't screw it up, and more are on the way. You can follow EFF on Mastodon here.

With so many users migrating to Mastodon as their micro-blogging service of choice, a lot of questions are being raised about the privacy and security of the platform. Though in no way comprehensive, we have a few thoughts we’d like to share on the topic.

Essentially, Mastodon is about publishing your voice to your followers and allowing others to discover you and your posts. For basic security, instances will employ transport-layer encryption, keeping your connection to the server you’ve chosen private. This will keep your communications safe from local eavesdroppers on the same WiFi connection as you, but it does not protect your communications, including your direct messages, from the server or instance you’ve chosen—or, if you’re messaging someone from a different instance, the server they’ve chosen. This includes the moderators and administrators of those instances, as well. Just like Twitter or Instagram, your posts and direct messages are accessible by those running the services. But unlike Twitter or Instagram, you have a choice in which server or instance you trust with your communications. Also unlike the centralized social networks, the Mastodon software is relatively open about this fact.

Some have suggested that direct messages on Mastodon should be treated more like a courtesy to other users instead of a private message: a way to filter out content from their feeds that isn’t relevant to them, rather than a private conversation. But users of the feature may not understand that intent. We feel that the intended usage of the feature will not determine people’s expectation of privacy while using it. Many may expect those direct communications to have a greater degree of privacy.

Mastodon could implement direct message end-to-end encryption in the future for its clients. Engineering such a feature would not be trivial, but it would give users a good mechanism to protect their messages to one another. We hope to see this feature implemented for all users, but even a single forward-looking instance could choose to implement it for its users. In the meantime, if you need truly secure end-to-end direct messaging, we suggest using another service such as Signal or Keybase.

For all its pitfalls, Twitter had long had a strong security team until recently. Mastodon is a largely volunteer-built platform undergoing growing pains, and some prominently used forks have had (and fixed) embarrassing vulnerabilities of late. Though it’s been around since 2016, the new influx of users and the new importance the platform has taken on will be a trial by fire. We expect more bugs will be shaken out before too long.

Two-factor authentication with an app or security key is available on Mastodon instances, giving users an extra security check to log on. The software also offers robust privacy controls, allowing users to set up automatic deletion of old posts, set personalized keyword filters, approve followers, and hide their social graph (the list of their followers and those they follow). Unfortunately, there is no analogue to making your account “private.” You can make a post viewable only by your followers at the time of posting, but you cannot change the visibility of your previous posts (either individually or in bulk).

Another way “fediverse” micro-blogging (i.e., micro-blogging across the whole infrastructure of federated servers that communicate with each other to provide a service) differs from Twitter, with consequences for user privacy, is that there is no way to do a text search of all posts. This cuts down on harassment, because abusive accounts will have a harder time discovering posts and accounts using keywords typically used by the population they’re targeting (a technique frequently employed by trolls and harassers). In fact, the lack of text search is due to the federated nature of Mastodon: implementing this feature would mean every instance would have to be aware of every post made on every other instance. As it turns out, this is neither practical nor desirable. Instead, users can use hashtags to make their posts propagate to the rest of the fediverse and show up in searches.
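
Hashtag discovery happens per instance: Mastodon exposes a public tag timeline endpoint, so anyone can pull recent posts for a tag from a given server without any global full-text index. Here is a minimal sketch against a hypothetical instance.

```python
# Minimal sketch: fetch recent public posts for a hashtag from one instance
# via Mastodon's API (GET /api/v1/timelines/tag/:hashtag). The instance
# name is hypothetical.
import json
from urllib.request import urlopen

INSTANCE = "https://mastodon.example"   # hypothetical instance

def recent_tagged_posts(tag: str, limit: int = 5):
    url = f"{INSTANCE}/api/v1/timelines/tag/{tag}?limit={limit}"
    with urlopen(url) as response:
        posts = json.load(response)
    return [(post["account"]["acct"], post["url"]) for post in posts]

# for acct, url in recent_tagged_posts("cats"):
#     print(acct, url)
```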

Instances of Mastodon are also able to “defederate” from other instances if they find the content coming from the other instance to be abusive or distasteful, or in violation of their own policies on content. Say server A finds the users on server B to be consistently abusive, and chooses to defederate with it. “Defederating” will make all content from server B unavailable on server A, and users of server B cannot comment on posts of, or direct message, users of server A. Since users are encouraged to join Mastodon instances which align with their interests and attitudes on content and moderation, defederating gives instances and communities a powerful option to protect their users, with the goal of creating a less adversarial and more convivial experience.

Centralized platforms are able to quickly determine the origin of fake or harassing accounts and block them across the entire platform. For Mastodon, it will take a lot more coordination to prevent abusive users that are suspended on one instance from just creating a new account on another federated instance. This level of coordination is not impossible, but it takes effort to establish.

Individuals also have powerful tools to control their own user experience. Just like on the centralized platforms, Mastodon users can mute, block, or report other users. Muting and blocking work much as you’d expect: each is a list associated with your account; muting stops a user's content from appearing in your feed, while blocking also prevents them from reaching out to you. Reporting is a bit different: since there is no centralized authority removing user accounts, this option allows you to report an account to your own instance’s moderators. If the user being reported is on the same instance as you, the instance can choose to suspend or freeze that user account. If the user is on another instance, your instance may block that user for all its users, or (if there is a pattern of abuse coming from that instance), it may also choose to defederate as described above. You can additionally choose to report the content to the moderators of that user's instance, if desired.
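
These same controls are exposed through Mastodon’s API, which is how third-party clients offer them. Below is a minimal sketch; the instance, access token, and account id are hypothetical placeholders.

```python
# Minimal sketch of the mute / block / report endpoints in Mastodon's API.
# The instance, token, and account id below are hypothetical.
import json
from urllib.request import Request, urlopen

INSTANCE = "https://mastodon.example"
TOKEN = "YOUR_ACCESS_TOKEN"

def api_post(path, payload=None):
    request = Request(
        f"{INSTANCE}{path}",
        data=json.dumps(payload or {}).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urlopen(request) as response:
        return json.load(response)

account_id = "123456"  # hypothetical account id
# api_post(f"/api/v1/accounts/{account_id}/mute")    # hide their posts from your feeds
# api_post(f"/api/v1/accounts/{account_id}/block")   # also stop them contacting you
# api_post("/api/v1/reports", {"account_id": account_id,
#                              "comment": "spam"})   # flag to your instance's moderators
```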

Federation gives Mastodon users a fuzzy “small town” feeling because your neighbors are those on the same instance as you. There’s even a feed just for your neighbors: “Local.” And since you’re likely to choose an instance with others of similar interests and moderators who want to protect their community of users, they are likely to tweak their moderation practices in a way that keeps their users’ accounts private from groups and individuals who may be predatory or adversarial.

There is a concern that Mastodon may promote insular communities and echo chambers. In some ways this is a genuine risk: encouraging users to join communities of their own interests may make them more likely to encounter other users who are just like them. However, for some people this will be a benefit. The instantaneous, universal reach of posts made on Twitter puts everyone within swinging distance of everyone else, and the mechanisms to filter out hateful content have in the past been widely criticized as ineffective, arbitrary, and without recourse. More recently, even those limited and ineffective mechanisms have been met by Twitter’s new leader with open hostility. Is it any wonder why users are flocking to the small town with greener pastures, one that allows you to easily move your house to the next town over if you don’t like it there, rather than bulldozing the joint?

In 2022, user experience is a function of the privacy and content policies offered by services. Federation makes it possible to have a diverse landscape of different policies, attentive to their own users' attitudes and community customs, while still allowing communication outside those communities to a broader audience. It will be exciting to see how this landscape evolves.

Bill Budington

The Fediverse Could Be Awesome (If We Don’t Screw It Up)

2 weeks 6 days ago

This post is part of a series on Mastodon and the fediverse. We also have a post on what the fediverse is and a post on security and privacy on Mastodon, and more are on the way. You can follow EFF on Mastodon here.

Something remarkable is happening. For the past two weeks, people have been leaving Twitter. Many others are reducing their reliance on it. Great numbers of ex-Twitter users and employees are making a new home in the “fediverse,” fleeing the chaos of Elon Musk’s takeover. This exodus includes prominent figures from civil society, tech law and policy, business and journalism.  It also represents a rare opportunity to make a better corner of the internet…if we don’t screw it up.

The fediverse isn’t a single, gigantic social media platform like Facebook or Twitter. It’s an expanding ecosystem of interconnected social media sites and services that let people interact with each other no matter which one of these sites and services they have an account with. 

That means that people can tailor and better control their experience of social media, and be less reliant on a monoculture sown by a handful of tech giants. 

The major platforms have already screwed it up, but now we have the chance to get it right and build something better. 

Today’s most popular fediverse service is called Mastodon. Mastodon is a Twitter-like service anyone can host and alter to suit their needs. Each server (or “instance”) can experiment and build its own experience for users, and those users aren’t stuck using services they don’t like just because their contacts are on that service. 

Mastodon is just one corner of the fediverse, and it might not be for everyone. More importantly, Mastodon runs on an open protocol called ActivityPub – a powerful and flexible way to link up all kinds of services and systems. This means all the features of Mastodon are just a sliver of a vast universe of interoperable services.

The fediverse is an evolving project, and it won’t solve all of the challenges that we’ve seen with big social media platforms. Like other distributed systems, it does have some drawbacks and complications. But a federated social media ecosystem represents some possible escape hatches from some of the more serious problems we have been experiencing in the centralized, platform social media world.  

To be clear: no technology can save us from ourselves, but building a more interoperable social media environment may be a chance to have a do-over on the current lock-in model. It could be awesome, if we don’t screw it up.  

Social Media Platforms Already Screwed It Up.

Up until a few weeks ago, most social media users were trapped in a choice between fiefdoms. Many of the problems users contend with online are downstream of this concentration.

Take privacy: the default with incumbent platforms is usually an all-or-nothing bargain where you accept a platform’s terms or delete your account. The privacy dashboards buried deep in the platform’s settings are a way to tinker in the margins, but even if you untick every box, the big commercial services still harvest vast amounts of your data. To rely on these major platforms is to lose critical autonomy over your privacy, your security, and your free expression.

This handful of companies also share a single business model, based upon tracking us. Not only is this invasion of privacy creepy, but also the vast, frequently unnecessary amount and kinds of data being collected – your location, your friends and other contacts, your thoughts, and more – are often shared, leaked, and sold. They are also used to create inferences about you that can deeply impact your life.  

Even if you don’t mind having your data harvested, the mere act of collecting all of this sensitive information in one place makes for a toxic asset. A single bad lapse in security can compromise the privacy and safety of hundreds of millions of people. And once gathered, the information can be shared or demanded by law enforcement. Law enforcement access is even more worrisome in post-Dobbs America, where we already see criminal prosecutions based in part upon people’s social media activities. 

We’re also exhausted by social media’s often parasitic role in our lives. Many platforms are optimized to keep us scrolling and posting, and to prevent us from looking away. There’s precious little you can do to turn off these enticements and suggestions, despite the growing recognition that they can have a detrimental effect on our mental health and on our public discourse.  Dis- and misinformation, harassment and bullying have thrived in this environment. 

There’s also the impossible task of global content moderation at scale. Content moderation fails on two fronts: first, users all over the world have seen that platforms fail to remove extremely harmful content, including disinformation and incitement that is forbidden by the platforms’ own policies. At the same time, platforms improperly remove numerous forms of vital expression, especially from those with little social power. To add insult to injury, users are given few options for appeal or restoration. 

These failures have triggered a mounting backlash.  On both sides of the U.S. political spectrum, there’s been a flurry of ill-considered legislation aimed at regulating social media moderation practices. Outside of the U.S. we’ve seen multiple “online harms” proposals that are likely to make things worse for users, especially the most marginalized and vulnerable, and don’t meaningfully give everyday people more say over their online life. In some places, such as Turkey, bad legislation is already a reality.  

How the Fediverse Can Get it Right

You don’t fix a dictatorship by getting a better dictator. You have to get rid of the dictator. This moment offers the promise of moving to a better and more democratic social media landscape.

Instead of just fighting to improve social media, digital rights advocates, instance operators, coders and others have an opportunity to build atop an interoperable, open protocol, breaking out of the platform lock-in world we’re in now. This fresh start could spur more innovation and resilience for communities online—but only if we make careful choices today.  

To be clear: there is nothing magical about federated worlds. If federated social media turns out better than the centralized incumbents, it will be because people made a conscious choice to make it better, not because of any technological determinism. Open, decentralized systems offer new choices toward a better online world, but it’s up to us to make those choices.

Here are some choices we hope that the operators and users of federated systems will make: 

  1. Adopt the Santa Clara Principles on content moderation:  The shift to smaller federated instances creates more opportunities for better transparency, due process and accountability for content moderation. EFF, along with a broad international coalition of NGOs, has developed a set of principles for content hosts that support the basic human rights of users. We hope that most of the fediverse makes these recommendations for user protections the baseline and even exceeds them, especially for larger hosts in the network.  
  2. Community and local control: The fediverse is set up to facilitate community and local control, so we’ll be watching to see how that develops. Mastodon instances already have very different political ideologies and house rules. Though not part of the ActivityPub network, Gab and TruthSocial are built on forks of Mastodon. We’re already seeing the fediverse self-sorting, with some services choosing to connect to - or block - others, based on their users’ preferences. There are bold experiments in democratic control of instances by users. A social internet where users and communities get to set their own rules can give us better outcomes than resting our hopes on the whims of shareholders or one rich guy with a bruised ego.
  3. Innovation in content moderation: The fediverse itself can’t ban users, but the owners of each server have moderation tools and shared blocklists to cater to the experience their users expect. We’re already seeing these approaches improve. Collaboration in moderation tools can facilitate both cooperation and healthy competition among instances based on what rules they set and how they enforce them. When it comes to protecting their users from bad actors, operators shouldn’t need to start from scratch, but they should preserve the option to make their own choices, even if those are different from other instances or blocklist maintainers. And users can then choose which operators to rely upon.
  4. Lots of application options: Mastodon is the current application of choice for many, but it’s not the only possibility. ActivityPub supports many different application strategies. We’ll be watching to see if innovation continues both on Mastodon and beyond it, so communities can develop different ways of using social media, while still maintaining the ability to connect through federation, as well as disconnect when those relationships break down. 
  5. Remixability: Competitors, researchers, and users should be able to use platforms in creative and unexpected ways, and shouldn’t face legal repercussions for building an add-on just because a service does not approve of it or because it competes with it. The free, open nature of ActivityPub gives us a running start, but we should be on the lookout for efforts to fence in users and fence out innovators. Importantly, this tinkering shouldn’t be a free-for-all: it should be limited by laws protecting users’ privacy and security. We still need strong privacy laws when we move to the fediverse and we still don’t have them. 
  6. Lots of financial support models: The current fediverse mostly runs on volunteer creativity, time, and love. That’s terrific, but as many more people join, the burdens on system operators increase. We want to see a wide range of support models develop: community supported models; academic institutions; municipal and other governmental supported models; and philanthropic support, just to name a few. Business models could include subscriptions, contextual ads, or something else entirely— but (no surprise) we’d like to see behavioral tracking ads bite the dust entirely.
  7. Global accessibility: A new global social media paradigm needs to be open to everyone. This means putting an emphasis on all kinds of accessibility. This includes people with visual or other disabilities, but also must include communities in the global south who are often overlooked by developers in the north. Making it easy to implement new languages or features which serve a particular accessibility or cultural need is essential, as are operators making the choice to offer these accessible features.  
  8. Resisting government interference/Have Your Users' Backs: When governments, local or abroad, crack down on instances, users should know what to expect from site operators and have a clear contingency plan. This could be a seamless portability process to let users easily move between instances as they’re blocked or removed, or implementing solutions which allow servers to collaborate in resisting these governmental forces when necessary. EFF’s 2017 Who Has Your Back safeguards are a good place to start.   
  9. True federation: Users of similar services should be able to communicate with one another across platform borders. Services that choose to plug in to the fediverse should make it possible for their users to interact with users of competing services on equal terms, and should not downgrade the security or privacy or other essential features for those coming in from outside. 
  10. Interoperability and preventing the next lock-in: The flip side of supporting true federation is avoiding lock-in. Let’s not abolish today’s dictators only to come under the thumb of future ones. Even more so, let’s be sure the current tech giants don’t use their market power to take over the fediverse and lock us all into their particular instance. We must be ready to fight “embrace, extend, and extinguish” and other legal and technical lock-in strategies that companies like Microsoft and Facebook have tried in the past. Here are some principles to stand up for:
    • Stopping Anticompetitive Behavior:  New services should not lock out competitors and follow-on innovators simply to avoid competition. They must not abuse the CFAA, DMCA Section 1201, or overreaching terms of service. Today’s tech giants like to cloak their anticompetitive bullying in the language of concern over privacy, content moderation and security. Privacy-washing and security-washing have no place in a federated, open internet.  
    • Portability:  It should be simple for a user to migrate their contacts, the content they’ve created, and other valuable material from one service to a competing service, and to delete their data from a platform that they’ve chosen to leave. Users should not be locked into accounts.
    • Delegability: Users of a service should be able to access it using custom tools and interfaces developed by rivals, tinkerers, or the users themselves. Services should allow users to delegate core functions—like reading news feeds, sending messages, and engaging with content—to third-party clients or services.

Up until very recently, it was easier to imagine the end of the internet than it was to imagine the end of the tech giants. The problems of living under a system dominated by unaccountable, vast corporations seemed inescapable. But growth has stagnated for these centralized platforms, and Twitter is in the midst of an ugly meltdown. It won’t be the last service to disintegrate into a thrashing mess. 

Our hearts go out to the thousands of workers mistreated or let go by the incumbent players. The major platforms have already screwed it up, but now we have the chance to get it right and build something better. 

Cindy Cohn

Celebrating the Life of Aaron Swartz: Nov. 12 and Nov. 13

3 weeks 4 days ago

This weekend, EFF is celebrating the life and work of programmer, activist, and entrepreneur Aaron Swartz by participating in the 2022 Aaron Swartz Day and Hackathon. This year, the event will be held in person at the Internet Archive in San Francisco on Nov. 12 and Nov. 13. It will also be livestreamed; links to the livestream will be posted each morning.

Those interested in attending in-person or remotely can register for the event here.

Aaron Swartz was a digital rights champion who believed deeply in keeping the internet open. His life was cut short in 2013, after federal prosecutors charged him under the Computer Fraud and Abuse Act (CFAA) for systematically downloading academic journal articles from the online database JSTOR. Facing the prospect of a long and unjust sentence, Aaron died by suicide at the age of 26.

EFF was proud to call Aaron a friend and ally. This year, several EFF staffers are speaking about work that carries his spirit and legacy: EFF Executive Director Cindy Cohn; Special Advisor Cory Doctorow; Director of Engineering, Certbot Alexis Hancock; Investigative Researcher Beryl Lipton; and Grassroots Organizer Molly de Blanc.

Those interested in working on projects in Aaron's honor can also contribute to the annual hackathon, which this year includes several projects: SecureDrop, Bad Apple, the Disability Technology Project (Sat. only), and EFF's own Atlas of Surveillance. In addition to the hackathon in San Francisco, there will also be concurrent hackathons in Ecuador, Argentina, and Brazil. For more information on the hackathon and for a full list of speakers, check out the official page for the 2022 Aaron Swartz Day and Hackathon.

Hayley Tsukayama

EFF Files Amicus Brief Challenging Orange County, CA’s Controversial DNA Collection Program

3 weeks 4 days ago

Should the government be allowed to collect your DNA—and retain it indefinitely—if you’re arrested for a low-level offense like shoplifting a tube of lipstick, driving without a valid license, or walking your dog off leash? We don’t think so. As we argue in an amicus brief filed in support of a case called Thompson v. Spitzer at the California Court of Appeal, this practice not only impinges on misdemeanor arrestees’ privacy and liberty rights, but also violates the California Constitution. 

Since 2007, the Orange County District Attorney’s Office (OCDA) has been running an expansive program that coerces thousands of Orange County residents annually to provide a DNA sample in exchange for dropping charges for low-level misdemeanor offenses. Through the program, the OCDA has amassed a database of over 182,000 DNA profiles, larger than the DNA databases of 25 states. OCDA claims a right to indefinitely retain the DNA samples it collects and to share them with third parties who may use them in new and unknown ways in the future. Unlike state and federal arrestee DNA databases, OCDA does not allow anyone to have their DNA expunged from its database.

In 2021, two criminology professors from the University of California, Irvine, William Thompson and Simon Cole, challenged OCDA’s program using a legal process called “taxpayer standing.” Under this process, anyone who pays taxes in the state can file a lawsuit to challenge government programs that constitute an illegal expenditure of public funds. This includes programs that violate the state or federal constitution, as alleged in this case.

The plaintiffs sued Orange County and the district attorney, alleging that OCDA’s program violates the California Constitution’s right to privacy. At the trial court, the defendants filed a demurrer (a motion to dismiss the case), arguing misdemeanor arrestees waived their privacy rights by consenting to the collection of their DNA in exchange for having their charges dropped. The trial court granted the motion, and the plaintiffs appealed.

Plaintiffs are right—Orange County’s program violates residents’ constitutional right to privacy and should be stopped. Our DNA contains our entire genetic makeup, which can be used to identify us in the narrow and proper sense of the word. But it also contains some of our most private information, ranging from our biological familial relationships to where our ancestors come from to our predisposition to suffering from certain genetically-determined diseases. Private companies have created a multi-billion dollar industry by purporting to link our DNA to our behavioral traits (are you an introvert or extrovert?), our preferences and aversions (do you like cilantro?), and even our physical appearance (do you have freckles?).

The OCDA’s DNA collection has serious implications for privacy and liberty, not just for the low-level arrestees who give up their DNA under the program, but also their biological relatives and wider communities. As the collection and analysis of DNA has become cheaper and more accessible over the past 30 years, law enforcement has pushed to collect more DNA, extract more information from DNA, and, through familial searching, use DNA to identify more and more people. 

In many instances, the widespread collection of DNA has led to errors such as misidentification of suspects. It has also had a disproportionate impact on certain communities of color. Research shows that across jurisdictions, DNA from Black communities is collected and stored in state-run databases at rates far higher than Black people’s share of the population. In 2006, researchers estimated that, using familial searching, law enforcement could identify over 17 percent of the entire U.S. Black population through existing DNA profiles in the FBI’s CODIS DNA database. Given that this research was conducted when the database contained only 6 million profiles, rather than the 15 million it now contains, this figure could be much higher today.

California law authorizes state and local police to collect DNA from people convicted of crimes and anyone arrested for a felony. It does not authorize the collection of DNA from people arrested for misdemeanors, and Californians have explicitly rejected attempts to change that. The OCDA has been getting around this fact by offering to drop arrestees’ charges in exchange for their DNA. The prosecutor’s office claims arrestees have “consented” to the collection of their DNA and waived their constitutional rights. 

There are strong reasons to believe arrestees’ “consent” is coerced, rather than voluntary.  Those providing their DNA are doing so under extreme circumstances; they are not represented by an attorney, and they may have to make decisions quickly, so they may not understand the full implications of their decision. In high pressure situations such as traffic or Terry stops, or—as here—plea bargaining with a district attorney, research has shown that consent is often a legal fiction, with the vast majority of individuals consenting to law enforcement searches. Moreover, even if a misdemeanor arrestee understands that Orange County can retain their DNA sample indefinitely, they may not understand that their DNA can be tested not just for comparison to crime scene samples, but also for uses such as familial searching—to implicate someone else entirely—or in new and unknown ways in the future.

Given the significant privacy and liberty concerns implicated by Orange County’s program, we argued in our amicus brief that it violates the California Constitution’s privacy clause. With all of the information that DNA can reveal about people’s traits, their biological relatives, and their genetic predisposition for certain illnesses and diseases, it is clear that people have a protected privacy interest in their DNA. This is no less true for misdemeanor arrestees. We urge the Court of Appeal to reverse the trial court’s decision dismissing the case and allow the case to proceed.

Related Cases: Maryland v. King
Jennifer Lynch

The Rise of the Police-Advertiser

3 weeks 5 days ago

In August, the Tulsa police department held a press conference about how its new Automated License Plate Readers (ALPRs), a controversial surveillance technology, were the policing equivalent of “turning the lights on” for the first time. In Ontario, California, the city put out a press release about how its ALPRs were a “vital resource.” In Madison, South Dakota, local news covered how the city’s expenditure of $30,000 for ALPRs “paid off” twice in two days.

All these stories have two things in common: One, they are all about the same brand of ALPRs, Flock Safety. And two, they’re all reminders of how surveillance technology companies are coaching police behind the scenes on how best to tout their products, right down to pre-writing press releases for the police.

Flock Safety has distributed a Public Information Officer Toolkit, providing “resources and templates for public information officers.” A Flock draft press release states:

The ___ Police Department has solved [CRIME] with the help of their Flock Safety camera system. Flock Safety ALPR cameras help law enforcement investigate crime by providing objective evidence. [CRIME DETAILS AND STORY] ____ Police installed Flock cameras on [DATE] to solve and reduce crime in [CITY].

This Mad Libs of a press release is an advertisement, and one Flock hopes your police departments will distribute so that they can sell more ALPRs.

These kinds of police department press releases, and the news coverage that too often quotes them verbatim, should give you an itchy feeling—the same one you get when you know something is being sold to you by a voice leveraging its public standing. And that’s because police have become salespeople. Brand ambassadors. Advertisers.

The trend has been growing for years. Police, on the hunt for easy solutions to the ebbs and flows of crime, are quick to reassure residents they have found the technological silver bullet. But police must also overcome growing community concerns about surveillance technology, and find ways to justify license plate readers that result in innocent people being pulled from their cars at gunpoint, face recognition that too often misidentifies people, and acoustic gunshot detection technology shown by studies to not work well. To do this, police and companies work together to justify the often-shocking expenditures for some of this tech (which these days might be coming from a city’s COVID relief money).

Flock is not alone. In its 2021 annual report to the SEC, ShotSpotter, an acoustic gunshot detection company, reports that its marketing team “leveraged our extremely satisfied and loyal customer base to create a significant set of new ‘success stories’ that show proof of value to prospects…. In the area of public relations, we work closely with many of our customers to help them communicate the success of ShotSpotter to their local media and communities.”

What do police get out of these relationships? For one, they can get easier access to digital evidence. Why knock on doors or get a warrant to access a doorbell camera’s footage, when an officer can send an email request to the company that manages the equipment?

But some police get more than just surveillance out of it. One investigation into Amazon’s surveillance doorbell, Ring, found that Los Angeles Police Department officers were given discount codes—and the more devices purchased with that code, the more free devices were given to the officer. In this situation, how would a person know whether the officer encouraging them to purchase a security camera is making an independent recommendation, or hoping to win increased perks from the company? The LAPD has since launched an investigation into their officers’ relationships with the surveillance company.

In police, surveillance technology companies have found the perfect advertisers. They are omnivorous buyers with deep pockets, they want to show voters they’re being proactive about crime, and the news apparatus all too often takes their word as sacrosanct and their motives as unquestionable.

In his farewell address, President Dwight Eisenhower warned of the formation of a military-industrial complex, a financial arrangement in which producing the tools of war would be so lucrative that manufacturers would have a vested interest in ensuring the United States always stayed on a wartime footing. We must also beware of a police-industrial complex. As people’s fear of crime continues to grow, regardless of the reality of crime in America, companies and police will be all too eager, for profit or reputation, to apply balm to that panic in the form of increasingly expensive surveillance technology.

Matthew Guariglia

EFF Award Winner: Kyle Wiens

3 weeks 6 days ago

For over thirty years, the Electronic Frontier Foundation (EFF) has awarded those paving the way for freedom and innovation in the digital world. Countless luminaries working in digital privacy and free speech gathered for this Pioneer Award Ceremony in San Francisco over the decades. This year, we are excited to relaunch that annual celebration as the first-ever EFF Awards!

The EFF Awards is a new ceremony dedicated to the growing digital rights communities whose technical, social, economic, and cultural contributions are changing the world. We can feel the impact of their work in diverse fields such as journalism, art, digital access, legislation, tech development, and law.

All are invited to attend the EFF Awards ceremony! The celebration will begin at 6 pm PT, Thursday, November 10 at The Regency Lodge, 1290 Van Ness Ave. in San Francisco. Register today to attend in person. At 7 pm PT, the awards ceremony will stream live and free on Twitch, YouTube, Facebook, and Twitter.

We are honored to present our three winners of this year's EFF Awards: Alaa Abd El-Fattah, Digital Defense Fund, and Kyle Wiens. But before the ceremony kicks off, we want to take a closer look at each of our honorees. Up next, Kyle Wiens, EFF Award for Right to Repair Advocacy:

Kyle Wiens, 38, is CEO and co-founder of iFixit and a godfather of the Right to Repair movement who has empowered millions of people to fix their own goods, keep jobs, and reduce waste, while also helping to win major exemptions to the Digital Millennium Copyright Act.

Wiens and Luke Soules launched iFixit in 2003 in their dorm room at California Polytechnic State University San Luis Obispo, posting a step-by-step repair guide online for Wiens’ broken laptop. As manufacturers used legal threats to limit access to repair manuals, they worked with their rapidly growing community to create homemade step-by-step illustrated repair guides of their own. Today, iFixit is a collaborative effort spanning thousands of fixers, repair seekers, and translators that provides over 80,000 free open-source repair guides for many thousands of devices. The company takes apart and rates products for ease of repairability, inspiring labeling regulations in Europe that are shaping new product designs.

Starting in 2012, Wiens joined with EFF and other organizations to successfully petition the U.S. Copyright Office for sweeping Right to Repair reforms including the rights to repair medical equipment, unlock phones, repair vehicles and farm equipment by modifying their software, and have third parties perform repairs on an owner’s behalf. In the latest round, the Copyright Office granted a broad exemption for repair of smartphones, home appliances, or home systems. Wiens also has been a stalwart crusader for congressional action to clarify and codify fixers’ and consumers’ rights.

Through it all, Wiens has tirelessly championed the Right to Repair not only as a basic digital freedom, but also as a job creator for countless fixers and a sustainable environmental imperative that keeps tons of e-waste out of landfills. “Our planet is impacted by consumption to a degree that no one expected and few understand,” he says. “I’m trying to get a handle on it.”

Register Today!

Attend the first-ever EFF Awards In Person

Christian Romero

Sacramento County Resident Joins EFF Lawsuit After Illegal Sharing of His Electricity Usage Data Makes Him a Target of Law Enforcement

3 weeks 6 days ago

The Sacramento Municipal Utility District (SMUD) and the Sacramento Police Department are running an illegal data sharing scheme, with the police making bulk requests for customers’ energy usage data to enforce a cannabis grow ordinance, according to a new EFF lawsuit.

The secret data sharing arrangement violates SMUD customers’ privacy rights under state law and the California Constitution, while disproportionately subjecting Asian and Asian American communities to police scrutiny.

Alfonso Nguyen knows all too well the harms that resulted after his home energy data was shared with law enforcement. Nguyen is a resident of Sacramento County and has owned a home for over 20 years. An immigrant from Vietnam, Nguyen is an adjunct counselor working in disability support programs at a nearby community college. He lives with his elderly mother.

Like nearly all other residents in the area, electricity to his home is supplied by SMUD, the community-owned local utility.

One evening between 2015 and 2017, two deputies from the Sacramento County Sheriff’s Department showed up at his home around 9 pm wanting to search it. They didn’t have a warrant, so Nguyen asserted his rights and said they could not enter. One of the deputies then pushed open his door, pushed past Nguyen, and searched the home. The search yielded nothing.

The sheriff’s department didn’t stop there. In 2020, two deputies came a second time—this time saying SMUD told them the home was using too much electricity, and accusing Nguyen of growing marijuana, which neither he nor anyone else in his home has ever done.

One deputy put his hand over his holstered gun, as if preparing to draw it. The deputy yelled at Nguyen, called him a liar, and threatened to return with a warrant and arrest him. Later, when Nguyen contacted SMUD, the utility first denied that it had shared any of his energy usage data before admitting it had disclosed the data to law enforcement.

It was far from an isolated incident. Since at least 2017, SMUD has handed out protected customer data to the Sacramento Police Department, which asks for it by zip code on an ongoing basis—without a warrant, any other court order, or any suspicion of a particular resident—to find possible illicit cannabis grows. The practice has been a big money maker for Sacramento, yielding at least $100 million in fines over two years, levied on owners of properties where cannabis was found. About 86 percent of those penalties were levied upon people of Asian descent.

SMUD’s bulk disclosure of customer utility data turns its entire customer base into potential leads for law enforcement to chase.

EFF and law firm Vallejo, Antolin, Agarwal, and Kanter LLP filed a lawsuit September 22 challenging the practice on grounds that it violates SMUD customers’ privacy rights. State law says public utilities generally “shall not share, disclose, or otherwise make accessible to any third party a customer’s electrical consumption data ....” The California Constitution’s search and seizure clause prohibits unlawful searches absent, at minimum, individualized reasonable suspicion of a violation of the law. But law enforcement lacks any suspicion until SMUD discloses its lists of consumers’ energy data.

The program also targets Asian homeowners for fines. According to the amended complaint, a SMUD analyst who provided data to police excluded homes in a predominantly white neighborhood. One police official removed non-Asian names from a SMUD list and sent only Asian-sounding names onward for further investigation.

EFF is representing in the lawsuit the Asian American Liberation Network, a Sacramento-based nonprofit; Khurshid Khoja, an Asian American Sacramento resident, SMUD customer, and cannabis industry lawyer and rights advocate; and now Nguyen, who has joined the case because he wants to ensure that no one else is ever subjected to the type of law enforcement encounters he endured.

“The illegal sharing of customers’ private energy usage between SMUD and law enforcement has to stop,” Nguyen told EFF. “SMUD should be working for its customers, not the police.”

Related Cases: Asian American Liberation Network v. SMUD, et al.
Karen Gullo

The Filter Mandate Bill Is a Privacy and Security Mess

3 weeks 6 days ago

Among its many other problems, the Strengthening Measures to Advance Rights Technologies Copyright Act would mandate a slew of filtering technologies that online service providers must "accommodate." That mandate is so broad, so poorly conceived, and so technically misguided that it will inevitably create serious privacy and security risks.

Since 1998, the Digital Millennium Copyright Act (DMCA) has required services to accommodate "standard technical measures" to reduce infringement. The DMCA’s definition of standard technical measures (STMs) requires them to be developed by a broad consensus in an open, fair, multi-industry, and, perhaps most importantly, voluntary process. In other words, current law reflects an understanding that most technologies shouldn’t be adopted as standards because standards affect many, many stakeholders who all deserve a say.

But the filter mandate bill is clearly designed to undermine the measured provisions of the DMCA. It changes the definition of standard technical measures to also include technologies supported by only a small number of rightsholders and technology companies. 

It also adds a new category of filters called "designated technical measures" (DTMs), which must be "accommodated" by online services. "Accommodating" is broadly defined as "adapting, implementing, integrating, adjusting, and conforming" to the designated technical measure. A failure to do so could mean losing the DMCA’s safe harbors and thereby risking crushing liability for the actions of your users.  

The Copyright Office would be in charge of designating those measures. Anyone can petition for such a designation, including companies that make these technologies and want to guarantee a market for them.

The sheer breadth of potential petitions would put a lot of pressure on the Copyright Office—which exists to register copyrights, not evaluate technology. It would put even more pressure on people who have internet users' rights at heart—independent creators, technologists, and civil society—to oppose the petitions and present evidence of the dangers they'd produce. Those dangers are far too likely, given the number of technologies that the new rules would require services to "accommodate."

Requiring This "Accommodation" Would Endanger Security

The filter mandate allows the Copyright Office to mandate "accommodation" for both specific technologies and general categories of technologies. That opens up a number of security issues.

There’s a reason that standardization is a long, arduous process: it exists to find all the potential problems before a technology is required across the board. Requiring unproven, unaudited technology to be universally deployed would be a disaster for security.

Consider a piece of software developed to scan uploaded content for copyrighted works. Even leaving aside questions of fair use, the bill text places no constraints on the security expertise of the developer. At large companies, third-party software is typically thoroughly audited by an in-house security team before being integrated into the software stack. A law, especially one requiring only the minimal approval of the Copyright Office, should not be able to bypass these checks, and it certainly shouldn’t mandate adoption by companies that lack the resources to perform such audits themselves. Poorly implemented software leaves security vulnerabilities that malicious hackers can exploit to exfiltrate the personal information of a service’s users.

Security is hard enough as it is. Mistakes that lead to database breaches happen all the time even with teams doing their best at security; who doesn’t have free credit reporting from a breach at this point? With this bill, what incentive does a company that makes content-matching technology have to invest the time and money into building secure software? The Copyright Office isn’t going to check for buffer overflows. And what happens when a critical vulnerability is found after software has been approved and widely implemented? Companies will have to choose between turning the software off, which means giving up their DMCA protection and risking being sued out of existence, or leaving their users exposed to the bug. No one wins in that scenario, and users lose the most.
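
To make that risk concrete, below is a deliberately simplified, hypothetical C sketch of the kind of memory-safety bug that slips through when no independent security review is required. Nothing in it is drawn from any real filtering product; the file format, the parse_fingerprint function, and all other names are invented for illustration.

    /* Hypothetical sketch only: a toy "content fingerprint" parser of the kind
       a mandated upload filter might embed. The bug below is a classic
       unchecked-length copy. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define FP_SIZE 64  /* fixed fingerprint length the matcher expects */

    /* Parses an uploaded file's metadata block: [1-byte length][fingerprint]. */
    static int parse_fingerprint(const uint8_t *upload, size_t upload_len,
                                 uint8_t out_fp[FP_SIZE])
    {
        if (upload_len < 1)
            return -1;

        uint8_t claimed_len = upload[0];  /* attacker-controlled length field */

        /* BUG: trusts the uploader's length field. If claimed_len > FP_SIZE,
           memcpy writes past the 64-byte buffer, corrupting adjacent memory;
           in the worst case that lets an attacker run code on the server that
           holds users' personal data. The fix is a bounds check first:
           if (claimed_len > FP_SIZE || (size_t)claimed_len + 1 > upload_len)
               return -1; */
        memcpy(out_fp, upload + 1, claimed_len);
        return 0;
    }

    int main(void)
    {
        uint8_t fp[FP_SIZE];
        uint8_t benign[1 + FP_SIZE] = { FP_SIZE };  /* well-formed input */

        if (parse_fingerprint(benign, sizeof benign, fp) == 0)
            puts("fingerprint parsed");
        /* A malicious upload that set the length byte to 255 would overflow
           fp[] in the vulnerable version above. */
        return 0;
    }

A one-line bounds check prevents the overflow, but someone with security expertise has to be looking for it, and that is exactly the review step the bill never requires before a measure is designated and universally deployed.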

"Accommodation" Would Also Hurt Privacy

Similar concerns arise over privacy. It’s bad enough that potential bugs could be exploited to divulge user data, but this bill also leaves the door wide open for direct collection of user data. That’s because a DTM could include a program that identifies potential infringement by collecting personal data while a service is being used and then sends that data directly to an external party for review. The scale of such data collection would blow the Cambridge Analytica scandal out of the water, as it would be required across all services, for all of their users. It’s easy to imagine how such functionality would be a dream for copyright enforcers—a direct way to track down and contact infringers, no action required by the service provider—and a nightmare for user privacy.

Even technology that collects information only when it detects use of copyrighted media on a service would be disastrous for privacy. The bill places no restrictions on the channels for content sharing that would fall under these provisions. Publicly viewable content is one thing, but providers could be forced to apply scanning technology to all content that crosses a platform—even if it’s only sent in a private message. Worse, this bill could be used to require platforms to scan the contents of encrypted messages between users, which would fundamentally break the promise of end-to-end encryption. If someone sends a message to a friend, but the scanning software tattles on them to the service or even directly to a media company, that’s simply not end-to-end encryption. Even in the best case, assuming that the software works perfectly as intended, there’s no way to require it across all activities of a service and also allow end-to-end encryption. If information about the contents of a message can be leaked, that message can’t be considered encrypted. In practice, this would happen regularly even for fair use content, as a human would likely have to review it.

The Copyright Office is supposed to "consider" the effect a technology would have on privacy and data security, but it doesn’t have to make it a priority over the multitude of factors it must also "consider." Furthermore, evaluating the privacy and security concerns requires a level of technological expertise that is outside the office's current scope. If a company says that its technology is safe and there is no independent technologist to argue against it, the Copyright Office might just accept that representation. A company has an interest in defining "secure" and "private" in a way that they can claim their product meets; a user or security expert might define it very differently. Companies also do not have an interest in saying exactly how their technology does what it claims, making it even harder to evaluate the security and privacy issues it might raise. Again, the burden is on outside experts to watch the Copyright Office proceedings and provide information on behalf of the millions of people who use the internet.

This bill is a disaster in the making. Ultimately, it would require any online service, under penalty of owing hundreds of thousands of dollars to major rightsholders, to endanger the privacy and security of their users. We all have a right to free expression, and we should not have to sacrifice privacy and security when we rely on a platform to exercise that right online.

Katharine Trendacosta

EFF Award Winner: Digital Defense Fund

4 weeks ago

For over thirty years, the Electronic Frontier Foundation (EFF) has awarded those paving the way for freedom and innovation in the digital world. Countless luminaries working in digital privacy and free speech gathered for this Pioneer Award Ceremony in San Francisco over the decades. This year, we are excited to relaunch that annual celebration as the first-ever EFF Awards!

The EFF Awards is a new ceremony dedicated to the growing digital rights communities whose technical, social, economic, and cultural contributions are changing the world. We can feel the impact of their work in diverse fields such as journalism, art, digital access, legislation, tech development, and law.

All are invited to attend the EFF Awards ceremony! The celebration will begin at 6 pm PT, Thursday, November 10 at The Regency Lodge, 1290 Van Ness Ave. in San Francisco. Register today to attend in person. At 7 pm PT, the awards ceremony will stream live and free on Twitch, YouTube, Facebook, and Twitter.

We are honored to present our three winners of this year's EFF Awards: Alaa Abd El-Fattah, Digital Defense Fund, and Kyle Wiens. But before the ceremony kicks off, we want to take a closer look at each of our honorees. Up next, Digital Defense Fund, EFF Award for Civil Rights Technology:

Digital Defense Fund was launched in 2017 to meet the abortion rights movement’s increased need for security and technology resources after the 2016 election. This “multidisciplinary team of organizers, engineers, designers, abortion fund and practical support volunteers” provides digital security and technology support to abortion rights and provider organizations as well as individual organizers.

“DDF’s commitment to building resources for a thriving, resilient, growing abortion access movement has strengthened the field’s transition to digital strategies,” said Cynthia Conti-Cook, a technology fellow working with the Ford Foundation’s Gender, Racial, and Ethnic Justice team. “Its generous contributions and collaborations with other movements makes DDF so much more than an abortion access digital services organization—it has become a model for embedding movement-aligned technical expertise and a platform for fostering cross-movement learning and strategies.”

The fund’s staff provides digital security evaluations, conducts staff training, maintains a library of go-to resources on reproductive justice and digital privacy, and builds software for abortion access organizations.

DDF’s mission to leverage technology to defend and secure access to abortion became even more crucial with this year’s U.S. Supreme Court decision in Dobbs v. Jackson Women's Health Organization, which ended the half-century of abortion rights protected under Roe v. Wade. Despite this setback and the ensuing proliferation of state abortion bans, DDF continues to pursue its vision of “a future where technology and innovation support secure, autonomous reproductive decisions, free from stigma.”

“I don’t think as a culture we recognize, respect, or take care of our digital selves,” DDF Director Kate Bertash tweeted in July. “The me that lives in my machines and across cloud servers of our online spaces is the me that most interacts with the wider world. She deserves as much privacy and protection as physical me.”

Register Today!

Attend the first-ever EFF Awards In Person

Christian Romero

Demand Your Right to Repair in New York State

4 weeks ago

New York's legislature passed a landmark right-to-repair bill this year. Now it's up to Governor Hochul to make it law.

Back in June, we asked New Yorkers to contact their Assemblymembers about the Digital Fair Repair Act, a landmark repair bill in New York. The bill passed both the Senate and Assembly, but today it sits on Governor Hochul’s desk waiting to be signed. New Yorkers: we need you to speak up today to expand your digital rights. Tell Governor Hochul to sign A7006-B (Fahy), which would make it easier to repair phones, tablets, and other digital electronic equipment and make these repairs more accessible for all New Yorkers. Signing it would make New York the first state in the country to pass a broad right-to-repair bill into law.

Take Action

New York: Speak up for Your Right to Repair

This measure has the support of leading national proponents of the right to repair. It requires companies to give people access to what they need to fix their stuff by selling spare parts and special tools on “fair and reasonable terms.” It also gives all customers and third-party repair technicians access to repair information and software, and the ability to apply firmware patches.

Establishing a right to repair in New York makes it easier for people to fix their broken devices, helps independent businesses, and helps the environment. Tell Gov. Hochul to sign it into law today. A right to repair is about actually owning the devices you have: you get to decide what happens with a device when it breaks. Repairing devices prevents unnecessary electronic waste because, rather than throwing something away, you can get it fixed or fix it yourself.

To learn more, read this story about the bill and its history in the Albany Times Union.

Take Action

New York: Speak up for Your Right to Repair

Molly de Blanc

EFF Award Winner: Alaa Abd El-Fattah

4 weeks 1 day ago

For over thirty years, the Electronic Frontier Foundation (EFF) has awarded those paving the way for freedom and innovation in the digital world. Countless luminaries working in digital privacy and free speech gathered for this Pioneer Award Ceremony in San Francisco over the decades. This year, we are excited to relaunch that annual celebration as the first-ever EFF Awards!

The EFF Awards is a new ceremony dedicated to the growing digital rights communities whose technical, social, economic, and cultural contributions are changing the world. We can feel the impact of their work in diverse fields such as journalism, art, digital access, legislation, tech development, and law.

All are invited to attend the EFF Awards ceremony! The celebration will begin at 6 pm PT, Thursday, November 10 at The Regency Lodge, 1290 Van Ness Ave. in San Francisco. Register today to attend in person. At 7 pm PT, the awards ceremony will stream live and free on Twitch, YouTube, Facebook, and Twitter.

We are honored to present our three winners of this year's EFF Awards: Alaa Abd El-Fattah, Digital Defense Fund, and Kyle Wiens. But before the ceremony kicks off, we want to take a closer look at each of our honorees. First up: Alaa Abd El-Fattah, EFF Award for Democratic Reform Advocacy:

Alaa Abd El-Fattah, 40, is an Egyptian-British blogger, software developer, and political activist, and perhaps the most high-profile political prisoner in Egypt, if not the entire Arab world.

He was instrumental to the creation of the Arab blogger and techie networks, he ran one of Egypt's longest-running and most-celebrated blogs, and co-founded a trailblazing Egyptian blog aggregator. In 2005, his Manalaa blog won the Special Reporters Without Borders Award in Deutsche Welle's Best of Blogs competition.

Though his arrests for activism date back to 2006, he has been imprisoned by the Egyptian government for all but a few months since the 2013 coup d’etat. He has been repeatedly arrested for alleged crimes such as organizing protests without authorization; most recently he was sentenced in December 2021 to five years in prison for sharing a Facebook post about human rights violations in prison. He began a hunger strike in April 2022, recently passing 200 days, and is reported to be in failing health. As of Monday, November 7, 2022, he is refusing water in addition to his hunger strike, increasing fears for his life.

An anthology of his writing—including some pieces smuggled out from jail—was translated into English by anonymous supporters and published in 2021 as You Have Not Yet Been Defeated. A fierce champion of free expression, an independent judiciary, and government accountability–even at immense personal cost–he still advocates for democratic reforms, tech freedoms, and civil and human rights in Egypt and elsewhere.

“What needs to happen is a complete change in the order of things,” he told RightsCon in 2011, “so that we are making these amazing products, and we’re making a living, but we’re not trying to monopolize, and we’re not trying to control the internet, and we’re not trying to control our users, and we’re not complicit with governments.”

Register Today!

Attend the first-ever EFF Awards In Person

Christian Romero

Turkey's New Disinformation Law Spells Trouble For Free Expression

4 weeks 1 day ago

Turkey’s government recently passed a new law aimed at curbing disinformation that citizens have dubbed the “censorship law,” according to reports. The new law was met with condemnation from both inside the country and abroad.

Troublingly, the vaguely-worded law, passed by parliament on October 13, prescribes three years’ imprisonment for anyone who publishes “false information” with the intent to “instigate fear or panic” or “endanger the country’s security, public order and general health of society.”

This latest law is one of many attempts by the country to restrict its citizens’ internet usage. Dubbed an “enemy of the internet” by Reporters Without Borders several times, Turkey’s government censors thousands of websites and frequently shows up on social media companies’ transparency reports for demanding content removals. The country is also among the world’s top jailers of journalists.

In 2020, at a time when the internet was more vital than ever for citizens the world over, Turkey passed a copycat law reminiscent of Germany’s NetzDG that required large social media companies to appoint a local representative and take down offending content within 48 hours. The law also introduced new powers for courts to order internet providers to throttle social media platforms’ bandwidth by up to 90%, which would effectively block access to those sites in the country.

Now, the disinformation law—which comes just eight months before Turkey’s next major elections—would require companies to remove disinformation within a four-hour time limit. A platform’s obligation to remove content could be triggered by a court order, or Turkey’s Information and Communication Technologies Authority (ICTA). Companies that fail to remove content within the timeframe could face throttling, as with the 2020 law. It also requires companies to report certain information to the ICTA at the agency’s request, including information about the algorithms related to topical hashtags, promoted and demoted content, advertisement policies, and transparency policies.

Companies also face hefty fines if they algorithmically amplify disinformation, which would require them to make certain content less accessible, for instance through demotion. The law also requires companies to hand over information about certain crimes—including child sexual abuse material (CSAM), disinformation, and state secrets—as soon as possible or face throttling.

A new provision, which criminalizes the dissemination of false or misleading information, is even more concerning. The imprisonment of people for sharing content, which could also affect journalists, activists, and platform operators offering journalistic information, is unacceptable. By adopting the most drastic rather than the least restrictive measure to curb disinformation, the new law clearly falls short of international human rights standards and will inevitably lead to far-reaching censorship.

It’s not all bad news. Packaged within these dangerous elements are measures not dissimilar to those included in the EU’s new online platform legislation, the Digital Services Act (DSA); for instance, social network providers will now be obliged to provide clear, understandable, and easily accessible information about which parameters are used to recommend content to users on their websites, and must provide users with an option to limit the use of their personal information, among other things. Nevertheless, this isn’t a case where users should accept the good with the bad: the other provisions simply pose too big a risk to freedom of expression.

Jillian C. York

Stop the Copyright Creep

1 month ago

In 2020, two copyright-related proposals became law despite the uproar against them. The first was the unconstitutional CASE Act. The second was a felony streaming proposal that had never been seen or debated in public. In fact, its inclusion was reported in the news before its text was ever made public; the only way to read it was to dig through the 6,000-page year-end omnibus once it was published. We want to make sure that doesn’t happen again.

Take Action

Tell Congress to Stop the Copyright Creep

No copyright proposal—or copyright-adjacent one—has a place in “must-pass” legislation. Must-pass bills are vital to the running of the country and therefore must be passed and signed into law; they are usually the bills that fund the government for the upcoming year, in all its forms.

Because so many copyright-related bills involve proposals that would harm lawful free expression, they are not the kind of controversy-free proposals that belong in such legislation. Too many important rights hang in the balance, so bills that propose to remove expression for any reason must stand alone and be passed on their own merits, not borrow those of a funding bill. The public deserves to know exactly where their representatives stand on online expression and censorship.

Leaving aside any secret bills like 2020’s felony streaming proposal, there are three terrible bills already on the table.

All three trade some form of protected speech for some corporate profit motive. All three also give a minority with billions of dollars the ability to control the speech of billions of users. That is not acceptable, no matter what the stated reasoning is. In each case, there are good arguments against the proposals and better options for carrying out the stated purpose of each bill.

These proposals, and any like them, should be kept out of the upcoming must-pass bills. They are too flawed and too important to let them evade a public debate and vote on their own merits. Tell Congress to stop copyright from creeping into must-pass laws.

Katharine Trendacosta

EU Lawmakers Must Reject This Proposal To Scan Private Chats

1 month 2 weeks ago

Having a private conversation is a basic human right. Like the rest of our rights, we shouldn’t lose it when we go online. But a new proposal by the European Union could throw our privacy rights out the window. 

LEARN MORE

Tell the European Parliament: Stop Scanning Me

The European Union’s executive body is pushing ahead with a proposal that could lead to mandatory scanning of every private message, photo, and video. The EU Commission wants to open the intimate data of our digital lives up to review by government-approved scanning software, and then have it checked against databases that maintain images of child abuse.

The tech doesn’t work right. And launching a system of “bugs in our pockets” is just wrong, even when it’s done in the name of protecting children. 

We don’t need government watchers reviewing our private conversations, whether it’s AI, bots, or live police. Adults don’t need it, and children don’t need it either. 

If you’re in one of the EU’s 27 member countries, it’s a good time to contact your Member of European Parliament and let them know you’re opposed to this dangerous proposal. Today, our partners at European Digital Rights (EDRi) launched a website called “Stop Scanning Me,” with more information about the proposal and its problems. It features a detailed legal analysis of the regulation, and a letter co-signed by 118 NGOs that oppose this proposal, including EFF. German speakers may also want to view and share the Chatkontrolle Stoppen! Website run by German civil liberties groups. 

Even if you’re not an EU resident, this regulation should still concern you. Large messaging platforms won’t withdraw from this massive market, even if that means abandoning privacy and security commitments to their users. That will affect users around the globe, even those who don’t regularly communicate with people in the EU.

“Detection Orders” To Listen To Private Conversations

The EU’s proposed Child Sexual Abuse Regulation (CSAR) is a disappointing step backwards. In the past, the EU has taken the lead on privacy legislation that, while not perfect, has moved in the direction of increasing, rather than decreasing, peoples’ privacy, such as the General Data Protection Regulation (GDPR) and the e-Privacy Directive. But the CSA Regulation goes in the opposite direction. It fails to respect the EU Charter of Fundamental Rights and undermines the recently adopted Digital Services Act, which already gives powers to authorities to remove illegal content.

The proposal requires online platforms and messaging service providers to mitigate abusive content and incentivizes general monitoring of user communications. But if “significant” risks of online child sexual abuse remain after these mitigations—and it’s entirely unclear what this means in practice—law enforcement agencies can send “detection orders” to tech platforms. Once a detection order is issued, the company running the platform could be required to scan messages, photos, videos, and other data using software that’s approved by law enforcement.

With detection orders in place, the platforms won’t be able to host truly private conversations. Whether they’re scanning peoples’ messages on a central server or on their own devices, the CSA regulation simply won’t be compatible with end-to-end encryption. 

Not content with reviewing our data and checking them against government databases of existing child abuse, the proposal authors go much further. The CSAR suggests using algorithms to take guesses at what other images might represent abuse. It even plans to seek out “grooming” by using AI to review peoples’ text messages to try to guess at what communications might suggest future child abuse. 

Large social media companies often can’t even meet the stated promises of their own content moderation policies. It’s incredible that EU lawmakers might now force these companies to use their broken surveillance algorithms to level accusations against their own users of the worst types of crimes. 

The EU Commission is Promoting Crime Detection AI That Doesn’t Work 

It’s difficult to audit the accuracy of the software that’s most commonly used to detect child sexual abuse material (CSAM). But the data that has come out should be sending up red flags, not encouraging lawmakers to move forward. 

  • A Facebook study found that 75% of the messages flagged by its scanning system to detect child abuse material were not “malicious,” and included messages like bad jokes and memes.
  • LinkedIn reported 75 cases of suspected CSAM to EU authorities in 2021. After manual review, only 31 of those cases—about 41%—involved confirmed CSAM.  
  • Newly released data from Ireland, published in a report by our partners at EDRi (see page 34), shows more inaccuracies. In 2020, Irish police received 4,192 reports from the U.S. National Center for Missing and Exploited Children (NCMEC). Of those reports, 852 (20.3% of the total) were confirmed as actual CSAM, 409 (9.7%) were deemed “actionable,” and 265 (6.3%) were “completed” by Irish police.

Despite the insistence of boosters and law enforcement officials that scanning software has magically high levels of accuracy, independent sources make it clear: widespread scanning produces significant numbers of false accusations. Once the EU votes to start running the software on billions more messages, it will lead to millions more false accusations. These false accusations get forwarded on to law enforcement agencies. At best, they’re wasteful; they also have the potential to produce real-world suffering.
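
As a rough worked example using only the Irish figures cited above, and treating every referral that was not confirmed as CSAM as a false or otherwise unusable report (our framing, not a finding stated in the EDRi report), the share of forwarded referrals that did not pan out is:

\[
\frac{4{,}192 - 852}{4{,}192} \approx 0.797 \approx 80\%
\]

If anything close to that rate held once scanning were extended to billions of private messages across the EU, even a small proportion of flagged content would translate into an enormous volume of innocent people's communications landing on police desks.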

The false positives cause real harm. A recent New York Times story highlighted a faulty Google CSAM scanner that wrongly identified two U.S. fathers of toddlers as being child abusers. In fact, both men had sent medical photos of infections on their children at the request of their pediatricians. Their data was reviewed by local police, and the men were cleared of any wrongdoing. Despite their innocence, Google permanently deleted their accounts, stood by the failed AI system, and defended their opaque human review process. 

With regard to the recently published Irish data, the Irish national police verified that they are currently retaining all personal data forwarded to them by NCMEC—including user names, email addresses, and other data of verified innocent users.

Growing the Haystack

Child abuse is horrendous. When digital technology is used to exchange images of child sexual abuse, it’s a serious crime that warrants investigation and prosecution. 

That's why we shouldn’t waste efforts on actions that are ineffectual and even harmful. The overwhelming majority of internet interactions aren’t criminal acts. Police investigating online crimes are already in the position of looking for a proverbial “needle in a haystack.” Introducing mandatory scanning of our photos and messages won’t help them zero in on the target—it will massively expand the “haystack.”

The EU proposal for a regulation also suggests mandatory age verification as a route to reducing the spread of CSAM. There’s no form of online age verification that doesn’t have negative effects on the human rights of adult speakers. Age verification companies tend to collect (and share) biometric data. The process also interferes with the right of adults to speak anonymously—a right that’s especially vital for dissidents and minorities who may be oppressed, or unsafe. 

EU states or other Western nations may well be the first to ban encryption in order to scan every message. They won’t be the last. Governments around the world have made it clear: they want to read peoples’ encrypted messages. They’ll be happy to highlight terrorism, crimes against children, or other atrocities if it gets their citizens to accept more surveillance. If this regulation passes, authoritarian countries, which often have surveillance regimes already in place, will demand to apply EU-style message scanning in order to find their own “crimes.” The list will likely include governments that attack dissidents and openly criminalize LGBT+ communities.

LEARN MORE

Tell the European Parliament: Stop Scanning Me

Joe Mullin

Better Regulating Drone Use Requires Communication, Not Surveillance

1 month 2 weeks ago

In 2018, Congress gave the Departments of Justice and Homeland Security sweeping new authorities to destroy or commandeer privately owned drones, as well as to intercept the data they send and receive. EFF objected to the Preventing Emerging Threats Act of 2018 (S. 2836, H.R. 6401) because, among other things, the bill authorized DOJ and DHS to “track,” “disrupt,” “control,” “seize or otherwise confiscate,” “mitigate” or even “destroy” unmanned aircraft that pose a “credible threat” to a “covered facility or asset” in the U.S.—without defining what many of those terms mean.

The definition of “credible threat” was left entirely to the discretion of DOJ and DHS. This means we have no real idea what the threshold would be in order to legally allow authorities to destroy your drone. And the term “covered facility or asset” was defined so broadly it could extend to all federal property. EFF was also concerned that the bill would authorize the government to “intercept” or acquire transmissions to and from the drone, which could be read to include capturing video footage sent from the drone—a major threat to journalists who use this technology.

Unfortunately, with very little public debate, the Preventing Emerging Threats Act of 2018 was included in the FAA Reauthorization Act of 2018 (PL 115-254), which passed Congress and was signed into law that same year. The one bright spot was that the authorities were set to expire in 2022, giving Congress another chance to define the relevant terms, provide transparency in the process, and determine the appropriate, limited authorities for these agencies, as well as their necessary safeguards.

The 2018 law was already too broad, exempting officials from following procedures that ordinarily govern electronic surveillance and hacking, such as the Wiretap Act, Electronic Communications Privacy Act, and the Computer Fraud and Abuse Act.

But somehow, the Administration’s current proposal, introduced in the Senate as “The Safeguarding the Homeland from the Threats Posed by Unmanned Aircraft Systems Act” (S. 4687), is even worse.

The proposal would give long-lasting, sweeping surveillance powers to multiple federal agencies, plus certain state and local law enforcement agencies, plus contractors, while also eliminating the limited safeguards that currently exist for electronic surveillance. In other words, the Administration proposes to give itself permanent, unfettered access to the communications of private drones, as well as to the drones themselves, without having to get a warrant or explain its actions to the public.

Such vague, unchecked government authority intrudes on the Fourth Amendment right to private electronic communications. It also raises significant First Amendment concerns. Reporters covering police activity or misconduct in an unspecified “public area” could have their footage seized without any due process. Professional photographers or filmmakers who happen to be in the wrong place at the wrong time filming bird migration near the U.S. border or even photographing a wedding close to a government outpost could have their equipment destroyed with no redress.

In some circumstances, the government may have legitimate reasons for engaging drones that pose an actual, imminent, and narrowly defined “threat.” But what the Administration is asking for is beyond reasonable. Whatever threat private drones may pose to public safety does not require handing the government, as well as contractors and private businesses, unfettered authority to destroy, commandeer, or eavesdrop on private drones. 

India McKinney

Spanish ISPs Fall Short of Robust Commitments to User Privacy in New Eticas’ Report

1 month 2 weeks ago

Spanish Internet Service Providers (ISPs) continue to fall short of robust transparency about their data protection and user privacy practices, with many failing to meet criteria that build directly on Spanish and EU data protection regulations.

While highlighting that internet companies in Spain need to step up their user privacy game, Eticas Foundation’s third edition of ¿Quien Defiende Tus Datos? (Who Defends Your Data?) Spain showed that Movistar (Telefónica) maintained a leadership position among companies evaluated, with a total of 18 out of 21 points. The ISP scored well in all evaluated criteria except for user notification. On the other hand, Habitaclia received the lowest score, with just 5.5 points.

All of Spain’s ISPs received credit in the privacy policy category, which covers crucial information companies should provide users about their data processing practices. ISPs made significant strides in this category in Eticas’ last report. Yet this year's edition shows companies have lost traction, improving in some parameters but losing credit for others. The balance between advances and gaps among Spain’s Internet companies shows there is still plenty of room for progress.

This year, Eticas checked public policies and documents of 15 Internet companies that handle user data in their day-to-day activities, including telecom providers, home sales and rental sites, and apps for selling second-hand goods. Eticas added three new companies to the report: the telecom provider Digi Spain Telecom, the second-hand goods app Vinted, and the startup Trovit.es, which offers deals for selling or renting homes, cars, and other products. Telecom provider Euskaltel is no longer in the ranking after its acquisition by MásMóvil.

This year’s study has also introduced new criteria. To earn a full star, companies’ privacy policies must state why and through which channels they collect user data. Considering the context of the COVID-19 pandemic and policies to combat the spread of the virus that involve mass collection of user data, Eticas pushed companies to commit to only sharing anonymized and aggregate data for policy, rather than law enforcement, purposes. The new report introduced a special red star to indicate whether ISPs went public with any specific data protection measure related to the pandemic. 

Vodafone was the only service provider to receive credit for both COVID-related data collection categories. The ISP published a specific data protection policy regarding data-sharing for COVID-19 control actions. The policy includes important safeguards, such as only sharing aggregate and anonymized data and respecting principles of proportionality and purpose limitation. The policy’s disclosure about data security, however, only mentions that Vodafone has put “adequate and appropriate security measures” in place, without providing details. The company should include more detail on the type of measures taken and their efficacy in preserving data privacy and security.

The summary of results is below.

Movistar, Som Conexió, Orange, and Vodafone were the only service providers credited for parameters beyond their privacy policies. Movistar and Vodafone earned scores for disclosing information on the legal framework authorities must follow to request user data, and which competent authorities can request access to users’ communications content and metadata. Movistar, Orange, and Vodafone also received credit for carrying out initiatives promoting user privacy, like the Telecommunications Industry Dialogue and the Global Network Initiative. Disappointingly, Movistar remains the only company that publishes periodic transparency reports with statistical information about government data requests. And Orange, which stood out in previous editions for committing to notify users about data requests, lost this credit in the new edition.

When it comes to ISPs’ privacy policies, there are ups and downs. Almost half of the 15 companies evaluated did not provide information about profiling and automated decision-making, failing to comply with disclosure standards set forth in Spain’s data protection legislation (which incorporates GDPR obligations). They have also fallen short on other parameters that build on the GDPR's transparency rules for user data processing. For example, almost one-third of featured companies did not disclose how long they store user data. Almost half of them failed to commit to notifying users about changes in their policies, to disclosing information about international data transfers, and to enabling users to consent to or opt out of such transfers. On the upside, all service providers share contact information for their officers in charge of compliance with data protection rules, and most of them let users know they can opt out of certain uses of their data.

Eticas’ report also highlights cases where the Spanish Data Protection Authority (AEPD) punished ISPs for mishandling user data. In the most notable example, the AEPD imposed a cumulative fine of EUR 8.5 million on Vodafone for several breaches of data protection and other regulations.

Companies must commit to robust data privacy policies, and be held accountable in their practices for protecting the data their customers have entrusted them with. This new report shows Spanish Internet companies still need to improve their public commitments to user privacy.

Eticas’ study is part of a series of reports across Latin America and Spain holding ISPs accountable to their users, and pushing companies to adopt policies and practices that provide solid data privacy safeguards.  

Veridiana Alimonti

Alaa Abd El Fattah Surpasses 200 Days of Hunger Strike as COP27 Summit Nears

1 month 2 weeks ago

We remain gravely concerned about the deteriorating health of Alaa Abd El Fattah, the British-Egyptian activist, technologist, 2022 EFF Award winner, and Amnesty Prisoner of Conscience. Alaa has now been on hunger strike at Wadi el Natrun Prison in Egypt for more than 200 days, and was reported this week as being “at death’s door” by the UK’s Independent.

World leaders, including UK Prime Minister Liz Truss, are set to gather soon in Egypt for the COP27 climate summit, despite the country’s ongoing crackdown on civil society, which UN experts have criticized as potentially jeopardizing “safety and full participation” at the summit.

Meanwhile, Alaa remains in prison, only able to communicate with his family through letters and monthly twenty-minute visits. A recent Facebook post from his sister Mona details the government’s petty and biased treatment of Alaa, such as withholding a radio that every other prisoner on his block was allowed, and confiscating a letter to his mother. He remains a scapegoat—a symbol to deter others from fighting for their basic rights.

The Egyptian government is ultimately responsible for setting Alaa free. But Alaa is a British citizen and the UK government should also intervene, immediately, to do everything it can to uphold Alaa’s human rights and secure his freedom. In her last days as foreign secretary, now-PM Liz Truss called Alaa’s case a “high priority” and affirmed a commitment to secure his release.

As COP27 draws nearer, Alaa’s sister has initiated a sit-in outside of the UK foreign office, with organizations including Reporters Without Borders and English PEN demonstrating their support. Meanwhile, the campaign to free Alaa continues to call on British citizens to write to their MPs and urge them to call for his release (note: this is an external and not an EFF campaign).

We are thrilled to be able to honor Alaa with the 2022 EFF Award for Democratic Reform Advocacy for his contributions to technology and society. But without his freedom, the honor is bittersweet. We once again urge Liz Truss to do everything in her power to secure Alaa’s release.

Jillian C. York